AI Predictions for 2026
2026 is all about making rad shit with AI
2025 was the year AI learned to code. 2026 is the year everyone else figures out what that means.
The tools that felt like magic tricks a year ago are now table stakes. 85% of developers now use AI coding tools regularly. Cursor crossed $500M ARR and a $10B valuation. The question isn’t whether AI can help you build software. The question is what happens when the barriers to building collapse entirely. When designers ship without engineers. When your kid builds an app over the weekend. When the people who were “non-technical” last year are shipping products this year.
Here’s where I’m placing my bets.
1. The Rise of the Full-Stack Designer
For decades, designers, engineers, and product managers have worked together to build software. Design and engineering demanded deep, orthogonal specialization, so very few people possessed enough of both skill sets to deliver results without the other. PMs played a critical role connecting those dots, bringing in expertise spanning business, research, and program management. But that’s all changing. The hegemony of the silo is collapsing.
Designers, who have long felt limited by the game of telephone between ideas and execution, are embracing AI coding with incredible enthusiasm. Tools like Lovable are explicitly designed for designers, described as “the most beginner-friendly AI coding tool” that excels at “building out stylized user interfaces that seamlessly transition into working prototypes.” Engineers and PMs are embracing these tools too, but it’s designers who will benefit most in 2026. We’re about to see a new class of 10x designers who integrate the silos. The full-stack designer is the hero of 2026. Find them on your teams, empower them to build, and watch the magic happen.
2. Deeply Personal Assistants
2026 is the year AI assistants finally cross over. For ages, we’ve been sold on the idea of JARVIS but have been stuck with frustrating, incomplete experiences that feel like Dragon Dictate from the ’90s. That changes this year. Gemini, Claude, and ChatGPT will integrate deeply with wells of personal context while learning to use tools directly and on your behalf. This will lead to personal assistant experiences that finally deliver the dream.
The bold call here is on Apple. According to MacRumors, Apple is finalizing a deal worth approximately $1 billion per year to license Google’s Gemini for a reimagined Siri. The new assistant will use a custom version of Gemini with 1.2 trillion parameters, while personal data stays protected through on-device processing and Apple’s Private Cloud Compute. The launch is targeted for spring 2026 with iOS 26.4. Siri will be useful by the end of 2026. Maybe not as useful as Gemini on Pixel. But useful.
3. WYSIWYG Development
My use of AI-enabled IDE coding tools like Cursor and Antigravity is changing. In the beginning, I looked at every code change and approved each one. As I learned to trust the models, that evolved into mostly interacting with the agent chat and multi-agent orchestration system while largely ignoring the other panels in the experience. I rarely look at the code at all anymore, other than to double-check something I’m curious about. When I use CLI-based tools like Gemini CLI, Codex, and Claude Code, I embrace this approach even more, but something seems missing with a terminal interface. The most exciting moments now are when my agent opens a preview of the thing we’re building, either on the web or in a phone emulator, and directly modifies the code based on what’s on screen.
In 2026, we’re going to see the return of the WYSIWYG interface for creation. Dreamweaver is back, baby. Bolt.new already combines visual editing with AI code generation, letting you build full-stack apps entirely in-browser. Lovable offers Figma-to-code and one-click deployment. That wasted space in your IDE where you used to look at code will become the surface for creation itself: a place where you’ll nudge the UX to where you want it, annotate the changes you want, and flag comments to the AI building the code behind the scenes. This is the revolution required for AI coding tools to cross over to the mainstream.
4. Just-in-Time UX
Chat has been the predominant interface for AI, and most interface innovations have had to fit into the metaphor of chat. But there are interesting examples of linking useful UI components into chat. Take the Photoshop integration in ChatGPT: when you want to adjust something like a photo’s exposure, a Photoshop exposure control appears directly in the conversation. This is a peek at where these UIs are headed.
In November 2025, Anthropic and OpenAI partnered to release the MCP Apps Extension, a specification that brings standardized interactive UI capabilities to the Model Context Protocol. MCP has been called the “USB-C for AI,” and it now enables agents to serve rich UIs like charts, maps, and forms as part of tool calls. The Agentic AI Foundation also launched under the Linux Foundation, backed by OpenAI, Anthropic, Google, Microsoft, and AWS. This paradigm will accelerate this year.
This will lead to a new class of apps that self-compose UIs in real-time. They’ll draw components for existing patterns from vast libraries of capabilities, but when a novel need arises, they’ll create bespoke experiences to match real-time user requests. The very definition of software will forever change, and 2026 is when we see the beginnings of this evolution.
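To make the pattern concrete, here’s a rough sketch of an MCP server that pairs a tool with a UI it can serve into the chat surface. It uses the official MCP TypeScript SDK, but the UI wiring follows my reading of the MCP Apps draft, so treat the ui:// URI and the _meta key as illustrative rather than gospel.

```typescript
// A server exposing one tool plus the HTML "app" a host can render for it.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "flight-tracker", version: "0.1.0" });

// The UI lives at a ui:// resource the host fetches and sandboxes.
server.registerResource(
  "flight-map",
  "ui://flight-tracker/map", // illustrative URI scheme from the Apps draft
  { mimeType: "text/html" },
  async (uri) => ({
    contents: [{
      uri: uri.href,
      mimeType: "text/html",
      text: "<!doctype html><div id=\"map\">live flight map renders here</div>",
    }],
  })
);

// The tool returns data for the model; _meta points the host at the UI.
server.registerTool(
  "track_flight",
  {
    description: "Live position for a flight number",
    inputSchema: { flight: z.string() },
    _meta: { "ui/resourceUri": "ui://flight-tracker/map" }, // illustrative key
  },
  async ({ flight }) => ({
    content: [{ type: "text", text: `Now tracking ${flight}` }],
  })
);

await server.connect(new StdioServerTransport());
```

The host, not the server, decides how to sandbox and render that resource, which means the same tool can light up in any Apps-aware client rather than being bespoke to one chat product.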
5. Bespoke Software
In December, my son wanted a motivational quotes app for his phone and was disgusted to find that every option in the app store was not just paid but subscription-based. What he wanted was pretty simple: a collection of quotes, presented beautifully in an app and as a widget he could keep on his home screen. After a quick tutorial on Antigravity and a few hours of tweaking, he had exactly the app he wanted.
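For flavor, the heart of an app like that is almost embarrassingly small. Here’s a sketch of the core logic with hypothetical names: a quote list plus a date-stable picker, so the app and its widget always agree on today’s quote with no backend at all.

```typescript
// Core of a bespoke quotes app: deterministic quote-of-the-day.
// Names and quotes are illustrative, not from the actual app.
const QUOTES: string[] = [
  "Whether you think you can, or you think you can't -- you're right.",
  "The best way out is always through.",
  // ...a few hundred more, curated with the AI's help
];

function quoteForDate(date: Date): string {
  // Days since the Unix epoch -> stable index into the list.
  const day = Math.floor(date.getTime() / 86_400_000);
  return QUOTES[day % QUOTES.length];
}

console.log(quoteForDate(new Date()));
```

Everything else is presentation, which is exactly the part these AI tools are best at generating.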
He’s not alone. Gartner projects that by 2026, low-code development tools will account for 75% of new application development, up from 40% in 2021. The low-code market is growing from $37 billion in 2025 to a projected $264 billion by 2032. Non-technical founders are already finding success: one growth marketer built a women’s safety app entirely using AI tools, reaching 10,000+ users and $456K ARR with zero engineering background.
As these tools become even more accessible, the build-vs-buy equation will change. People will build the thing they want, or tweak the thing they have into exactly what they want. At first, we’ll see an explosion of new apps from a wide range of people. This will put pressure on app stores and traditional distribution systems. Analysts predict integrated AI platforms will render 60% of single-purpose apps obsolete. In 2026, it’ll be almost impossible to rise above the noise, and if your app depends on a paywall for thinly differentiated features, the flood of free alternatives should make you nervous. Where there is friction, users will innovate.
Problems that previously had too few users to justify building for will now be viable. At the limit, even a user base of one will be enough: a revolution of bespoke software solving even the smallest of needs, including one-time-use experiences. Wild.
6. Lots of Hype for World Models
Last year I predicted the ongoing expansion of LLMs with multi-modal capabilities, and we saw that trend take shape. As these systems understand more about our world, they tend to work better across a wider variety of use cases. The difference between a world model and an LLM is nuanced, but fundamentally world models are designed to master cause and effect, physics, and the consequences of actions across a wide range of domains. We see these attributes in the way Veo seems to understand how water flows and reacts to a range of inputs, and in how sounds and music match their visual complements.
The race is already heating up. Runway’s Gen-4.5, released December 2025, is explicitly marketed as moving beyond “video generation” toward “world models that understand physics.” Fei-Fei Li’s World Labs launched Marble in September 2025, generating explorable 3D worlds from text and images. NVIDIA’s Cosmos world foundation models have been downloaded over 2 million times. Yann LeCun raised €500M for AMI Labs focused on the same goal. In 2026, multi-modality takes another large leap forward, leading to a lot of hype for world models. There’s debate across the industry about the right approach, but regardless of the eventual impact, the hype will be loud.
7. AI Software Teams & Agent Swarms
Teams of use-case-specific specialized agents are quickly becoming a common approach to development. Gartner reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025. The orchestration software market is projected to hit $8.7 billion by 2026. At the tail end of 2025, we’re seeing clever agent orchestration systems drive step changes in system reliability and capability.
In 2026, the impact of agent teams and agent swarms will be profound. Gartner predicts 40% of enterprise applications will embed AI agents by end of 2026, up from less than 5% in 2025. Smart software teams will optimize agentic integrations across providers and ask systems to debate options ahead of implementation. Planning mode will transform to include multiple agents spanning a range of expertise. Token use will explode, as will the quality of systems we build.
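What does “debate ahead of implementation” look like in code? A minimal sketch, with a stub `callModel` standing in for whichever provider SDK you actually use: fan a task out to specialist agents in parallel, then have a judge merge the proposals.

```typescript
type Agent = { name: string; system: string };

// Stub: swap for a real provider SDK call. Hypothetical, for illustration.
async function callModel(system: string, prompt: string): Promise<string> {
  return `[${system.slice(0, 20)}...] draft response for: ${prompt.slice(0, 40)}`;
}

const panel: Agent[] = [
  { name: "security", system: "You review plans for security risk." },
  { name: "performance", system: "You review plans for latency and cost." },
  { name: "simplicity", system: "You argue for the smallest viable design." },
];

async function debate(task: string): Promise<string> {
  // Round 1: every specialist drafts a plan in parallel.
  const proposals = await Promise.all(
    panel.map(async (a) => `${a.name}:\n${await callModel(a.system, task)}`)
  );
  // Round 2: a judge merges the competing drafts into one plan.
  return callModel(
    "You are the tech lead. Merge the strongest ideas into a single plan.",
    `Task: ${task}\n\nProposals:\n${proposals.join("\n\n")}`
  );
}

debate("Add offline sync to the notes app").then(console.log);
```

Note the cost structure: the token bill grows with the size of the panel and the number of rounds, which is exactly the explosion in token use I’m predicting.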
8. Real-Time Interactive Avatars
I predicted this last year and was just a tad early. Synthesia raised $200M at a $4 billion valuation in December 2025 and launched Synthesia 3.0 with their new Express-2 avatars featuring full-body gestures and state-of-the-art voice cloning. Their “Video Agents” for real-time interactive experiences are coming in early 2026 for Enterprise customers. Runway is building GWM-Avatars to simulate human behavior and is in active conversations with robotics firms and enterprises.
2026 is the year when these land, leading to virtual agent experiences that feel eerily natural. These agents will speak every language, draw on deep wells of specialized knowledge, and be empowered to act on your behalf. It’s the power of the deeply personal assistants from prediction #2, brought to life as characters. The interactions won’t feel completely human, but the illusion will be compelling nonetheless.
9. Generative 3D Assets, Environments and Worlds
Another prediction from 2025 that I’m pulling forward to 2026. This year, generative worlds will have a moment in the sun. 2025 was the year 3D Gaussian Splatting became production-ready for media and entertainment. Superman was the first major motion picture to use dynamic Gaussian splatting. World Labs’ Spark renderer was named one of the most influential libraries of 2025 by GitHub.
You’ll be able to prompt for fully immersive worlds that let you explore your imagination in delightful new ways. World Labs’ Marble already generates explorable 3D environments from text, images, or video and exports them as Gaussian splats, meshes, or videos. Many will seek out methods for making these experiences more efficient, repeatable, and shareable, leveraging capabilities such as 3DGS alongside parallel advances in AI-generated 3D assets and environments. Industry experts are “100% convinced that radiance field representations like Gaussian splatting are a fundamental imaging medium” and predict accelerated adoption in 2026. The full potential of these capabilities won’t be realized in 2026, but we’re going to see a ton of meaningful progress.
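As a taste of how approachable this already is on the web, here’s a sketch of rendering a splat capture with three.js and World Labs’ Spark. The API usage follows my reading of Spark’s docs, and the scene URL is a placeholder, so treat the details as illustrative.

```typescript
// Sketch: viewing a Gaussian splat scene in the browser with Spark + three.js.
import * as THREE from "three";
import { SplatMesh } from "@sparkjsdev/spark";

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.set(0, 0, 3);

// A splat capture exported from a tool like Marble (placeholder URL).
const splats = new SplatMesh({ url: "https://example.com/scene.spz" });
scene.add(splats);

renderer.setAnimationLoop(() => {
  splats.rotation.y += 0.002; // slow orbit to show off the volumetric scene
  renderer.render(scene, camera);
});
```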
10. Redefining the OS & On-Device AI
Low-cost, efficient small models are getting really good. Google’s Gemma 3 1B runs at 2,585 tokens per second on mobile GPU with only a 529MB footprint. Gemma 3 270M is designed specifically for on-device deployment with strong instruction-following out of the box. FunctionGemma enables edge agents that map natural language to executable API actions, running locally on phones, laptops, and small accelerators like NVIDIA Jetson Nano.
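One way to feel how real this is: MediaPipe’s LLM Inference API can run a small Gemma variant entirely in the browser. A sketch based on my reading of the docs, with the model path as an assumption (you bring your own converted model bundle):

```typescript
// Sketch: on-device generation in the browser via MediaPipe LLM Inference.
import { FilesetResolver, LlmInference } from "@mediapipe/tasks-genai";

async function main() {
  const genai = await FilesetResolver.forGenAiTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm"
  );
  const llm = await LlmInference.createFromOptions(genai, {
    baseOptions: { modelAssetPath: "/models/gemma3-1b-it-int4.task" }, // hypothetical path
    maxTokens: 256,
  });
  // After the one-time model download, nothing leaves the device.
  const reply = await llm.generateResponse("Draft a two-line haiku about edge AI.");
  console.log(reply);
}

main();
```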
Meanwhile, on-device compute continues to accelerate, with AI-enabled engines integrated into every modern smartphone and computer. With datacenter-scale compute struggling to secure enough power to meet exploding demand, we’ll see more inference pushed to end-user devices. The incentives meet the moment in 2026, leading to significant opportunities for on-device experiences.
Having worked on building Chrome OS, I find the prospect of rethinking how computing works at a fundamental level exciting. The ingredients to radically reinvent computing are here in 2026. The outcomes that land this year are unlikely to fully realize the potential, but we’re going to see the beginnings of entirely new computing systems show up in our lives.
11. Software Re-Defined Hardware
Another hardware trend I’m excited about is the impact of AI coding on the hardware already in our lives. Over the last 10 years, we’ve seen everything from our vacuums to our dishwashers become connected and smart. Unfortunately, most of these systems remain pitifully dumb: locked behind apps built by the lowest bidder, often without sustainable business models behind them. Why would someone pay for a subscription to an app for their dishwasher? It makes no sense. The result is a ton of abandonware and devices full of wasted utility, waiting to be unlocked by innovative users.
In 2026, hackers and hobbyists will take back their hardware to do amazing things. Arduino was acquired by Qualcomm and released the Uno Q. The Raspberry Pi AI Kit bundles M.2 HAT+ with Hailo AI acceleration. Running AI on a Raspberry Pi is now practical and reliable for offline, privacy-preserving systems. I’ve been waiting for someone to connect their JTAGulator to Claude Code and liberate devices that were previously locked down. Hardware was always hard. In 2026 these trends converge and hardware becomes less hard.
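To ground that: the glue an agent writes for you is often this small. Here’s a sketch using Node’s serialport package to tap the debug UART many of these appliances expose; the device path, baud rate, and line format are all assumptions about your particular gadget.

```typescript
// Sketch: reading status lines from an orphaned "smart" appliance's UART.
import { SerialPort } from "serialport";
import { ReadlineParser } from "@serialport/parser-readline";

// Device path and baud rate are assumptions; check your hardware.
const port = new SerialPort({ path: "/dev/ttyUSB0", baudRate: 115200 });
const lines = port.pipe(new ReadlineParser({ delimiter: "\r\n" }));

lines.on("data", (line: string) => {
  // Log every status line; from here you can re-publish to MQTT,
  // trigger notifications, or build the app the vendor never shipped.
  console.log(`[device] ${line}`);
});

port.on("error", (err) => console.error("serial error:", err.message));
```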
12. Potpourri
As I was writing these, I kept finding things I wanted to add to my predictions. AI is affecting everything we do, and it’s impossible to choose where the impact will be greatest. Rather than tossing the rest of my thoughts in the trash, I’m going to mix them into a stew of half-baked predictions:
Robotics accelerates, though most of the gains are for industrial use cases. The global industrial robot market hit an all-time high of $16.7 billion. Humanoid robot costs are projected to fall from $35K to $13-17K per unit over the next decade.
Shopping becomes deeply personal as agents finally help shop for us. OpenAI announced deals with Target, Instacart, and DoorDash. Amazon launched “Buy For Me.” Visa plans AI-driven purchases inside chatbots as early as Q1 2026. Morgan Stanley predicts nearly half of online shoppers will use AI shopping agents by 2030. Just yesterday, Google announced an industry alliance for agentic shopping called UCP. This is happening in 2026.
We see AI within games at all levels, leading to fun new styles of gameplay. 90% of game developers are already using AI in their workflows, according to Google Cloud research. The AI-in-gaming market is projected to grow from $3.28 billion to $51 billion by 2033. But there will still be backlash, like what we saw with Expedition 33 last year.
Entry-level software jobs become scarce, except among multi-faceted AI-native builders. Entry-level hiring at the top 15 tech firms fell 25% from 2023 to 2024. Junior developer postings are down 60% since 2022. Salesforce announced it would hire “no new engineers” in 2025. The path forward is becoming an AI-native builder who can leverage these tools, not compete with them.
I write to think. This article isn’t just a list of predictions; it’s how I force myself to synthesize everything I’m seeing into a coherent view of where we’re headed. The act of committing these ideas to writing sharpens them. It exposes the gaps in my logic. It makes me defend positions I might otherwise hold loosely.
But I don’t have this figured out. If you think I’m wrong about something, tell me. If you see a trend I’m missing, I want to hear it. If one of these predictions strikes you as naive or overly optimistic, challenge me. The best thinking happens in conversation, not isolation.
The common thread across all twelve predictions: the builders win. Not the people who wait to see what happens. Not the people who debate whether AI is overhyped. The people who pick up the tools and start making things.
I build things to figure out how they work. That’s how I learned AI. That’s how I learned photography. That’s how I teach my kids. And I write to think through what I’m learning. That’s why this Substack exists.
Come December, I’ll grade myself on each of these publicly. Until then, I’m building. What about you?