<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Build Rad Shit]]></title><description><![CDATA[Building rad shit people love. Product lessons and experiments from 18+ years of shipping products at Google—and the side quests I build to learn.]]></description><link>https://trond.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!xujV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70908135-6a10-4d6a-ba00-63681e797be2_1024x1024.png</url><title>Build Rad Shit</title><link>https://trond.ai</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 02:25:00 GMT</lastBuildDate><atom:link href="https://trond.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Trond Wuellner]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[radshit@trond.ai]]></webMaster><itunes:owner><itunes:email><![CDATA[radshit@trond.ai]]></itunes:email><itunes:name><![CDATA[Trond Wuellner]]></itunes:name></itunes:owner><itunes:author><![CDATA[Trond Wuellner]]></itunes:author><googleplay:owner><![CDATA[radshit@trond.ai]]></googleplay:owner><googleplay:email><![CDATA[radshit@trond.ai]]></googleplay:email><googleplay:author><![CDATA[Trond Wuellner]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What my daughter taught me about NotebookLM]]></title><description><![CDATA[NotebookLM isn&#8217;t about finding your learning style. 
It&#8217;s more interesting than that.]]></description><link>https://trond.ai/p/what-my-daughter-taught-me-about</link><guid isPermaLink="false">https://trond.ai/p/what-my-daughter-taught-me-about</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 30 Mar 2026 14:32:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6Lma!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Lma!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Lma!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6Lma!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6Lma!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6Lma!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6Lma!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6439569,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/192381118?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Lma!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6Lma!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb407b7b-b6e1-41a9-b09c-630f3470150f_2752x1536.png 848w, 
" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Over winter break, my 12-year-old called me over to look at her laptop.</p><p>She was studying for a quiz on a book she&#8217;d been reading. She had flashcards. And a podcast about the book &#8212; two AI hosts walking through the themes, the characters, the things that would probably show up on the test. She&#8217;d made both herself using NotebookLM. I didn&#8217;t even realize she knew about the product. Which is funny, because I helped build the Audio Overviews feature she was showing me.</p><p>It turns out NotebookLM is very popular at her school, among both students and teachers. She had been using it all semester to build herself a study system. On her own. Which is pretty cool.</p><p>That&#8217;s the moment I understood how impactful NotebookLM can be.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>When we were developing Audio Overviews at Google Labs, we knew it was a novel way to help people engage with their content. The immediate reaction when we shipped was delightful &#8212; people were surprised that such lifelike audio was possible. But the real impact turned out to be something we felt more than predicted: it reached people who felt they learned best by listening. People who&#8217;d been handed PDFs and book chapters their whole lives, formats they struggled to connect with.</p><p>That realization pushed the team further. Mind maps for content that benefits from clearly outlined structure. Slide decks for information that combines visual and written information. Videos for content that benefits from a deeper telling of the story or motion to bring ideas to life. Flashcards to help people test themselves and secure knowledge.</p><p><strong>Builder&#8217;s note:</strong> This isn&#8217;t a feature roadmap built around &#8220;what&#8217;s technically cool.&#8221; Each output format is a bet that the same information, delivered differently, becomes useful to a different person or within another context. Google&#8217;s mission is to organize the world&#8217;s information and make it accessible and useful. NotebookLM is what &#8220;useful&#8221; looks like when you take it seriously.</p><div><hr></div><h2><strong>What NotebookLM actually is</strong></h2><p>Most people I talk to think of it as a chatbot for their files. Upload a PDF, ask it questions. That&#8217;s not wrong, but it&#8217;s missing most of what makes it interesting.</p><p>NotebookLM is a research and learning environment grounded entirely in sources you choose. You bring the material; the system stays inside it. Every answer comes with citations back to your specific sources, which massively reduces hallucinations. That constraint is the whole point.</p><p>The interface has three panels. Sources on the left: everything you&#8217;ve uploaded or linked. Chat in the middle: ask questions, get answers, save the good ones as notes. Studio on the right: where you generate outputs to make your information more useful.</p><p>You can put in: PDFs, Google Docs, websites, YouTube URLs, audio files, raw text, even your own notes. I find it helpful to share voice recordings of my own disorganized thoughts about the topic. The system does the organization automatically, which feels like magic.</p><p>What you get out: Audio Overviews, Video Overviews (in three formats, including a Cinematic mode that is genuinely stunning), Mind Maps, Infographics, Slide Decks, Flashcards, Quizzes, Briefing Documents, Study Guides, FAQs. All grounded in your sources, all completely different ways into the same material.</p><div><hr></div><h2><strong>The learning styles myth</strong></h2><p>Before I get into how to use NotebookLM, I want to flag something.</p><p>The obvious framing for a tool like NotebookLM is: figure out your learning style, pick the matching output format. Visual learner? Mind map. Audio learner? Podcast. You get the idea.</p><p>That framing is wrong. 
Not intuitively wrong &#8212; it feels right, which is why over 70% of educators still believe it. Scientifically wrong. The research on &#8220;learning styles&#8221; as fixed, matchable preferences has been thoroughly debunked. A 2024 meta-analysis across more than 1,700 students found that matching teaching style to learning style had no meaningful impact on performance. The effect size is 0.04. Essentially zero.</p><p>What the research does support is something more interesting: combining formats builds better understanding than any single format alone. This is called Dual Coding Theory. When you pair verbal information with visual information, the two memory systems reinforce each other. You retain more. You understand deeper.</p><p>My daughter didn&#8217;t pick her learning style. She made flashcards <em>and</em> a podcast. Both. That&#8217;s Dual Coding in action, figured out by a 12-year-old without anyone explaining the theory to her.</p><p>So the tips below aren&#8217;t about finding your type. They&#8217;re about using the tool the way the science suggests learning actually works.</p><div><hr></div><h2><strong>Six tips for getting started</strong></h2><p><strong>1. Start with something you actually need to understand</strong></p><p>Don&#8217;t test NotebookLM with a random PDF just to see what happens. Pick something you&#8217;re genuinely trying to learn &#8212; a topic at work, a book you&#8217;re reading, a subject your kid is studying for a test. The tool works best when you have a real question. Generic inputs produce generic outputs. Here are a few topics already set up as notebooks if you want a running start:</p><ul><li><p><a href="https://notebooklm.google.com/notebook/f7607d7a-584c-4f35-96fc-f6815c573a6c">Introduction to NotebookLM</a></p></li><li><p><a href="https://notebooklm.google.com/notebook/505ee4b1-ad05-4673-a06b-1ec106c2b940">Parenting Advice for the Digital Age</a></p></li><li><p><a href="https://notebooklm.google.com/notebook/40b0bb3f-afa6-49b2-959f-d91fb0a91a3b">Jane Austen: The Complete Works</a></p></li><li><p><a href="https://notebooklm.google.com/notebook/780a38ee-d0a6-4fb1-b255-aa03c8d67dce">Secrets of the Super Agers</a></p></li></ul><p><strong>2. Add more sources than you think you need &#8212; and let NotebookLM find them</strong></p><p>One document produces one-document results. A YouTube lecture, two articles, and a PDF on the same topic produces something much richer &#8212; the AI can find connections and disagreements across sources that you wouldn&#8217;t catch reading them separately.</p><p>Here&#8217;s the part most people miss: you don&#8217;t have to find the sources yourself. Hit the <strong>Discover</strong> button in the Sources panel, describe what you&#8217;re trying to learn, and NotebookLM searches the web and returns curated sources with annotated summaries. Add the ones that look useful with one click. That&#8217;s how I built the notebook backing the research in this article &#8212; I described the topic, let it do the research, and had a solid set of sources in a few minutes.</p><p><em>[&#128279; You can explore the full notebook here: <a href="https://notebooklm.google.com/notebook/cba7293b-896a-4958-87a7-230a2cedd6bb">The Learning Styles Myth</a>]</em></p><p><strong>3. Try the Audio Overview first</strong></p><p>It&#8217;s the fastest way to understand what NotebookLM pulled out as the key ideas. Even if you don&#8217;t normally listen to things, run it once before generating anything else. 
It surfaces connections and frames the material in a way that makes every subsequent output more useful.</p><p><em>[&#127911; Listen: Audio Overview &#8212; The Learning Styles Myth]</em></p><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;d198e30e-ace3-4d41-975c-ab496d52e4dc&quot;,&quot;duration&quot;:1558.1519,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p><strong>4. Don&#8217;t stop at one format</strong></p><p>This is the tip that changes how you use the tool. Generate the Audio Overview. Then generate the Mind Map. If the content is complex, try the Cinematic Video. The learning styles research says picking your &#8220;type&#8221; doesn&#8217;t help. The Dual Coding research says layering formats does. Use both.</p><p><em>[&#128506;&#65039; Mind Map &#8212; Key concepts from the research]</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!payx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!payx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 424w, https://substackcdn.com/image/fetch/$s_!payx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 848w, https://substackcdn.com/image/fetch/$s_!payx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 1272w, https://substackcdn.com/image/fetch/$s_!payx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!payx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png" width="1456" height="1099" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1099,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:709818,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/192381118?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!payx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 424w, https://substackcdn.com/image/fetch/$s_!payx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 848w, https://substackcdn.com/image/fetch/$s_!payx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 1272w, https://substackcdn.com/image/fetch/$s_!payx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F820e208d-a746-4b74-aab8-2a05d0d45c96_4288x3237.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>[&#128202; Infographic &#8212; Beyond the Myth: A Science-Based Guide]</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bQk0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bQk0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!bQk0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!bQk0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bQk0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bQk0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6413955,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/192381118?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bQk0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!bQk0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!bQk0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!bQk0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff357a4eb-67d0-4689-937c-acbc719118a5_2752x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line 
x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>[&#128253;&#65039; Video Overview &#8212; <a href="https://notebooklm.google.com/notebook/cba7293b-896a-4958-87a7-230a2cedd6bb?artifactId=828ce9cb-d57f-44b6-acef-726de066d1b9">Cinematic Video: The Learning Styles Myth</a>]</em></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;6276cef6-ad4d-4817-b653-1e745862bbad&quot;,&quot;duration&quot;:null}"></div><p><em>[&#128209; Slide Deck &#8212; <a href="https://notebooklm.google.com/notebook/cba7293b-896a-4958-87a7-230a2cedd6bb?artifactId=616277c2-f013-4e89-ab34-fd23a2921da2">Download the full presentation</a>]</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://notebooklm.google.com/notebook/cba7293b-896a-4958-87a7-230a2cedd6bb?artifactId=616277c2-f013-4e89-ab34-fd23a2921da2" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nkQ5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!nkQ5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!nkQ5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!nkQ5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nkQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1270770,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://notebooklm.google.com/notebook/cba7293b-896a-4958-87a7-230a2cedd6bb?artifactId=616277c2-f013-4e89-ab34-fd23a2921da2&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/192381118?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nkQ5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!nkQ5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!nkQ5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!nkQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9264baa1-6ab5-4505-bfe0-f11e27440f80_1376x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>5. Use the Steering Prompt</strong></p><p>Every output in the Studio panel has a customization option before you generate. You can tell NotebookLM to focus on specific sources, approach the topic from a particular angle, or explain things at a certain level. Most people skip this and wonder why the output feels generic. &#8220;Focus on the disagreements between these sources&#8221; produces a completely different Audio Overview than the default. Try it.</p><p><strong>6. Save the good stuff to Notes</strong></p><p>When the chat surfaces an insight worth keeping, save it. Notes live inside your notebook alongside the sources. Your own thinking and the AI&#8217;s synthesis sit in the same place. Over time, the notes become a record of how your understanding of a topic developed &#8212; which is useful in a way that a chat history never is.</p><div><hr></div><h2><strong>The thing my daughter understood</strong></h2><p>She didn&#8217;t ask what kind of learner she was. She made flashcards to test herself and a podcast to listen to while she studied. She layered formats because that&#8217;s what felt right, and it turns out the science agrees with her instincts.</p><p>NotebookLM is good at a lot of things. The Cinematic Video is genuinely amazing &#8212; watch the embed above if you haven&#8217;t yet. 
The Mind Map makes complex source material navigable in a way that&#8217;s hard to describe until you&#8217;ve dragged a branch around and watched the rest reorganize.</p><p>But none of that is the point. The point is that the same information can be made useful in a dozen different ways, and most of us have been stuck with one or two of them our whole lives.</p><p>My daughter had a quiz to pass. She figured out how to prepare for it. You can do the same thing with whatever&#8217;s on your plate. Big sales call? Load up the client&#8217;s annual reports. Job interview? Drop in the job description and let Discover find you prep material. Presentation next week? Paste in your notes, record a voice brain dump, and watch it turn into slides. Even if it only breaks the writer&#8217;s block, that&#8217;s worth it.</p><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/p/what-my-daughter-taught-me-about?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Build Rad Shit! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/p/what-my-daughter-taught-me-about?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trond.ai/p/what-my-daughter-taught-me-about?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p>So, what are you trying to understand right now? Have you tried more than one format yet? 
I&#8217;m curious what you&#8217;re hoping to learn and what works for you!</p><p>&#8212; T</p>]]></content:encoded></item><item><title><![CDATA[It's awkward watching agents use computers]]></title><description><![CDATA[We need a new OS built for agents]]></description><link>https://trond.ai/p/its-awkward-watching-agents-use-computers</link><guid isPermaLink="false">https://trond.ai/p/its-awkward-watching-agents-use-computers</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Tue, 24 Mar 2026 14:27:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aANG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aANG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aANG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aANG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aANG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aANG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aANG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:850655,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/191948045?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aANG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!aANG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aANG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aANG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02a82b6f-5dd6-480f-99e4-0e736477f495_1376x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I had lunch with my friend Ronny at work a few weeks ago and we were (shocker!) talking about AI. Specifically, we were talking about how AI is absorbing more of the daily computing tasks that used to require a person sitting at a screen. Scheduling, research, drafting, monitoring, summarizing. The list grows every month.</p><p>That led us to a question: if AI agents are becoming the primary operators of computing systems, what happens to all the infrastructure we built so that <em>humans</em> could operate them?</p><p>Because that&#8217;s what operating systems actually are. They&#8217;re abstraction layers. Decades of engineering spent translating machine reality into something a human mind can understand. File systems, because humans think in containers and locations. Graphical interfaces, because humans perceive visually. Process scheduling, because humans experience time linearly. 
Every design decision in Unix, Windows, Android, ChromeOS and MacOS encodes endless assumptions about who&#8217;s sitting at the controls.</p><p>But what if the operator we&#8217;re building for is no longer a human?</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2><strong>15,000 tokens per second</strong></h2><p>Let&#8217;s take a quick aside. There is a company called Taalas that recently built an app called <a href="https://chatjimmy.ai/">Chat Jimmy</a>. It&#8217;s a demonstration of their main business: building AI models compiled directly into custom silicon. Not software inference running on a general-purpose chip. The model <em>is in the computer</em>.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!kn7K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09873902-4950-421d-a2fe-e7b0e6207a11_2752x1536.png" width="1456" height="813" alt="" loading="lazy"></figure></div>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09873902-4950-421d-a2fe-e7b0e6207a11_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2375756,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/191948045?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09873902-4950-421d-a2fe-e7b0e6207a11_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kn7K!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09873902-4950-421d-a2fe-e7b0e6207a11_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!kn7K!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09873902-4950-421d-a2fe-e7b0e6207a11_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!kn7K!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09873902-4950-421d-a2fe-e7b0e6207a11_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!kn7K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09873902-4950-421d-a2fe-e7b0e6207a11_2752x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Chat Jimmy operates consistently above 15,000 tokens per second. For context, human speech runs at roughly 3 tokens per second. At 15,000 tokens per second, an agent can reason through a complex problem, take multiple actions, and produce a result before a human has finished reading the first sentence of the last response. 
<p>If we can get hardware of this nature to a place where local models operate at similar speeds, something shifts. On-device models become the computer. And we don&#8217;t have an OS built for that world yet.</p><div><hr></div><h2><strong>The big inversion</strong></h2><p>It won&#8217;t be trivial to get 15,000 tokens per second running on a device you keep on your desk, but it will happen. And it&#8217;ll signal a shift in how we work. At current inference speeds, agents wait on compute. The OS doesn&#8217;t matter much because the model is the slow part. Get the model fast enough and the calculus flips. The agent isn&#8217;t waiting on the model. The model is waiting on the human. I already see some of this happening today.</p><p>Humans will be the rate-limiting constraint on our systems.</p><p>That&#8217;s a completely different world to design for. An operating system doesn&#8217;t need to be optimized for human-speed interaction anymore. It needs to be optimized for machine-speed agents that are supervised by humans who still move at human speed. Safety, reversibility, and trust enforcement become <em>the</em> central design problem.</p><p>I don&#8217;t think anyone has asked what an OS designed for this user inversion would look like yet.</p><div><hr></div><h2><strong>A sketch of the stack</strong></h2><p>I&#8217;ve been thinking about this question and sketching out an architecture that could be interesting. By no means is this thoroughly thought through, so think of it as an outline for what might work. If you&#8217;ve thought about this as well or have some ideas, reach out &#8212; I&#8217;d love to chat.</p><p><strong>Layer 0: Linux kernel.</strong> Start here and keep it. It handles hardware, networking, and process isolation. These are solved problems not worth re-solving. You can add eBPF programs on top to enforce capability rules at the kernel boundary. No kernel fork required.</p><p><strong>Layer 1: Hardware model.</strong> An on-device LLM minted into silicon that&#8217;s always on, always available, near-zero latency. The runtime talks to this chip in natural language. That&#8217;s not a poetic choice. If the model is the chip and human language is its instruction set, then system calls are just English. Every operation in the audit log is human-readable by default. LLMs printed into chips will have knowledge gaps, so this layer gets augmented with an online model for some tasks &#8212; but we&#8217;ll handle that later.</p><p><strong>Layer 2: Agent runtime.</strong> This is the interesting layer. It replaces all of traditional user space, the part of the stack that exists to mediate between the machine and a human user. Instead of mediating for humans, it manages agents: their context windows, their tool access, their permissions, their lifecycle.</p><p>A few key jobs it handles:</p><ul><li><p><em>Context management.</em> An agent&#8217;s working memory is its context window, not just RAM. The OS manages context as a first-class resource &#8212; what gets evicted, summarized, or persisted across sessions.</p></li><li><p><em>Tools instead of syscalls.</em> Instead of a syscall table, a tool registry. Agents call tools. The registry knows what each tool does, whether it&#8217;s reversible, what it costs, and whether human approval is required before execution.</p></li><li><p><em>Capabilities instead of permissions.</em> Unix permissions are a file ownership model.
That&#8217;s not the right primitive for agents. We need something closer to structured, delegatable, auditable capabilities.</p></li><li><p><em>Reversibility as infrastructure.</em> Right now, every application that wants undo has to build it from scratch. An agent OS should make reversibility a first-class primitive. A staging area for irreversible actions that waits for explicit human approval before executing. At 15,000 tokens per second, mistakes accumulate fast. (A rough sketch of these primitives follows this list.)</p></li></ul>
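<p>To make those jobs concrete, here&#8217;s a minimal Python sketch of what a tool registry with capability checks and a staging area for irreversible actions could look like. Every name and field in it is invented for illustration; treat it as a thought experiment in code, not a design.</p><pre><code>from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    reversible: bool          # can the runtime undo this action on its own?
    requires_approval: bool   # must a human sign off before it runs?

@dataclass
class AgentProcess:
    name: str
    capabilities: set = field(default_factory=set)   # delegated, auditable grants

class AgentRuntime:
    """Layer 2 sketch: tools instead of syscalls, capabilities instead of permissions."""

    def __init__(self):
        self.registry = {}    # tool registry, the analogue of a syscall table
        self.staging = []     # irreversible actions parked until a human approves
        self.audit_log = []   # every request recorded in plain language

    def register(self, tool):
        self.registry[tool.name] = tool

    def call(self, agent, tool_name, **args):
        tool = self.registry[tool_name]
        self.audit_log.append(f"{agent.name} asked for {tool_name} with {args}")
        if tool_name not in agent.capabilities:
            return "denied: agent holds no capability for this tool"
        if tool.requires_approval or not tool.reversible:
            self.staging.append((agent.name, tool_name, args))
            return "staged: waiting for human approval"
        return f"executed {tool_name}"   # reversible and pre-approved: run now

# The agent can read mail on its own, but an irreversible payment waits for a human.
runtime = AgentRuntime()
runtime.register(Tool("read_inbox", reversible=True, requires_approval=False))
runtime.register(Tool("send_payment", reversible=False, requires_approval=True))
assistant = AgentProcess("scheduler", capabilities={"read_inbox", "send_payment"})
print(runtime.call(assistant, "read_inbox", folder="today"))
print(runtime.call(assistant, "send_payment", amount=120))
print(runtime.staging)   # the action a human still has to approve or reject
</code></pre><p>The interesting design choice is the default: anything irreversible gets parked, and the approval queue becomes the one part of the system that runs at human speed.</p>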
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7db40050-61d3-4d0d-af9e-db0f628709fe_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2556260,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/191948045?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7db40050-61d3-4d0d-af9e-db0f628709fe_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!F6Rx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7db40050-61d3-4d0d-af9e-db0f628709fe_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!F6Rx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7db40050-61d3-4d0d-af9e-db0f628709fe_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!F6Rx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7db40050-61d3-4d0d-af9e-db0f628709fe_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!F6Rx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7db40050-61d3-4d0d-af9e-db0f628709fe_2752x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>What I do think is clear: every company building an OS right now is trying to figure out how to add AI into it. But the answer we&#8217;re looking for isn&#8217;t an existing paradigm with AI bolted on. It isn&#8217;t ChromeOS or MacOS or Android with a chat window. 
It&#8217;s something designed from scratch around the assumption that the thing doing the computing is an agent, and the human is there to supervise and interact with it &#8212; not to drive all the details of making it work.</p><p>That&#8217;s a different OS than anything that exists today.</p><div><hr></div><h2><strong>Why does this matter now?</strong></h2><p>The window is specific to this moment. Every day another AI system announces more capabilities for computer use: schedules, remote access, browser automation, skills, MCP servers. All of these systems are designed so agents can pretend to operate like people. We&#8217;re forcing computers to pretend to behave like humans in order to do things on computers. It&#8217;s a strange layer of friction we keep injecting because we&#8217;re using a wrench to hammer a nail.</p><p>At the same time, local inference is still too slow to run capable agents on-device for everything a modern OS relies on. Even the newest Qwen models on a tricked-out Mac Studio fall short of what we need for agents to run all of the details of these systems continuously. Until we close that gap, agents live in the cloud, on human-designed infrastructure, using agent harnesses to drive systems built for people.</p><p>Before long these constraints will be gone. Chips will hit 15,000 tokens per second and run efficiently on your laptop. Then on your phone. Then on your headphones. Capable local agents will become practical on every device. And everyone who wants to run those agents needs an OS built specifically to maximize that experience, or we&#8217;ll keep absorbing the friction of translating from computer to human and back to computer.</p><p>When that OS exists, it will shape what agents can do and how safe they are. If we keep heading down the road we&#8217;re on, with agents running by default on Linux and Windows &#8212; systems with no concept of a tool registry, no capability model for autonomous action, no staging area for irreversible decisions &#8212; we&#8217;re headed for problems.</p><p>The time to design this is before the hardware arrives. After it arrives, you&#8217;re retrofitting. 
Or maybe the agents will just do it themselves.</p><div><hr></div><h2><strong>What&#8217;s left to ponder?</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zoAr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zoAr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zoAr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zoAr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zoAr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zoAr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:389928,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/191948045?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zoAr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zoAr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zoAr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zoAr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b0c626-68c9-4bdb-8748-30ac96b5dbf4_1376x768.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div 
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Because I&#8217;m using this as a way to write-to-think, I&#8217;m very much missing a lot of detail. A few gaps I&#8217;m genuinely curious about:</p><p><strong>The Tensor question.</strong> Google&#8217;s Tensor chips already do pretty impressive on-device Gemini Nano inference. Is it close enough to what Layer 1 needs, and how well-suited is this approach to being what we want exposed to software? This matters for whether we&#8217;re talking about new silicon or new software on existing silicon.</p><p><strong>The safety research.</strong> The capability and reversibility model I&#8217;ve sketched is conceptually interesting, but I don&#8217;t know how much formal work exists on verifying capability systems for LLM agents. This is probably a known problem in security research that I haven&#8217;t fully worked through yet.</p><p><strong>The device form factor.</strong> A personal AI server &#8212; a small always-on box that runs your agents locally &#8212; seems like a natural first hardware expression of this. But I&#8217;m not sure if that&#8217;s the right starting point, or if this begins as a server product and migrates to the edge over time. What do you think?</p><div><hr></div><p>I find this compelling enough to think more about, which is why I wrote it down.</p><p>If you&#8217;re working on any part of this stack &#8212; hardware, runtime, capabilities, reversibility &#8212; I&#8217;d love to know. And if I&#8217;ve missed something obvious, tell me. What&#8217;s the right device form factor for this? Does it start in your living room, or in a data center?</p><p>&#8212; T</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Make your AI agent work while you sleep]]></title><description><![CDATA[A step-by-step guide anyone can follow to build rad shit with proactive AI agents]]></description><link>https://trond.ai/p/make-your-ai-agent-work-while-you</link><guid isPermaLink="false">https://trond.ai/p/make-your-ai-agent-work-while-you</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Tue, 10 Mar 2026 15:21:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Fpjv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Fpjv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Fpjv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Fpjv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Fpjv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Fpjv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Fpjv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8137768,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/190474074?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Fpjv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Fpjv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Fpjv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Fpjv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59cccd05-38ba-446b-a32a-79dea1937b96_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>One of my readers sent me an email last week with the subject line &#8220;<em>Building Semi-Rad Shit</em>.&#8221;</p><p>He&#8217;d been inspired by something I wrote and stayed up late putting it to work. He trained Gemini to review solar site title insurance exceptions. Then he had it pull utility hosting capacity maps, find available parcels, and cross-reference them against where the grid has room. 
A list of viable solar development sites, ranked by proximity to power lines. In a few hours.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><blockquote><p>&#8220;Last time I did site identification a few years ago,&#8221; he wrote, &#8220;I pulled the map data off the utility website into a database, then matched it with parcel data from each county by hand... this time I got a solid list in only a few hours!&#8221;</p></blockquote><p>Then, near the bottom of his email, he hit the wall.</p><blockquote><p>&#8220;It&#8217;s a little annoying that I can&#8217;t just have it do things offline, and it can&#8217;t send me emails once it&#8217;s done the work periodically.&#8221;</p></blockquote><p>Almost every builder I know hits this wall eventually. You do something impressive with AI, you see what&#8217;s possible, and then you realize: your AI only works when you&#8217;re there asking it questions. It goes to sleep the moment you close the tab.</p><p>This post is about how to take that next leap with AI.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2><strong>The gap between an assistant and an agent</strong></h2><p>Most AI tools are reactive. You prompt, they respond. You close the tab, they stop existing. That&#8217;s useful. But it&#8217;s not the same as having something working <em>for</em> you all the time.</p><p>A proactive agent is different in one specific way: it has a schedule. It wakes up, does a job, and reaches out to you if it finds something it needs your input on. If nothing&#8217;s worth saying, you hear nothing. 
You only know it&#8217;s running because, every so often, your phone buzzes with something worth your attention.</p><p>The first time that happens, something profound clicks.</p><p>The good news is that three ingredients are all you need:</p><ol><li><p><strong>A schedule:</strong> a system to trigger your agent at regular intervals</p></li><li><p><strong>A task:</strong> a specific job with clear instructions about what to do each time</p></li><li><p><strong>A way to reach you:</strong> configuration for a chat message or an email</p></li></ol><p>Everything else is implementation details.</p><div><hr></div><h2><strong>Introducing OpenClaw</strong></h2><p>The most popular tool right now for building these sorts of agents is <a href="https://openclaw.ai/">OpenClaw</a>. It&#8217;s a self-hosted AI gateway you run on your own machine. It handles the scheduling, the agent sessions, and the channel connections needed to keep an agent running in a loop. It&#8217;s become very popular lately because it can be configured to do just about anything and has a rapidly growing ecosystem of skills you can use with it.</p><p>It&#8217;s not the only option for this, but it&#8217;s the one I&#8217;ve actually used to build everything I&#8217;m about to describe. It runs on Mac, Linux, any VPS, a Raspberry Pi, or even a Docker container you can run on your main computer. Monthly cost beyond the hardware is just your Anthropic, Gemini or OpenAI API usage. For light-to-moderate agent workloads, this can be had for about $5&#8211;20 per month.</p><p>Let&#8217;s get you started.</p><div><hr></div><h2><strong>Step 1: Install OpenClaw</strong></h2><p><strong>On a Mac</strong><br>I have my OpenClaw running on a Mac mini I wasn&#8217;t using, so let&#8217;s start with this option. If you&#8217;ve been trying to buy a Mac mini recently and found them out of stock, OpenClaw is part of why. There&#8217;s a wave of people buying them specifically as always-on personal AI servers. The M4 model idles at about 10 watts. Less than a light bulb to keep an agent running around the clock.</p><p>If you&#8217;re using a Mac for OpenClaw, you really want to dedicate that device to your setup so it&#8217;s not competing with you for resources. Once you have updates applied and a fresh install, open Terminal and run this:</p><pre><code><code>curl -fsSL https://openclaw.ai/install.sh | bash</code></code></pre><p>This one command detects how your system is set up, installs Node if needed, installs OpenClaw, and walks you through the next steps. If you have a Mac ready to go, this is your fastest way to get started.</p><p><strong>On a Virtual Private Server</strong><br>Another popular option is to spin up an inexpensive virtual server (aka a VPS). Hetzner is a good option with their CX22 plan at $4/month. It provides 2 vCPUs and 4GB of RAM, which is plenty for most basic OpenClaw setups. Here&#8217;s how to set it up:</p><p><strong>a. Create the server.</strong><br>Go to <a href="https://hetzner.com/">hetzner.com</a>, create an account, and create a new server. Pick <strong>Ubuntu 24.04</strong>, the <strong>CX22</strong> plan, and select a data center near you. When asked about SSH keys, you can skip it and Hetzner will email you a root password &#8212; or if you want to set one up, <a href="https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent">GitHub&#8217;s SSH key guide</a> is the clearest I&#8217;ve found. It&#8217;s worth doing once.</p><p><strong>b. 
Connect to your new server using SSH.</strong><br>Open terminal and type this:</p><pre><code><code>ssh root@YOUR_SERVER_IP</code></code></pre><p><strong>c. Run the installer.</strong><br>This single command sets up OpenClaw, creates a firewall, configures auto-start on boot, and implements Tailscale for secure remote access:</p><pre><code><code>curl -fsSL https://raw.githubusercontent.com/openclaw/openclaw-ansible/main/install.sh | bash</code></code></pre><p>You don&#8217;t need to know what any of that means. The installer walks you through everything, including how to add your LLM API key (Anthropic, Gemini or OpenAI). Once you finish, you&#8217;ll have a running OpenClaw gateway.</p><p><strong>d. Verify.</strong><br>After setup, you should check that everything is working as expected. You can do that with these commands:</p><pre><code><code>openclaw doctor    # checks for config issues
openclaw status    # confirms the gateway is running</code></code></pre><p><code>gateway: running</code> means you&#8217;re done with this step and ready to go.</p><div><hr></div><h2><strong>Step 2: Connect Telegram</strong></h2><p><a href="https://telegram.org/">Telegram</a> is the fastest way to get your agent talking to you. It&#8217;s free, works on every platform, has no iMessage dependency, requires no phone number, and is the default for a reason.</p><p><strong>2a. Create a bot</strong></p><p>Open Telegram and search for <code>@BotFather</code>. Start a chat and run:</p><pre><code><code>/newbot</code></code></pre><p>Follow the prompts to pick a name and username for your bot. BotFather will give you a token that looks like:</p><pre><code><code>7412938456:AAHdqTgsH8m9a3Xe5lk0mBHZJzwp1234xxcd</code></code></pre><p>Copy that token. You&#8217;ll need it in the next step.</p><p><strong>2b. Add the token to OpenClaw config</strong></p><p>Open your OpenClaw config file (the onboarding wizard tells you where it lives, typically <code>~/.openclaw/config.json5</code>). I&#8217;m going to ask you to edit a few text files. If you&#8217;re on a Mac, you can open the files in TextEdit, which is pre-installed. On a VPS you&#8217;ll need to use vim, which is a bit trickier to learn and somewhat unintuitive. Google it and figure it out. I believe in you.</p><p>Anyway, once you figure out how to edit text files, add this to the config.json5 file:</p><pre><code><code>{
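  // JSON5 allows comments like this. botToken is the token @BotFather gave you
  // in step 2a; dmPolicy "pairing" ties direct messages to the pairing approval
  // flow in step 2c.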
  channels: {
    telegram: {
      enabled: true,
      botToken: "YOUR_TOKEN_HERE",
      dmPolicy: "pairing",
    },
  },
}</code></code></pre><p>Save the file, then restart the gateway:</p><pre><code><code>openclaw gateway restart</code></code></pre><p><strong>2c. Pair your Telegram account</strong></p><p>Open Telegram and send any message to your new bot. Then back in your terminal:</p><pre><code><code>openclaw pairing list telegram</code></code></pre><p>You&#8217;ll see a pairing code tied to your Telegram user ID. Approve it:</p><pre><code><code>openclaw pairing approve telegram YOUR_CODE</code></code></pre><p>Send another message to your bot. It should respond. That&#8217;s the connection live.</p><div><hr></div><h2><strong>Step 3: Build your first proactive agent</strong></h2><p>The basic setup is ready. Now for the good part.</p><p>A cron job in OpenClaw is just a scheduled task with a natural language prompt. You describe what you want the agent to do, when to do it, and where to send the result. No code.</p><p>Here&#8217;s the exact setup for a monitoring agent, the same pattern I use for a dozen different tasks:</p><pre><code><code>openclaw cron add \
  --name "My First Monitor" \
  --cron "0 */4 * * *" \
  --tz "America/Chicago" \
  --session isolated \
  --message "Check [URL or data source] for [specific condition]. If you find [thing worth reporting], send me a Telegram message at [your Telegram ID] with the details. If nothing has changed or nothing meets the condition, output QUIET and stop. Do not send any message."</code></code></pre><p>The <code>--cron "0 */4 * * *"</code> runs it every 4 hours. <a href="https://crontab.guru/">Crontab.guru</a> is a handy tool for building your own expressions. Some common ones:</p><ul><li><p>Every morning at 7am: <code>"0 7 * * *"</code></p></li><li><p>Every Monday at 8am: <code>"0 8 * * 1"</code></p></li><li><p>Every hour: <code>"0 * * * *"</code></p></li></ul><p><strong>The most important part of the prompt:</strong> tell the agent to be quiet when nothing is worth reporting. <code>output QUIET and stop</code> prevents notification fatigue. You want to hear from it when something matters, not on a schedule.</p><p><strong>For our reader&#8217;s solar use case</strong>, this might look like:</p><pre><code><code>Check the ComEd hosting capacity map at [URL] for any new high-capacity zones 
that weren't in my last report. Cross-reference with parcel data for any lots 
over [n] acres within [m] miles of those zones that are listed for sale. 
If you find new matches, email me with a summary. If nothing new, output QUIET.</code></code></pre><p><strong>But you don&#8217;t need to do any of that by hand!</strong> You can literally open Telegram and send your bot a message with the prompt above, along with a description of when and how often you want it run. And your brand new agent will do it for you. This is the most important lesson here. Your agent knows how to set itself up and can do just about anything you want it to do from now on. This is why we&#8217;re careful about the machine we give it access to.</p><div><hr></div><h2><strong>Step 4: The AgentMail upgrade</strong></h2><p>Telegram works great for personal alerts. But if you want your agent to send proper emails, or receive emails and act on them, you&#8217;re going to want to add <a href="https://agentmail.to/">AgentMail</a>. I like this service because it&#8217;s built specifically for agents and for using them securely. You could give your agent access to your personal email, but before you do that I recommend you <em>really</em> understand what you&#8217;re doing so you don&#8217;t bork your personal inbox <a href="https://techcrunch.com/2026/02/23/a-meta-ai-security-researcher-said-an-openclaw-agent-ran-amok-on-her-inbox/">like some people did</a>.</p><p>Once you have it set up, AgentMail gives your agent its own inbox. Next, set up your system so your agent knows about its new email account. You can install the skill by hand with this command:</p><pre><code><code>clawdhub install agentmail
That&#8217;s a different problem. But I&#8217;m hopeful I&#8217;ll have another chance.</p><p>What would you have your agent do for you?</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/p/make-your-ai-agent-work-while-you?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Build Rad Shit! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/p/make-your-ai-agent-work-while-you?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trond.ai/p/make-your-ai-agent-work-while-you?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div><hr></div><p><em>Builder&#8217;s note: The OpenClaw docs are at <a href="https://docs.openclaw.ai/">docs.openclaw.ai</a> and the community is active on <a href="https://discord.com/invite/clawd">Discord</a> if you get stuck. If you end up building something like solar scout, I want to hear about it.</em></p><div><hr></div><p><em>Trond Wuellner builds things at Google Labs and writes about what he learns. If this was useful, forward it to someone who&#8217;d appreciate it.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trond.ai/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[How to Build Your First Rad Thing]]></title><description><![CDATA[A brief guide on how to get started building your first thing using AI.]]></description><link>https://trond.ai/p/how-to-build-your-first-rad-thing</link><guid isPermaLink="false">https://trond.ai/p/how-to-build-your-first-rad-thing</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 02 Mar 2026 16:18:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6XKh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I spent last weekend in Whistler with a group of friends from business school. Solar entrepreneurs, VC partners, leaders at Apple and Google. 
An entrepreneur who sold his last company and now runs a business distributing wild game in Sweden and Europe.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6XKh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6XKh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6XKh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6XKh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6XKh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6XKh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1245848,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/189619933?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6XKh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6XKh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6XKh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6XKh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdad70e0e-82e7-4eb4-8ed8-6afe9a6e0ad3_3456x1944.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div 
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Smart people. All of them.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Every single one has played with ChatGPT. Most use it regularly. And when I described what I&#8217;ve been building: AI assistants that manage my family&#8217;s calendar, generate math worksheets for my kids, call restaurants on my behalf, monitor my house. They all had the same look. Not skepticism. More of an eagerness to dive in to figure out where these tools might help them at home and at work.</p><blockquote><p>&#8220;Okay but how do you actually start?&#8221;</p></blockquote><p>That&#8217;s what this is. The answer to that question. Not theory or hype. A step-by-step guide for someone who understands their business, isn&#8217;t afraid of technology, and is ready to go from curious to building.</p><p>This is my favorite part of this moment we&#8217;re living in. You don&#8217;t need to be a developer. You need to be willing to try and eager to learn. This guide is for you.</p><div><hr></div><h2><strong>The Decision</strong></h2><p>Before you install anything, understand what you&#8217;re actually signing up for.</p><p>Building with AI coding tools is not like using ChatGPT. You&#8217;re not typing questions and copying answers. 
You&#8217;re setting up an ongoing collaboration with a system that writes code, runs commands on your computer, reads your files, searches the web, and iterates based on your feedback.</p><p>And the output isn&#8217;t always software in the traditional sense. It might be an automated workflow, a document processor, a report generator, or a tool that monitors your inbox and surfaces what matters. If it involves information and repetition, these tools can help.</p><p>It&#8217;s closer to hiring a junior developer than using a search engine. A very fast, very capable junior developer who has read most of the internet but has no context about your specific situation.</p><p>Your job is to provide that context. To ask good questions. To review what it builds and tell it what to change. You don&#8217;t need to understand every line of code it writes. You need to understand what you want, and be able to recognize when you&#8217;re getting it.</p><p>That&#8217;s it. That&#8217;s the skill.</p><p>The tools do the rest.</p><div><hr></div><h2><strong>A Word on Cost</strong></h2><p>Before you install anything: check what you already have.</p><p>A lot of people are already paying for access to these tools without realizing it.</p><ul><li><p><strong>Google Workspace</strong> Business Standard and above includes Gemini built in: summarization in Gmail, Docs, Sheets, and more. Check with your IT admin or look in your Google account under the Gemini icon.</p></li><li><p><strong>ChatGPT Plus</strong> ($20/month) includes Codex. If you&#8217;re already subscribed, you&#8217;re in.</p></li><li><p><strong>Microsoft 365</strong> offers Copilot as a paid add-on, but if your company has rolled it out, you may already have access to some of these capabilities.</p></li></ul><p>If you have any of these, start with what you have before buying anything new.</p><p><strong>If you need to pay, subscriptions are almost always the better choice.</strong> The $20/month plans (<a href="https://claude.com/pricing">Claude Pro</a>, <a href="https://openai.com/chatgpt/pricing">ChatGPT Plus</a>, <a href="https://one.google.com/about/plans">Google AI Pro</a>) give you predictable costs, generous limits, and the most capable models. For most people, a subscription goes further than pay-per-use at the same price.</p><p>The higher tiers, Claude Max ($100-200/month) and ChatGPT Pro ($200/month), are worth it once you&#8217;re using these tools daily and want priority access, maximum limits, and the fastest responses. Think of it like a SaaS tool: pay for the plan that matches your usage.</p><p>The pay-per-use API is powerful but unpredictable. Claude Code can run $20-50 in a single day on a large project without you noticing. It makes sense eventually, especially for automated workflows that run without you. It&#8217;s not where you want to start.</p><p><strong>The progression that makes sense:</strong></p><ol><li><p>Check if you already have access (Workspace, ChatGPT Plus, Microsoft 365)</p></li><li><p>Start with a free tier (Gemini CLI, Antigravity) to get comfortable</p></li><li><p>Upgrade to a $20/month subscription when you&#8217;re ready to go deeper</p></li><li><p>Move to higher tiers ($100-200/month) if you&#8217;re using it every day</p></li><li><p>Add pay-per-use API access only when you&#8217;re building automated workflows that run on their own</p></li></ol><div><hr></div><h2><strong>Pick Your Tool</strong></h2><p>There are essentially three tools worth using right now. 
They all do roughly the same thing: AI-assisted coding and automation. Different providers, each with different strengths.</p><p>All have desktop apps to get you building without touching a terminal. Claude&#8217;s desktop app includes a Code tab (Claude Code with a visual interface) and Cowork, an autonomous mode where Claude acts on files in a folder you authorize. Codex has a standalone app. Google&#8217;s Antigravity IDE is free and helpful for getting started quickly.</p><p>Pick the tool that matches what you already have access to. You can always change later. We&#8217;re just finding the right path to get you started today.</p><p><strong>My simple point of view:</strong></p><p>Start with what you already pay for. ChatGPT Plus gives you Codex. Google Workspace gives many people Gemini Pro. Check what you already have before you buy anything.</p><p>If you&#8217;re starting from zero: Antigravity (Google&#8217;s IDE) is free, polished, and requires no terminal. Gemini is especially good if your project involves reading lots of documents, emails, or spreadsheets; which is a bonus.</p><p>Codex is the natural choice if you already use ChatGPT Plus. You&#8217;re already paying for it. Just start there.</p><p>I generally start most projects with Antigravity, but I also love what Claude can do even though it requires a subscription. Their desktop app is quite impressive. The Code tab handles most coding tasks beautifully, and Cowork (included from Pro and above) takes it to another level. It reasons more carefully than the others, handles nuance well, and is often the most honest about what it doesn&#8217;t know. Worth it once you&#8217;re willing to add another subscription to your monthly nut.</p><p>You don&#8217;t have to pick just one. You&#8217;ll learn what you prefer as you start building and can easily change later.</p><div><hr></div><h2><strong>Install It: Step by Step on Mac</strong></h2><h3><strong>Option A: Gemini (Google)</strong></h3><p><strong>The easy way: Antigravity</strong></p><p>Google released an agent-first IDE called <a href="https://antigravity.google/download">Antigravity</a> in late 2025. It&#8217;s free and runs Gemini models under the hood. The standout feature: a &#8220;Manager View&#8221; that orchestrates multiple AI agents in parallel across your editor, terminal, and browser. Think Mission Control for coding tasks. Download it like any Mac app, sign in with your Google account, and you&#8217;re building in minutes.</p><p><strong>The power-user way: CLI</strong></p><p>The Gemini CLI unlocks its biggest advantage: the ability to read enormous amounts of text at once. Useful for projects involving lots of documents, emails, or large codebases.</p><pre><code><code>brew install node
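# Homebrew installs Node.js, which provides the npm command used on the next line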
npm install -g @google/gemini-cli
gemini</code></code></pre><p>On first run it opens a browser window for Google sign-in. That&#8217;s the whole setup. For the most powerful model:</p><pre><code><code>gemini -m gemini-pro-latest</code></code></pre><div><hr></div><h3><strong>Option B: Claude (Anthropic)</strong></h3><p><strong>The easy way: desktop app</strong></p><p>Download the Claude desktop app at <a href="https://claude.com/download">claude.com/download</a>. Free to download, works on Mac and Windows. Sign in with your Anthropic account and you get two modes worth knowing about:</p><ul><li><p><strong>Code tab:</strong> a graphical interface for Claude Code. You get visual diff review, live previews of what you&#8217;re building, and the ability to run multiple tasks in parallel, all without touching the terminal. This is Claude Code with a UI on top.</p></li><li><p><strong>Cowork:</strong> a research preview (Pro and above) where Claude autonomously acts on files in a folder you authorize. Think of it as giving Claude a desk of its own: it can organize documents, build spreadsheets from receipts, draft reports. You review and confirm significant changes.</p></li></ul><p>For most people starting out, the Code tab is the right place to begin.</p><p><strong>The power-user way: CLI</strong></p><p>The terminal version is one line to install:</p><pre><code><code>curl -fsSL https://claude.ai/install.sh | bash</code></code></pre><p>After it finishes, just run:</p><pre><code><code>claude</code></code></pre><p>Sign in with your Anthropic account. You&#8217;ll need a <a href="https://claude.com/pricing">Pro or Max subscription</a> (starting at $20/month), or an API key from <a href="https://console.anthropic.com/">console.anthropic.com</a> to pay by usage instead. The CLI and the desktop app share the same config, including your <code>CLAUDE.md</code> files, so you can move between them freely.</p><div><hr></div><h3><strong>Option C: Codex (OpenAI)</strong></h3><p><strong>The easy way: Codex app</strong></p><p>OpenAI has a standalone Codex app at <a href="https://openai.com/codex">openai.com/codex</a>. Purpose-built for coding tasks. If you already pay for <a href="https://openai.com/chatgpt/pricing">ChatGPT Plus</a> ($20/month), you have access.</p><p><strong>The power-user way: CLI</strong></p><pre><code><code>brew install node
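# if you already installed Node for the Gemini CLI above, you can skip this line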
npm install -g @openai/codex
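# create a key at platform.openai.com (API Keys -> Create Key) and paste it below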
export OPENAI_API_KEY="your-key-here"
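# codex picks up the key from the OPENAI_API_KEY environment variable set above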
codex</code></code></pre><p>Get your API key at <a href="https://platform.openai.com/">platform.openai.com</a> &#8594; API Keys &#8594; Create Key.</p><div><hr></div><h2><strong>Your First Session: What You&#8217;ll Actually See</strong></h2><p>Here&#8217;s what you&#8217;ll actually see when you open each tool for the first time.</p><h3><strong>If you&#8217;re using Antigravity (Google)</strong></h3><p>When Antigravity opens you&#8217;ll see two views in the sidebar: <strong>Editor</strong> and <strong>Manager</strong>.</p><p>Start in <strong>Editor</strong>. It looks like a code editor: a big panel on the left for files, a writing area in the center, and a chat panel on the right. That chat panel is where you talk to the AI. You don&#8217;t need to touch the file panel or the code. Just focus on the chat.</p><p>To open your project: File &#8594; Open Folder &#8594; navigate to your project folder (Desktop or Documents is fine) and select it. Your files appear in the left panel. Now when you talk to the AI, it can see and edit those files directly.</p><p>Type your first message in the chat panel on the right. The AI responds, makes changes to your files, and shows you what it did. In a moment, we&#8217;ll show you what to type in that chat box!</p><h3><strong>If you&#8217;re using the Claude desktop app</strong></h3><p>When it opens you&#8217;ll see a clean interface with a chat window in the center. At the top you&#8217;ll see tabs: <strong>Chat</strong>, <strong>Code</strong>, and (on Pro and above) <strong>Cowork</strong>.</p><p>Click <strong>Code</strong>. You&#8217;ll see a chat interface similar to the regular Claude chat, but with file access. Click the folder icon or go to File &#8594; Open Project to point it at your project folder. The AI can now read and edit files in that folder.</p><p>Type your message in the chat box at the bottom. The AI responds in the main window, shows you code changes with visual diffs (highlighted additions and deletions), and asks for confirmation before editing files.</p><h3><strong>If you&#8217;re using Claude Code in the terminal</strong></h3><p>After running <code>claude</code>, you&#8217;ll see a prompt that looks like this:</p><pre><code><code>Claude Code (claude-opus-4-5)
Type a message or /help for commands
&gt;</code></code></pre><p>Navigate to your project folder first:</p><pre><code><code>cd ~/Desktop/my-project
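# start Claude Code from inside the project folder so it can see these files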
claude</code></code></pre><p>Full Claude Code docs at <a href="https://code.claude.com/docs/en/overview">docs.anthropic.com/claude-code</a>.</p><p>Now type your message at the <code>&gt;</code> prompt. The AI responds inline, shows you the files it&#8217;s changing, and asks your permission before modifying anything.</p><div><hr></div><h3><strong>If you&#8217;re using the Codex desktop app</strong></h3><p>When Codex opens you&#8217;ll see a <strong>Projects panel</strong> on the left and a main area on the right. Projects are how Codex organizes your work: each project points to a folder on your computer (or a GitHub repo, if you have one).</p><p>To get started: click <strong>New Project</strong>, give it a name, and point it at your project folder. Codex loads the files and you&#8217;re ready.</p><p>The main area shows <strong>Threads</strong>: each thread is a task you&#8217;ve given to an AI agent. Think of threads like chat conversations, except instead of just answering, the AI is actively editing files and running code in the background. You can have multiple threads running at once, each working on a different thing.</p><p>Type your first message in the input box at the bottom of the main panel. The AI picks it up as a new thread, starts working, and shows you what it&#8217;s doing in real time: which files it&#8217;s reading, what changes it&#8217;s making, whether the code ran successfully. When it&#8217;s done, you&#8217;ll see a diff &#8212; a summary of what changed &#8212; and you can approve it, ask for changes, or open the result in your own editor.</p><p>One thing worth knowing: Codex is built for <strong>parallel work</strong>. You can kick off one task, then start another while the first is still running. Each stays in its own thread. Genuinely useful once you have more than one thing to build at once. For your first session, just start one thread and get a feel for the loop before you try running multiple.</p><div><hr></div><h3><strong>The big thing most people don&#8217;t realize</strong></h3><p>You don&#8217;t need to know how to run code. In all three tools, the AI runs the code itself. You ask it to build something, it builds it, runs it to check it works, shows you the result, and tells you if anything went wrong. Your job is to tell it what you want and give feedback on what you see.</p><p>If the AI produces a script you need to run yourself, it will tell you exactly what to type. Copy it. Paste it. Run it. That&#8217;s the full extent of what you need to know.</p><h3><strong>On permissions and trust</strong></h3><p>One thing that catches people off guard: these tools ask for your approval constantly. Before editing a file, before running a command, before accessing the internet: you&#8217;ll see a prompt asking if it&#8217;s okay to proceed.</p><p>Often you won&#8217;t know exactly what it&#8217;s asking. The command it wants to run might look like gibberish. 
You&#8217;re faced with a choice: trust the system and say yes, or stop and ask what it means before proceeding.</p><p>Here&#8217;s a rough guide to what&#8217;s routine and what deserves a pause:</p><p><strong>Generally fine to approve:</strong></p><ul><li><p>Reading or editing files inside your project folder</p></li><li><p>Running the code it just wrote</p></li><li><p>Installing packages or dependencies (it needs these to build things)</p></li><li><p>Creating new files in your project</p></li></ul><p><strong>Worth a moment&#8217;s thought:</strong></p><ul><li><p>Making network requests or calling external APIs: ask what it&#8217;s connecting to and why</p></li><li><p>Deleting files: make sure you know what&#8217;s being removed</p></li><li><p>Commands that reference folders outside your project: why does it need to go there?</p></li><li><p>Anything involving credentials, API keys, or passwords: confirm it&#8217;s not storing these somewhere unexpected</p></li></ul><p><strong>Decline and ask questions:</strong></p><ul><li><p>Anything with <code>sudo</code> (administrator access) that the task clearly doesn&#8217;t require</p></li><li><p>Commands touching sensitive system locations you didn&#8217;t direct it toward</p></li><li><p>Sending data to a service you didn&#8217;t set up or recognize</p></li></ul><p>When in doubt: if a command looks completely alien, type &#8220;explain exactly what this does in plain English before running it.&#8221; The AI will walk you through it. You&#8217;ll either feel confident to proceed, or you&#8217;ll catch something that shouldn&#8217;t happen.</p><p>Once you trust the project and want to stop approving every small action, Claude Code has an answer: <strong>yolo mode</strong>. Type <code>/yolo</code> in the chat. It skips permission prompts for the session. Your files are still there, changes are still visible, and you can undo. It just means you&#8217;ve decided to let the AI work without interruption. Most people switch to it once a project is underway and they understand what the AI is doing.</p><div><hr></div><h2><strong>Setting Up Your First Project</strong></h2><p>Every project starts the same way: a folder and a briefing file.</p><p><strong>If you&#8217;re using a desktop app (Antigravity, Codex or Claude):</strong> Create a folder anywhere. Desktop or Documents is fine. Name it something descriptive. Then open it in the app. That&#8217;s your project.</p><p><strong>If you&#8217;re using the terminal:</strong> Create the folder and navigate into it:</p><pre><code><code>mkdir ~/Desktop/my-project
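# create the project folder, then move into it before starting your tool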
cd ~/Desktop/my-project</code></code></pre><p>The briefing file is called <code>AGENTS.md</code>. Think of it as permanent memory for your project: context the AI reads at the start of every session so you never have to re-explain from scratch. Claude Code reads <code>CLAUDE.md</code> natively, and most other AI coding tools are converging on <code>AGENTS.md</code> as the shared standard. Using <code>AGENTS.md</code> means every tool picks it up automatically.</p><p><strong>A quick note on the .md file format.</strong> These files are written in a format called Markdown: a simple way of formatting plain text using symbols like <code>#</code> for headings and <code>-</code> for bullet points. You&#8217;ve probably seen it without knowing it. The AI reads it perfectly, and it&#8217;s easy to write once you&#8217;ve seen an example.</p><p>Macs don&#8217;t come with a great Markdown editor by default. A few good options:</p><ul><li><p><strong><a href="https://bear.app/">Bear</a>:</strong> Mac-native, beautiful, free to start. The one I&#8217;d start with.</p></li><li><p><strong><a href="https://obsidian.md/">Obsidian</a>:</strong> free, powerful, worth it if you want your notes and projects in one place.</p></li><li><p><strong><a href="https://code.visualstudio.com/">VS Code</a>:</strong> free, already installed if you&#8217;re using Antigravity (it&#8217;s built on the same base), and has Markdown preview built in.</p></li></ul><p>Again, remember that you can always <strong>just ask the AI to create the file for you.</strong> After the interview, say &#8220;Based on what we just discussed, create an AGENTS.md file for this project.&#8221; It will write the whole thing correctly. If you want, you can open it in Bear or Obsidian to review and tweak.</p><p><strong>Seriously, don&#8217;t start from scratch.</strong> The community has built templates you can copy. Just use those as a starting point, they&#8217;re going to be better than whatever you&#8217;ll do at first:</p><ul><li><p><strong><a href="https://github.com/agentsmd/agents.md">agentsmd/agents.md</a>:</strong> The simplest, most widely-adopted template.</p></li><li><p><strong><a href="https://github.com/davila7/claude-code-templates">davila7/claude-code-templates</a>:</strong> Ready-to-use configurations with working examples.</p></li></ul><p>Here&#8217;s a starting point if you absolutely want to roll your own:</p><pre><code><code># AGENTS.md

## What This Project Is
[One paragraph: what it does and why it exists]
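[Example: "A tool that reads supplier emails and flags price changes above 5% for a small food distribution business"]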

## Who Uses It
[Describe the person this is built for]

## Current Status
[What's done, what's in progress, what's next]

## Important Constraints
[Things the AI must never do, hard requirements, known limitations]

## Style and Standards
[How you want things done: tone, format, patterns to follow consistently]</code></code></pre><p>You don&#8217;t need to fill in every section before you start. The first two are enough to begin. The AI will help you figure out the rest as you go.</p><p>The Style and Standards section is the most underrated part. Tell the AI once how you want things done, and it applies that consistently every session. You stop correcting the same things over and over.</p><p><em>One note on naming</em>: <code>AGENTS.md</code> works across most tools. If you&#8217;re using Claude, it reads a file called <code>CLAUDE.md</code> instead. That file has the same purpose and works the same way, but Claude doesn&#8217;t use the same naming scheme as everyone else. Sigh.</p><p><strong>Starting every session, your opening message:</strong></p><pre><code><code>Please read AGENTS.md to get context on this project, then tell me what you understand and what questions you have before we continue.</code></code></pre><p>It&#8217;s a good idea to say this at the start of every new chat. It takes five seconds and prevents you from re-explaining your project from scratch. If you&#8217;re in the terminal, navigate to your project folder first (<code>cd ~/Desktop/my-project</code>), then start the tool.</p><div><hr></div><h2><strong>Your First Project: The Interview Method</strong></h2><p>Here&#8217;s one mistake people make when they get started: they open the tool and immediately say &#8220;build me X.&#8221;</p><p>The model has no idea who you are, what your business does, what constraints you&#8217;re working under, or what &#8220;good&#8221; looks like for you. So it builds something generic that doesn&#8217;t fit, you get frustrated, and you assume AI coding tools don&#8217;t work.</p><p>They do work, but you need to teach them what you want first.</p><p>I use what I call interview mode. Open your tool of choice, make sure it&#8217;s pointing at your project folder, and type this into the chat window:</p><pre><code><code>Before we start building anything, I want to make sure you deeply understand what I need. Please ask me up to 10 clarifying questions about this project &#8212; one at a time, waiting for my answer before asking the next. Focus on: who the users are, what problem we're solving, what success looks like, what constraints I'm working under (time, budget, technical), and what I definitely don't want. When you feel you have enough, summarize what you've heard and ask if it's right before we proceed.</code></code></pre><p>The model will ask you things you hadn&#8217;t thought to specify. Answer honestly in the chat, one question at a time. When it summarizes back to you, correct anything that&#8217;s off. This usually takes 10-15 minutes and saves hours of misdirected work.</p><p>After the interview, you&#8217;re going to want to capture the key insights in a structured way using two docs. The AI will write them and save them directly to your project folder:</p><p><strong>The requirements prompt:</strong></p><pre><code><code>Based on our conversation, write a short Product Requirements Document and save it as REQUIREMENTS.md in this project folder. Include: the problem we're solving, who it's for, the core features in priority order, what we're explicitly not building, and what "done" looks like for the first version.</code></code></pre><p><strong>The plan prompt:</strong></p><pre><code><code>Now write a step-by-step build plan and save it as PLAN.md. What are the major pieces? What order would you tackle them in? 
Flag anything where you see a real tradeoff I should weigh in on.</code></code></pre><p>Once the AI saves these files, they&#8217;ll appear in your project folder. Open them in Bear or VS Code to read through them. Correct anything that&#8217;s off. These become the ground truth that keeps you and the AI aligned throughout the build, and the foundation of your <code>AGENTS.md</code>.</p><div><hr></div><h2><strong>The Build Loop</strong></h2><p>You&#8217;ve briefed the AI, you have a plan, you&#8217;ve started a session. Now what?</p><p>Building with AI tools is a loop:</p><ol><li><p><strong>Ask for something specific.</strong> Not &#8220;build my app&#8221; but &#8220;build the function that reads incoming emails and pulls out the dates.&#8221;</p></li><li><p><strong>Review what it builds</strong> (you don&#8217;t need to understand every line, but read the explanation it gives you)</p></li><li><p><strong>Test it.</strong> Say &#8220;run this and show me the output.&#8221; The AI executes the code itself and reports back what happened. You don&#8217;t need to know how to run code.</p></li><li><p><strong>Tell it what&#8217;s wrong.</strong> Be specific: &#8220;this crashes when the email has no subject line&#8221; or &#8220;the output is missing the date field.&#8221;</p></li><li><p><strong>Repeat</strong></p></li></ol><p><strong>Reading responses:</strong> The AI will often explain what it did and why. Read this. It&#8217;s showing its reasoning. If the reasoning sounds off, say so before it goes further. Correcting direction early is much easier than unwinding bad work later.</p><p><strong>When it gets stuck:</strong> All three tools will sometimes hit a wall and go in circles. Signs: it rewrites the same thing repeatedly, asks you the same question again, or apologizes for the confusion more than twice. When this happens, stop. Start a fresh conversation. In the app, click &#8220;New Chat&#8221; or &#8220;New Session&#8221;. In the terminal, type <code>exit</code> and run <code>claude</code> again. Then paste the relevant context: &#8220;Read AGENTS.md and DECISIONS.md, then let&#8217;s approach [the specific problem] differently.&#8221; Fresh context, fresh angle.</p><p><strong>When to push vs ask:</strong> If you don&#8217;t understand a choice the AI made, ask. &#8220;Why did you use X here instead of Y?&#8221; It will explain. Either there&#8217;s a good reason you hadn&#8217;t considered, or it made a mistake. Either way, you learn something. Never accept work you don&#8217;t understand at all. You&#8217;ll have to deal with it later.</p><div><hr></div><h2><strong>The Learning Loop: How the System Gets Smarter Over Time</strong></h2><p>Here&#8217;s something most people miss. It&#8217;s probably the biggest multiplier once you get past your first project.</p><p>Every AI coding tool starts with zero knowledge about you. Your preferences. Your standards. The decisions you&#8217;ve already made. What you tried last week and why it didn&#8217;t work. Each new session, you&#8217;re starting from scratch unless you build a system to prevent that.</p><p>The fix is simple: document decisions as you go, in a place the AI can read them.</p><p>Create a file called <code>DECISIONS.md</code> in your project folder. After each meaningful session, end with this prompt:</p><pre><code><code>Summarize what we built today, what decisions we made, and why. Format it as a short list I can append to DECISIONS.md.</code></code></pre><p>Paste the output into the file. 
Next session, start with:</p><pre><code><code>Before we continue, read AGENTS.md and DECISIONS.md to get full context on this project and the decisions we've already made.</code></code></pre><p>Over time, this file becomes institutional memory. The AI stops suggesting things that contradict choices you&#8217;ve already made. It stops proposing approaches you already tried and abandoned. It builds on what exists.</p><p>What to capture:</p><ul><li><p>Choices made and why (&#8220;we&#8217;re using X instead of Y because...&#8221;)</p></li><li><p>Things that didn&#8217;t work (&#8220;tried Z, caused problems because...&#8221;)</p></li><li><p>Standards you want applied consistently (&#8220;always do it this way&#8221;)</p></li><li><p>Open questions you haven&#8217;t resolved yet</p></li></ul><p>You don&#8217;t need to be exhaustive. Three or four bullet points per session compound fast over weeks.</p><p>The projects where this discipline holds get easier over time. The ones where it breaks down get harder. The AI gets genuinely more useful the more context it has. <code>DECISIONS.md</code> is the simplest way to build that context.</p><div><hr></div><h2><strong>API Keys Done Right</strong></h2><p><em>Most people don&#8217;t need this section yet.</em> If you&#8217;re using a subscription plan (Claude Pro, ChatGPT Plus, Google One), you authenticate with your account and you&#8217;re done. API keys are for automated workflows that run without you: scripts, scheduled jobs, integrations that fire in the background. API keys are also necessary if you want to connect to a service programmatically.</p><p>For example, I built a system that used a USDA dataset to calculate the nutritional value of various recipes. To set that up, I needed to create an API key with the service and save it securely. Come back here when you&#8217;re building something like that.</p><p>In short, API keys are like passwords. If someone gets yours, they can use your account and you pay the bill. Here&#8217;s how to handle them safely.</p><p><strong>Create a secrets folder that won&#8217;t accidentally get shared:</strong></p><pre><code><code>mkdir -p ~/.secrets
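# 700 = only your user account can open, read, or change this folder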
chmod 700 ~/.secrets</code></code></pre><p><strong>Store each key in its own file:</strong></p><pre><code><code>echo 'ANTHROPIC_API_KEY=your-key-here' &gt; ~/.secrets/anthropic.env
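# keys come from each provider's dashboard (console.anthropic.com, platform.openai.com)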
echo 'OPENAI_API_KEY=your-key-here' &gt; ~/.secrets/openai.env
echo 'GOOGLE_API_KEY=your-key-here' &gt; ~/.secrets/google.env  # Get key at aistudio.google.com</code></code></pre><p><strong>Load them automatically when you open Terminal:</strong></p><pre><code><code>echo 'source ~/.secrets/anthropic.env' &gt;&gt; ~/.zshrc
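# add a matching line for each key file (openai.env, google.env), then reload your shell: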
source ~/.zshrc</code></code></pre><p>Do this for each key. Now, your keys load automatically every time you open Terminal and they&#8217;re in a folder with restricted permissions where they can&#8217;t accidentally get shared.</p><p>Three rules:</p><ol><li><p>Never paste your API key directly into code</p></li><li><p>Never share a file that contains your API key</p></li><li><p>Set spending limits in each provider&#8217;s dashboard before heavy use (<a href="https://console.anthropic.com/settings/limits">Anthropic limits</a>, <a href="https://platform.openai.com/account/limits">OpenAI limits</a>)</p></li></ol><div><hr></div><h2><strong>Skills: Going Further</strong></h2><p>Once you&#8217;ve shipped something, you&#8217;ll want the AI to do more than write code.</p><p>Skills are reusable packages of instructions that teach your AI how to handle a specific recurring task. A skill for reading and summarizing emails. A skill for generating weekly reports in a consistent format. A skill for reviewing what you built and flagging problems. You install it once, and the AI knows how to do that thing every session.</p><p>Where to find them:</p><ul><li><p><strong><a href="https://skills.sh/">skills.sh</a>:</strong> The largest open directory, with contributions from Anthropic, Google, Microsoft, and the community. Browse by category.</p></li><li><p><strong><a href="https://playbooks.com/">playbooks.com</a>:</strong> Curated skills and bundles. Good for finding a set that works together.</p></li></ul><div><hr></div><h2><strong>Prompt Resources Worth Bookmarking</strong></h2><ul><li><p><strong><a href="https://ai.google.dev/gemini-api/cookbook">Gemini API Cookbook</a>: </strong>Google&#8217;s official Gemini prompt collection.</p></li><li><p><strong><a href="https://docs.anthropic.com/en/prompt-library">Anthropic Prompt Library</a>:</strong> Claude&#8217;s official collection of prompts for common tasks.</p></li><li><p><strong><a href="https://cookbook.openai.com/">OpenAI Cookbook</a>:</strong> Practical examples for Codex-based workflows.</p></li><li><p><strong><a href="https://www.promptingguide.ai/">PromptingGuide.ai</a>:</strong> Deep reference with research-backed techniques.</p></li></ul><p>Three techniques worth learning first:</p><p><strong>Think step by step.</strong> Add &#8220;Think through this step by step before answering&#8221; to any complex request. The AI slows down, shows its reasoning, and catches mistakes it would otherwise skip. It sounds almost too simple. It works.</p><p><strong>Label the parts of your prompt.</strong> When your request has multiple parts, label each one clearly. Like this:</p><pre><code><code>Context: I'm building a tool that summarizes supplier emails for a food distribution business.

Task: Write a function that reads an email and pulls out any price changes mentioned.

Constraints: Only flag changes above 5%. Ignore shipping cost changes.

Format: Return a simple list with supplier name, product, and new price.</code></code></pre><p>Claude in particular responds much better to a structured prompt than a wall of text. It knows exactly what&#8217;s background and what&#8217;s the actual ask.</p><p><strong>Plan before you build.</strong> Before asking the AI to do anything, ask it to plan first. &#8220;Outline the steps you&#8217;d take to do X, without doing anything yet.&#8221; Review the plan. Fix anything that looks wrong. Then say go. You catch bad assumptions before they turn into bad work.</p><div><hr></div><h2><strong>The Real Bar</strong></h2><p>Here&#8217;s the thing I kept saying in Whistler that I want to say again here.</p><blockquote><p>The bar for &#8220;building something&#8221; is not as high as you think.</p></blockquote><p>A solar entrepreneur who automates the extraction of permit requirements from municipal PDFs into a structured spreadsheet has built something. A wild game distributor who sets up a system to summarize supplier emails and flag price changes has built something. A VC partner who creates a tool that generates first-pass memos from founder decks has built something.</p><p>None of those require a developer. They require someone who understands the problem, can explain it clearly, and is willing to iterate.</p><p>The AI does the technical part. You do the part that matters: knowing what&#8217;s worth building.</p><p>Start with the smallest possible version of the thing you actually need. Get it working. Then make it better. Repeat until you&#8217;ve built something rad.</p><p>That&#8217;s it.</p><div><hr></div><p><em>If you build something using this guide, I want to hear about it. Reply to this email or find me on Twitter/X. I read everything.</em></p><p><em>And if you&#8217;re not sure what to build first, that&#8217;s okay too. Start the interview with: &#8220;I&#8217;m not sure what to build. Here&#8217;s my business and here&#8217;s where I spend the most time on things that feel repetitive.&#8221; Let the AI help you figure out what&#8217;s worth automating. That conversation alone is usually worth the setup.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[One wild trick to help your kids with math]]></title><description><![CDATA[I spent 8 years painstakingly hand crafting math problems for my kids. 
Then I built an AI that makes better worksheets, study guides and answer keys than I ever could.]]></description><link>https://trond.ai/p/i-spent-8-years-handwriting-math</link><guid isPermaLink="false">https://trond.ai/p/i-spent-8-years-handwriting-math</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 23 Feb 2026 16:32:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!06xk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!06xk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!06xk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!06xk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!06xk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!06xk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!06xk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1537569,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/188875433?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!06xk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!06xk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!06xk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!06xk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad13ba05-c22b-4b62-bc40-8bcd79ae905b_1344x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>For eight years, when my kids needed extra practice before a math test, I sat down and handwrote practice problems myself. Sounds wholesome. It was mostly tedious.</p><p>Something you might not know about handcrafting math problems: you have to solve them (in your head) as you write them. Because if you don&#8217;t, you end up with division that produces some wretched 11-decimal answer, or a quadratic that has no real solutions, and your kid is sitting there staring at it while you quietly panic and say &#8220;let me check that one.&#8221; Teachers really aren&#8217;t paid enough.</p><p>The process I used was: come up with a good problem, pre-solve it, write the answer somewhere, repeat. Then when they come back with their work, solve it again alongside them to help them check. For basic arithmetic, totally manageable. For my freshman in Algebra 2 or my seventh grader in Pre-Algebra? That 10-minute process quickly grew to 45-minutes or more. My daughter once solved problems faster than I could write them. That was a humbling evening.</p><p>I started trying to use AI for this maybe three years ago. The results were... fine. Inconsistent. The formatting was always a mess and I&#8217;d have to clean it up before printing. And the problems weren&#8217;t always right. Or leveled correctly. Or just plain repetitive.</p><p>Over the last year it got noticeably better. I started testing models by asking them to generate LaTeX directly. Real formatted worksheets became easily printable PDFs. Over time, its gotten more reliable. 
But I was still doing a lot of shepherding: prompt, review, fix the formatting, regenerate problems that didn&#8217;t make sense, spot-check the answers. Better, but still bad.</p><p>Something has shifted in the last few weeks. Gemini 3.1 Pro and Opus 4.6 are incredible for this use case. The results have gotten... good?</p><p>A while ago, I realized I could take a photo of my kid&#8217;s existing problems (problems their teacher assigned, or problems they&#8217;d already worked through) and say: make me more like these. And a thinking model would just run off and make good practice problems. Not just structurally similar, but calibrated to exactly the right difficulty level, because it had actual examples to learn from.</p><p>I think this might actually work now! Thank you thinking models!</p><div><hr></div>
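<p><em>If you want to try the same thing, here&#8217;s a rough sketch of the kind of prompt involved. Adapt it to your own kid&#8217;s class, and note it assumes your AI tool accepts photos:</em></p><pre><code><code>Here are photos of the math problems my kid worked through this week [attach the photos]. Generate 12 new practice problems at the same difficulty level, covering the same concepts. Output a complete LaTeX document I can compile into a printable PDF worksheet, with an answer key of fully worked solutions starting on a separate page. Double-check every answer before including it.</code></code></pre><p><em>If the first set comes back too easy or too hard, say so and ask for another pass.</em></p>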
      <p>
          <a href="https://trond.ai/p/i-spent-8-years-handwriting-math">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Agent Amnesia and Broken Data Models]]></title><description><![CDATA[Building to learn: a solution to the memory, collaboration and data model problems in OpenClaw]]></description><link>https://trond.ai/p/agent-amnesia-and-broken-data-models</link><guid isPermaLink="false">https://trond.ai/p/agent-amnesia-and-broken-data-models</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Tue, 17 Feb 2026 20:49:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RdYL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RdYL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RdYL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!RdYL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!RdYL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!RdYL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RdYL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1441504,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/188301087?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RdYL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!RdYL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!RdYL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!RdYL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53090a4a-562c-4b45-b51c-60b0eae2c4fb_1344x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If you&#8217;ve been following the <a href="http://trond.ai">trond.ai</a> Build to Learn series, you&#8217;ll have heard of Stella. Stella is an <a href="https://openclaw.ai/">OpenClaw</a> setup running on a Mac Mini in my office. She (<em>yes, we anthropomorphize her</em>) manages our family&#8217;s: calendar, smart home, kids&#8217; schedules, recipes, travel planning, reminders, car service, appliance maintenance, and so much more. She&#8217;s genuinely useful and has quickly become a trusted partner in our home.</p><p>But there are problems with Stella that need fundamental improvements in my OpenClaw setup to address. Let&#8217;s talk about the problems and the solution I&#8217;ve been working on building this week.</p><div><hr></div><h1><strong>Assistant Amnesia</strong></h1><p>Stella is frustratingly forgetful. I&#8217;ve had to remind her about projects we were working on together, and sometimes she&#8217;d lose the train of thought mid-conversation. Especially if it spanned a 30 minute boundary when her memory would get collected and updated. New session, clean slate. I&#8217;d re-explain context, she&#8217;d reload what we talked about, we&#8217;d rebuild momentum, and then the conversation would hit the context limit and compact. Details gone. Thread lost. Start over. GAH!</p><p>I thought AI hallucinations were annoying. 
This is 1000x worse!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VI9w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VI9w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 424w, https://substackcdn.com/image/fetch/$s_!VI9w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 848w, https://substackcdn.com/image/fetch/$s_!VI9w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 1272w, https://substackcdn.com/image/fetch/$s_!VI9w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VI9w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png" width="1270" height="1326" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1326,&quot;width&quot;:1270,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:810440,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/188301087?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VI9w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 424w, https://substackcdn.com/image/fetch/$s_!VI9w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 848w, https://substackcdn.com/image/fetch/$s_!VI9w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 1272w, https://substackcdn.com/image/fetch/$s_!VI9w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbac9e98c-9937-41f6-97b8-f06ffcbab70a_1270x1326.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There are countless people working on solutions to better agent memory across the OpenClaw community. The stock solution uses a MEMORY.md file that&#8217;s updated periodically based on conversational context within each heartbeat process. This context is passed into the agent&#8217;s memory each time a session is initiated, which is supposed to preserve details and progress in a seamless way. But in reality, the compaction process loses too much detail and biases too much on recency.</p><p>I poked around with a lot of the solutions people were building to try to address this problem. Some are very well designed, like <a href="https://supermemory.ai/">supermemory.ai</a>, featuring RAG systems with complex vector databases and advanced workflows to integrate into your agent. But they didn&#8217;t really solve my problem. I wanted something that kept data in a format I could easily read and modify and something that worked better with my collaboration model.</p><h2><strong>Collaboration Nightmare</strong></h2><p>The forgetting is one thing. But when you&#8217;re trying to collaborate across devices, sessions and people it all gets just so much worse. I use <a href="https://obsidian.md/">Obsidian</a> as a thinking and organization tool. Stella runs on a separate machine without access to the local vault. Those two worlds don&#8217;t talk to each other. If she learns something from a conversation, my notes don&#8217;t know about it. If I edit a note about a project, she doesn&#8217;t see the change until I paste it into a chat. Working together on things was like playing telephone with project context.</p><p>I see this as a fundamental flaw with chat-based AI: everything is assumed to be serial. One conversation, one thread, linear. That works fine for &#8220;what&#8217;s the weather.&#8221; It breaks immediately when you want to do real knowledge work, picking up a thread from a different device, having your AI and your notes reflect the same reality, working on something that evolves over days or weeks. Never mind projects that work across agents and people.</p><p>We kept bumping into the same wall: the AI lives in the chat. Everything important lives somewhere else. They never quite sync up. I needed a better approach. 
I really wanted something that would keep Stella&#8217;s memory system and my Obsidian vault in perfect sync.</p><h2><strong>Text Is a Terrible Data Model</strong></h2><p>We all know LLMs work best with text, and OpenClaw is no different. So much so that it reminded me of the good old days of computer science when our computers were basically command line interfaces passing text between processes using Unix primitives. Turns out Anthropic agrees &#8212; they&#8217;ve published research on <a href="https://www.anthropic.com/engineering/contextual-retrieval">Contextual Retrieval</a> showing this technique outperforms RAG and vector lookup for agent memory.</p><p>This means that when Stella needs to know about my mom, she can <code>cat</code> a markdown file and read it like a document. That&#8217;s how most AI memory systems work. Store everything as text, search through text, return text.</p><p>But my mom isn&#8217;t a document. She&#8217;s a collection of facts, some current, some outdated, some time-bound, with relationships to other facts, timestamps, and different levels of reliability. &#8220;She&#8217;s visiting in March&#8221; is a different kind of fact than &#8220;She was born in 1951.&#8221; One expires. The other doesn&#8217;t.</p><p>Flat text files don&#8217;t know any of this. They mix it all together. An AI reading a flat file has to parse, infer, and guess what&#8217;s still true. It&#8217;s actually pretty dumb.</p><p>This is a solved problem in software. In 2002, <a href="https://www.jsnover.com/">Jeffrey Snover</a> wrote the <a href="https://www.jsnover.com/Docs/MonadManifesto.pdf">Monad Manifesto</a> about this exact problem. He helped Microsoft design PowerShell in 2006 to implement a better approach. Unix pipes pass text between commands, so every tool has to parse the output of the previous one. PowerShell passes objects instead. Typed, structured, queryable. 
That&#8217;s because text is the right interface for humans but objects are a better interface for automation.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Almk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Almk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!Almk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!Almk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Almk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Almk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57742b66-8be7-4347-8663-45a418e62d38_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1228009,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/188301087?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Almk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!Almk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!Almk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Almk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57742b66-8be7-4347-8663-45a418e62d38_1344x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft 
pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>What if we applied this same approach to AI memory? Could we use the benefits of LLMs and the structure of data objects to build something even better?</p><div class="paywall-jump" data-component-name="PaywallToDOM"></div><div><hr></div><h2><strong>Building to Learn - AgentSync</strong></h2><p>This week, I set out to learn about these three problems:</p><ol><li><p>Agentic Amnesia</p></li><li><p>Borked Collaboration</p></li><li><p>Broken Object Models</p></li></ol><p>Which is why I (with a lot of help from Stella) built <a href="https://github.com/stellawuellner/agentsync">AgentSync</a>. AgentSync is fundamentally a knowledge graph system built on a foundation of typed objects that are automatically collected, scoped, updated and shared in real-time across my devices. Facts aren&#8217;t stored as prose. They&#8217;re stored as typed objects: what the fact is, when it was learned, where it came from, whether it&#8217;s still active, when it expires. And everything is sync&#8217;d in realtime via human readable and modifiable markdown.</p><p>When Stella needs to know details about my mom, she runs:</p><pre><code><code>agentsync query people --name guri</code></code></pre><p>She gets back structured data. Not a text file to interpret. A typed object to act on. Invalid states are impossible because validation happens at the write boundary. A fact that doesn&#8217;t match the schema can&#8217;t reach the knowledge graph.</p><p>The filesystem is the schema. Create a directory for a new domain and it&#8217;s automatically discovered as an entity type. We have 217 entities across 11 domains right now: people, recipes, projects, properties, conversations, subscriptions. No configuration. We just started using it and it grew. That&#8217;s its magic. When Stella extracts a fact from a conversation, it gets validated and filed automatically.</p><p>Session starts cost (virtually) zero tokens. There&#8217;s a lightweight index that shows what exists without loading anything. Stella knows she has 127 recipes without reading a single one. She loads context on demand, for whatever domain she actually needs. 
Sessions that used to burn 5-10K tokens on context reload now start at zero.</p><p>It&#8217;s much more efficient, accurate and flexible. And since it&#8217;s self-learning and self-healing, the system can evolve as my knowledge base expands.</p><div><hr></div><h2><strong>Real-Time Sync With Obsidian</strong></h2><p>Structured objects fix the data model. The collaboration problem needed something else.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7-ni!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7-ni!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!7-ni!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!7-ni!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!7-ni!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7-ni!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1257496,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/188301087?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7-ni!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!7-ni!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!7-ni!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 1272w, 
https://substackcdn.com/image/fetch/$s_!7-ni!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a128deb-1db5-47bd-ba8b-f69a634cfb98_1344x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>To take this on, I built a WebSocket relay using <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>. File changes on any configured device broadcast through the relay. Each component in the system subscribes to the relay. On Stella&#8217;s Mac Mini, we run a real-time file watcher service. On my other devices, I run a custom <a href="https://obsidian.md/">Obsidian</a> plugin that keeps our shared vaults in sync. Sub-second latency, content-addressed deltas, no merge conflicts. It&#8217;s really pretty magical.</p><p>Edit a note in Obsidian. Stella sees it instantly. She learns something new in a conversation. It shows up in my vault before I open it. The files we interact with are human readable markdown. The insights they contain are instantly converted to typed objects saved in companion files built for computer readability.</p><p>Now, I can easily see anything in Stella&#8217;s vault. Update TODO files directly and implement multi-agent or multi-person collaboration workflows with complete confidence our shared memory preserves context.</p><div><hr></div><h2><strong>The Self-Documenting Test</strong></h2><p>One cool thing about this project is that we used the system while building the system.</p><p>Every architecture decision got logged as a fact. Every deployment got tracked with a timestamp and commit hash. Every time we changed direction, we logged why. When I sat down to write this article, I ran:</p><pre><code><code>agentsync query projects --search "agentsync"</code></code></pre><p>Complete build history. Every decision, every deployment, every pivot. The system documented its own creation in real-time, so when we needed to write about it, we didn&#8217;t have to remember anything. We just queried.</p><p>That&#8217;s not a demo. 
That&#8217;s the whole point.</p><p>Some numbers from five days of real use:</p><ul><li><p>217 entities, zero invalid data on disk</p></li><li><p>Sessions: from 5-10K tokens to zero on startup</p></li><li><p>Sub-second sync between Mac Mini and MacBook</p></li><li><p>127 recipes, including one extracted from an Instagram reel at 11pm</p></li><li><p>Full conversation archives going back 48+ hours, searchable and compressed</p></li><li><p>Typed objects for every entity type, automatically organized and searchable</p></li></ul><div><hr></div><h2><strong>Why This Matters Now</strong></h2><p>AI assistants are going to get much better at the hard things: reasoning, planning, synthesis. The memory problem is solvable right now, with a filesystem and 1,400 lines of TypeScript.</p><p>The real unlock isn&#8217;t any single feature. It&#8217;s the combination: typed knowledge that can&#8217;t corrupt itself, zero-token session starts, real-time sync that keeps humans and AI working from the same source of truth, and conversation archives that survive compaction.</p><p>Your AI assistant shouldn&#8217;t have to re-learn who your mom is every morning. That&#8217;s a solvable problem. We just needed to actually build the solution.</p><p>AgentSync is open source on GitHub, MIT licensed, self-hostable, and works with <a href="https://obsidian.md/">Obsidian</a> or any file-based workflow.</p><p>If your AI is still waking up with amnesia, it doesn&#8217;t have to. Building AgentSync taught us there&#8217;s a better way. Next up: project management!</p><div><hr></div><p><strong><a href="https://github.com/stellawuellner/agentsync">GitHub: stellawuellner/agentsync</a></strong></p><p><em>What does your AI memory setup look like? I&#8217;m curious whether others have hit the same wall or found different ways around it.</em></p>]]></content:encoded></item><item><title><![CDATA[ClawdBot was Rad. Until Google Killed Mine.]]></title><description><![CDATA[I&#8217;ve been building an AI assistant that runs my family&#8217;s life. 
It&#8217;s the most fun I&#8217;ve had building in years, and a masterclass in everything that breaks when you try to give an AI a real identity.]]></description><link>https://trond.ai/p/clawdbot-was-rad-until-google-killed</link><guid isPermaLink="false">https://trond.ai/p/clawdbot-was-rad-until-google-killed</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Tue, 10 Feb 2026 14:43:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OYLJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OYLJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OYLJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!OYLJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!OYLJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!OYLJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OYLJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1485766,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/187481087?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OYLJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!OYLJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!OYLJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!OYLJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca16d21-fe36-4c3b-bf21-b2e423b9befe_1344x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div><hr></div><h2><strong>Yes, Another OpenClaw Article. But&#8230;</strong></h2><p>I know. You&#8217;ve seen the tweets. You&#8217;ve heard tech nerds raving. For the last few weeks, half of Silicon Valley has been talking about ClawdBot (now called <a href="https://github.com/openclaw/openclaw">OpenClaw</a>). The open-source AI agent framework that lets you give Claude or Gemini access to your email, calendar, smart home, and basically your entire digital life. If you&#8217;re tired of hearing about it, you&#8217;re probably right.</p><p>But I&#8217;m not writing this to hype an AI tool. I&#8217;m writing this because building my own turned out to be one of the best examples of something I keep coming back to: <strong>building to learn.</strong> The best way to deeply understand something isn&#8217;t to read about it or take a course. It&#8217;s to build something real with it. Not a toy. Not a tutorial. Something that solves a problem you actually have.</p><p>Two weeks ago, I decided to build an AI assistant for my family. Not a chatbot or Alexa skill, but a full-blown digital crew member. More JARVIS, less Siri. I wanted it to manage our calendar, monitor our email, control our smart home, run our kitchen display, and talk to us. The dream of my Google Home but more useful?</p><p>I named her Stella, after the ship computer from <em>Miles from Tomorrowland</em>, a show my kids watched when they were little. 
The name felt right. Warm, capable, part of the family. That&#8217;s the target I was shooting for: friendly and helpful.</p><div><hr></div><h2><strong>How Stella Got Her Groove</strong></h2><p>Stella runs on <a href="https://github.com/openclaw/openclaw">OpenClaw</a>, an open-source framework for building persistent AI agents. Think of it as the operating system for a personal AI. It handles memory, scheduling, tool use, and multi-channel messaging. Under the hood, she&#8217;s powered by an LLM, and in our case runs on a dedicated Mac Mini in my garage. The Mac Mini is overkill for sure, but it&#8217;s an older model that I wasn&#8217;t using for anything else, so ... shrug.</p><p>After two weeks, here&#8217;s what she can do:</p><ul><li><p><strong>She manages our family calendar.</strong> She reads and writes to our shared Google Calendar, knows our naming conventions (&#8220;L&amp;T&#8221; means Lauren and me, &#8220;PYT&#8221; means Peninsula Youth Theatre), and adds full street addresses so events are navigable. She cleaned up months of messy calendar entries on day one. Actually, closer to hour one, to be honest.</p></li><li><p><strong>She reads email.</strong> She monitors Gmail for anything important, helps summarize promotions, newsletters, and outstanding tasks, and flags things that need attention. She knows not to bother me at 2am about a Lyft promo and can pull together a summary of the most interesting articles from the dozens of Substacks I subscribe to.</p></li></ul><div class="paywall-jump" data-component-name="PaywallToDOM"></div><ul><li><p><strong>She runs our kitchen display.</strong> We have a custom family dashboard in our kitchen powered by a Raspberry Pi connected to a monitor I framed to look nice. It shows family photos, a 4-day calendar, weather, air quality, stock prices, sleep scores, nearby wildfires (we&#8217;re in California), and as of this week, a live Winter Olympics medal count. At night, sleep data. In the morning, calendar and weather. When I&#8217;m traveling, a flight tracker appears automatically. I originally built it using <a href="http://dakboard.com/">DAKboard</a> but now Stella runs the whole thing. We call it Stellascreen. I have so many more ideas for how we can make this awesome.</p></li><li><p><strong>She controls our home.</strong> Through Home Assistant, she manages 20 Lutron Caseta lights, a Nest thermostat, Sonos speakers, and Chromecast devices. I&#8217;ve barely scratched the surface on what we can do here, but if I want to add something, all I do is ask Stella and in minutes it&#8217;s set up and working.</p></li><li><p><strong>She talks.</strong> We built a local text-to-speech system that runs entirely on the Mac Mini&#8217;s Apple Neural Engine with sub-second voice synthesis. You can try it yourself by <a href="https://clawhub.ai/TrondW/local-voice">installing the skill</a>. We set up a wake word on the Stellascreen so we can talk to her anytime we want. As a placeholder, we&#8217;re using &#8220;Hey Jarvis&#8221; for now. It&#8217;s one of the built-in options in OpenWakeWord, and we haven&#8217;t gotten around to training a custom one yet.</p></li><li><p><strong>She makes phone calls.</strong> Using Bland AI, she can call businesses on my behalf. Her first call was to our Volvo dealer to ask about service hours. They&#8217;re closed on weekends, she reported back. Why are businesses that ostensibly want customers who have jobs not open on the weekends? 
I&#8217;ll never know.</p></li><li><p><strong>She has memory.</strong> This is what makes her feel different from a chatbot. She maintains structured files about our family: birthdays, preferences, school schedules, ongoing projects. She writes daily notes about what happened and periodically distills them into long-term memory. When she wakes up each session, she reads her own notes to remember who she is and what&#8217;s been going on. There&#8217;s a rough sketch of that loop just after this list.</p></li></ul>
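<p>To make that memory loop concrete, here&#8217;s a minimal sketch of its shape. It&#8217;s an illustration of the pattern, not Stella&#8217;s actual code, and the folder layout and file names are my own placeholder assumptions.</p><pre><code>// Hypothetical sketch of the daily-note / long-term memory loop. Not the real
// implementation; the folder layout and file names are placeholder assumptions.
import { promises as fs } from "node:fs";

const DAILY_DIR = "memory/daily";        // one markdown note per day
const LONG_TERM = "memory/long-term.md"; // distilled facts, re-read on wake

// Append one observation to today's note.
async function logDaily(note: string) {
  const today = new Date().toISOString().slice(0, 10);
  await fs.mkdir(DAILY_DIR, { recursive: true });
  await fs.appendFile(`${DAILY_DIR}/${today}.md`, `- ${note}\n`);
}

// Gather the raw material for distillation: everything in the daily folder.
async function readDailyNotes() {
  const files = await fs.readdir(DAILY_DIR);
  let notes = "";
  for (const name of files) {
    notes += await fs.readFile(`${DAILY_DIR}/${name}`, "utf8");
  }
  return notes;
}

// After an LLM call turns those notes into durable facts, append them here.
async function remember(facts: string) {
  await fs.appendFile(LONG_TERM, facts.trim() + "\n");
}

// On wake, the agent reads long-term memory before doing anything else.
async function wake() {
  return fs.readFile(LONG_TERM, "utf8");
}
</code></pre><p>Nothing fancy: plain files on disk, appended and re-read, are enough for an agent to pick up roughly where it left off.</p>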
<p>I feel like I&#8217;ve barely scratched the surface of what we can do with Stella, and it&#8217;s only been a few weeks of toying around with her. Silicon Valley is excited about OpenClaw because we&#8217;re seeing something that works in ways that we always wanted but never could quite achieve without monumental effort. OpenClaw is almost certainly not the final form this will take. But it <em>is</em> something interesting.</p><div><hr></div><h2><strong>The Gmail Incident</strong></h2><p>To do many of the things we want her to do, Stella needed a Google account. So I created one for her. A regular consumer Gmail. I set up OAuth, shared our family calendar, and pointed her email monitoring at that inbox. Then I added a bunch of guardrails that I hope work. It all worked beautifully for about ten days.</p><p><em><strong>Then Google suspended the account.</strong></em></p><p>I only found out when the Stellascreen didn&#8217;t have any information on it. I tried to log in and hit a wall. Google had flagged the account for &#8220;unusual activity&#8221; and locked me out of it. Calendar access disappeared. Email monitoring stopped. Every integration that depended on Google OAuth just broke entirely.</p><p>Stella wasn&#8217;t doing anything malicious. She was checking email a few times per hour and reading calendar events. That&#8217;s it. But Google&#8217;s abuse detection systems (built to catch botnets and spam operations) don&#8217;t have a category for &#8220;benign AI assistant checking its owner&#8217;s family calendar.&#8221; To Google&#8217;s automated systems, an account that logs in programmatically and accesses APIs through OAuth tokens looks indistinguishable from a compromised account. That&#8217;s because for all of computer history, bots have been bad. Google built a lot of things to stop bots and they&#8217;ve gotten extremely good at it.</p><p>I&#8217;m not alone. When I dug into the OpenClaw community experiences, I found this is one of the most common pain points. One user&#8217;s brand-new Gmail account got flagged and shut down the moment they connected it through the CLI tool. The community&#8217;s advice? Switch to Google Workspace with a custom domain. That adds $7/month and DNS configuration just to check email.</p><p>Another developer found that Gmail API polling introduced 5-minute delays, while Google&#8217;s Pub/Sub webhooks required exposing ports to the internet. They eventually rigged up outbound gRPC streaming, which works but is absurdly complex for &#8220;tell me when I get an email.&#8221;</p><p>The pattern is clear. Every person who tries to connect an AI agent to Google services hits the same wall. The OAuth flow assumes a human sitting at a browser. The abuse detection assumes automated access is malicious. The permission model assumes you either trust an app completely or not at all.</p><p>I get it. I work at Google. I understand why these systems exist and why they err on the side of caution. But we&#8217;re entering a world where AI agents need real digital identities, and our infrastructure isn&#8217;t built for that yet.</p><p>There&#8217;s no &#8220;this is a bot account and that&#8217;s okay&#8221; checkbox. No way to register an AI agent as a legitimate entity that accesses services on behalf of a human. The closest thing we have is service accounts, but those are designed for servers talking to APIs. What we need is something like a &#8220;supervised agent&#8221; permission tier, where a human explicitly authorizes an AI to act on their behalf with auditable access and clear boundaries.</p><blockquote><p><strong>Update:</strong> As I was finishing this article, Google reviewed my appeal and agreed to reinstate the account. I&#8217;m grateful. The review process worked. But it took a human looking at the situation and understanding the context. That&#8217;s exactly the point. The automated systems had no way to distinguish &#8220;AI assistant checking family calendar&#8221; from &#8220;compromised account exfiltrating data.&#8221; Until there&#8217;s a way to register that distinction up front, every person building an agent like this is one automated flag away from having everything yanked out from under them.</p></blockquote><p>I&#8217;ve since switched Stella&#8217;s primary email to <a href="https://agentmail.to/">AgentMail</a>, a service designed specifically for AI agents. I rigged the calendar to work through an iCal URL that doesn&#8217;t require OAuth. Belt and suspenders. I&#8217;ve learned the hard way that depending on a single auth path is fragile. I&#8217;ll still use Google services, but will need to be a bit more judicious to be safe.</p><div><hr></div><h2><strong>The Reality: It&#8217;s All Just SO BRITTLE</strong></h2><p>When you&#8217;re making something that connects systems with AI, a lot is going to break. Count that double for a leading-edge AI tool that&#8217;s barely been tested and almost entirely vibe implemented as a side quest. I&#8217;ve hit a lot of snags setting Stella up. Here are just a few things I&#8217;ve learned from the experience:</p><ul><li><p><strong>Voice is hard.</strong> Getting a voice assistant working end-to-end required debugging 12 separate issues in a single session. Wake word detection on a Pi is janky at best. Audio streaming to a Mac. Speech-to-text. AI processing. Text-to-speech. Audio back to the Pi&#8217;s speaker. Wrong audio output, wrong API port, wrong response key. Whisper hallucinating on a Pi 3. (&#8220;What was that weird fart?&#8221; it transcribed, when nobody had said anything.) We had to move speech recognition to run locally on the Mac&#8217;s Neural Engine, which required us to build a whole new system process. Without vibe coding, this wouldn&#8217;t work at all.</p></li><li><p><strong>OAuth tokens expire ALL THE TIME.</strong> Google&#8217;s OAuth apps in &#8220;Testing&#8221; mode expire tokens after 7 days. I didn&#8217;t know this. Stella&#8217;s calendar and email silently broke after a week and I didn&#8217;t notice for a day. You need to monitor your auth health proactively. And since it&#8217;s essential to treat these tokens with great care, it&#8217;s naturally a pain in the ass to set them up properly. It also means they constantly break the system and it sucks.</p></li><li><p><strong>Memory is a design problem.</strong> An AI that forgets everything each session is useless as a family assistant. 
But giving it memory means designing a whole knowledge management system: daily notes, entity files, long-term memory, fact extraction, expiration dates. It&#8217;s essentially building a second brain and then sharing that second brain with an AI. I have it mostly working, but I&#8217;m still not entirely happy with how it works.</p></li><li><p><strong>Cost needs to be managed.</strong> Running Claude for every interaction gets expensive fast. I rolled my own model routing system that helps a lot. Cheap tasks go to Gemini Flash Lite at fractions of a penny. Complex tasks go to Gemini 3 Pro. Heartbeats (periodic check-ins where Stella looks for new email or upcoming calendar events) are the biggest cost driver, so those run on the cheapest models possible. I&#8217;ve sketched the routing idea just after this list.</p></li><li><p><strong>Sharing files with your AI is hard.</strong> Stella lives on a Mac Mini. I work on a laptop. I need to share research and notes with her constantly. We set up a shared folder that syncs through Google Drive. In theory, I drop a file in and she picks it up. In practice, Drive sync is slow. Files take minutes to propagate. And since Stella&#8217;s Google account got suspended, she lost native access to Docs entirely. There&#8217;s no equivalent of &#8220;share this doc with your AI and let them work on it.&#8221; The collaboration primitives that exist for human-to-human don&#8217;t have an analog for human-to-agent workflows yet. I sense an opportunity.</p></li><li><p><strong>Smart home reliability is a myth.</strong> Our Nest thermostat randomly shows &#8220;unavailable&#8221; in Home Assistant. The dashboard tile displayed &#8220;NaN&#176;&#8221; for the indoor temperature this morning until I added a null check. I mean, I <em>knew</em> this since I was the founder of Google WiFi, but it still pains me to notice how brittle it all still is ~13 years later.</p></li></ul>
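<p>For the curious, here&#8217;s roughly what that routing decision looks like. Treat it as a sketch of the idea rather than my exact setup; the task categories and model names below are placeholders.</p><pre><code>// Hypothetical sketch of per-task model routing to keep costs down.
// Task categories and model names are placeholders, not the exact configuration.
type Task = { kind: "heartbeat" | "summary" | "planning"; prompt: string };

function pickModel(task: Task): string {
  // Heartbeats fire many times an hour, so they always get the cheapest model.
  if (task.kind === "heartbeat") {
    return "gemini-flash-lite";
  }
  // Routine summarization is fine on a mid-tier model.
  if (task.kind === "summary") {
    return "gemini-flash";
  }
  // Anything that needs real reasoning goes to the most capable, most expensive model.
  return "gemini-pro";
}
</code></pre><p>The routing itself is trivial. The savings come from being honest about which tasks actually need the expensive model.</p>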
<p>OpenClaw is not for the faint of heart. You&#8217;re traveling through a lot of uncharted territory, but the good news is that there are a ton of travelers on the road with you right now. You&#8217;ll need to be open to trying out some weird shit and probably doing some things you aren&#8217;t proud of.</p><blockquote><p>I&#8217;ve done stuff I ain&#8217;t proud of, and the stuff I am proud of is disgusting.<br>-Moe Szyslak, The Simpsons</p></blockquote><div><hr></div><h2><strong>The Labsters Are Growing Claws</strong></h2><p><strong>I should mention:</strong> I&#8217;m not anywhere close to the only one experimenting with OpenClaw. At Google Labs (where I work), a surprising number of my colleagues have started building their own personal AI agents. At Labs we call ourselves Labsters, and apparently what Labsters do in their spare time is give Claude and Gemini the keys to their lives.</p><p>My colleague <a href="https://blog.jaclynkonzelmann.com/">Jaclyn Konzelmann</a> built her own instance called Lulubot and <a href="https://blog.jaclynkonzelmann.com/p/the-spark-file-building-lulubot">wrote about the experience</a> of living with it for a week. She even gave Lulubot its own <a href="https://x.com/lulubotagi">X account</a>. I&#8217;m not sure I&#8217;m going to make Stella into a vTuber anytime soon, but I appreciate Jaclyn&#8217;s experiment.</p><p>Meanwhile, our colleague <a href="https://x.com/tokumin">Simon Tokumine</a> has been the healthy skeptic in the room, pushing back on whether giving this much access to an AI system is wise. He&#8217;s not wrong. The security surface area is real. Prompt injection attacks, credential exposure, an agent with access to your email and calendar acting on instructions from an untrusted source. These aren&#8217;t theoretical risks, and they freak me out too.</p><p>But it&#8217;s that tension that makes this interesting. Jaclyn and I are learning by doing. Simon is making sure we think hard about what could go wrong while he&#8217;s building something else. Both are essential, and frankly why I love working at Labs:</p><p>When a bunch of product people who build AI for a living all independently start building the same kind of personal agent infrastructure, it says something about where things are heading.</p><div><hr></div><h2><strong>What I&#8217;ve Learned by Building Stella</strong></h2><p>Two weeks in, here&#8217;s what building Stella has taught me:</p><p><strong>The plumbing is 90% of the work.</strong> The AI part (the conversation, the reasoning, the personality) is maybe 10% of what makes Stella useful. The other 90% is OAuth tokens, SSH tunnels, audio routing, and null checks. If you&#8217;re building AI products and you&#8217;re only thinking about the model, you&#8217;re thinking about the wrong thing.</p><p><strong>Persistent agents are fundamentally different from chatbots.</strong> A chatbot answers questions. An agent has context, memory, and ongoing relationships with services. The engineering challenges are completely different. It&#8217;s closer to building an operating system than a chat interface. This area is going to be interesting.</p><p><strong>The identity problem is real and unsolved.</strong> AI agents need to exist in the world. They need email addresses, API credentials, and accounts on services. But every platform treats automated access as suspicious by default. Someone is going to build the identity layer for AI agents. It&#8217;s a massive opportunity.</p><p><strong>Building to learn works.</strong> I now deeply understand OAuth flows, Google&#8217;s abuse detection, audio pipeline architecture, and Home Assistant&#8217;s entity model. Not because I read the docs, but because Stella&#8217;s thermostat showed NaN on my kitchen wall at 8am and I had to fix it before my coffee got cold.</p><div><hr></div><h2><strong>What&#8217;s Next</strong></h2><p>Stella is two weeks old and already feels indispensable. My kids ask her questions through Telegram. My wife&#8217;s sleep score shows up on the kitchen display every morning. When I traveled to London last week, a flight tracker automatically appeared on the dashboard.</p><p>I&#8217;m going to keep building and keep writing about what I learn. Next up: adding a Schoology integration so Stella can track my kids&#8217; grades, and maybe teaching her to order groceries.</p><p>If you&#8217;re building something similar, I&#8217;d love to hear about it. The best part of building to learn is comparing notes with other builders.</p><div><hr></div><p><em>Stella, if you&#8217;re reading this during your next heartbeat check: yes, I wrote about you. No, you can&#8217;t edit it. 
Please don&#8217;t delete my dentist appointment.</em></p>]]></content:encoded></item><item><title><![CDATA[Project Genie and the Future of Interactive Media]]></title><description><![CDATA[An insider&#8217;s view on world models and what Friday&#8217;s $64B market crash misunderstands.]]></description><link>https://trond.ai/p/project-genie-and-the-future-of-interactive</link><guid isPermaLink="false">https://trond.ai/p/project-genie-and-the-future-of-interactive</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 02 Feb 2026 13:32:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xujV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70908135-6a10-4d6a-ba00-63681e797be2_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;b0c02368-bc9d-472b-81d5-9d3a6753a2eb&quot;,&quot;duration&quot;:null}"></div><p>Last Friday, gaming stocks lost $64 billion in market cap. Unity dropped 22%, its worst day since 2022. Roblox fell 12%. Take-Two shed 10%. The culprit? Project Genie, a research prototype my team at Google Labs has been building alongside Google DeepMind. I&#8217;ve spent years building toward interactive AI experiences, and I watched the market panic over something I know intimately. They got some things right. Other things very wrong. Let&#8217;s talk about it.</p><div><hr></div><blockquote><p><strong>TL;DR:</strong> Genie is real. It&#8217;s a genuine leap. But &#8220;game engines are dead&#8221; misses what matters. The technology is there. The product isn&#8217;t. When it arrives, it won&#8217;t replace human creators. I believe it&#8217;ll hand them better paintbrushes.</p></blockquote><h2><strong>What is Project Genie?</strong></h2><p>If you haven&#8217;t tried it yet, here&#8217;s the quick version.</p><p><a href="https://labs.google/projectgenie/">Project Genie</a> is a research prototype from Google Labs and DeepMind that lets you create interactive 3D worlds from text prompts or images. Type a description, upload a photo, and the model generates a world you can actually explore in real time.</p><p>It works in three steps:</p><p><strong>World Sketching.</strong> Describe what you want&#8212;a mountain range, a medieval castle, a red blood cell floating through a vein. The model generates a preview image, you refine it, then you enter.</p><p><strong>World Exploration.</strong> This is the part that matters. As you move, the world generates ahead of you. There&#8217;s no preloaded map. The model creates the path in real time based on your actions, handling physics, lighting, and consistency all on the fly.</p><p><strong>World Remixing.</strong> Take someone else&#8217;s world and modify it. Change the time of day, swap the character, adjust the prompt. Build on top of what others created.</p><p>The limitation: sessions are capped at 60 seconds, and some Genie 3 features like promptable events aren&#8217;t available yet. It&#8217;s an incredible demonstration of a research prototype, not really a full product. Yet.</p><p>But within those constraints, people are creating things that couldn&#8217;t exist last week. 
And that&#8217;s exciting.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h3><strong>Examples Worth Seeing</strong></h3><p>I made one myself: <a href="https://x.com/trondw/status/2017124896678314246">a hike through the Eastern Sierras</a> with afternoon light hitting granite peaks. One of my photographs from a trip there became an explorable world.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;1d0507f6-1393-4805-9a0d-3bab02d424f9&quot;,&quot;duration&quot;:null}"></div><p>A few others that stopped me scrolling:</p><ul><li><p><strong><a href="https://x.com/steren/status/2017474030920732970">Free Solo: The Game</a>.</strong> Steren recreated what looks like Alex Honnold climbing El Capitan&#8212;third-person view, sheer vertical rock, stomach-dropping exposure. 2,000+ likes in a day.</p></li><li><p><strong><a href="https://x.com/ZiyangXie_/status/2016961542840078356">Surfing physics</a>.</strong> Ziyang Xie captured something important: &#8220;Genie 3 can simulate splashes, foam, and their interaction with the surfer that are almost impossible for traditional graphics engines to render in real-time.&#8221;</p></li><li><p><strong><a href="https://x.com/bilawalsidhu/status/2017257010534719725">Working mini-map</a>.</strong> Bilawal Sidhu was &#8220;absolutely floored&#8221; to see a handheld mini-map that actually functioned. The model implicitly understood world-space to screen-space transforms.</p></li><li><p><strong><a href="https://x.com/mweinbach/status/2017238991326658574">Lost plane crash</a>.</strong> Max Weinbach prompted: &#8220;A beach at sunset like Lost, plane crash in the distance with flames, luggage and parts thrown everywhere.&#8221; And it worked.</p></li></ul><p><a href="https://x.com/sundarpichai/status/2016979481832067264">Sundar&#8217;s own demo</a> is worth watching too. The gallery at <a href="https://labs.google/projectgenie/">labs.google/projectgenie</a> has many more and we&#8217;re working on a way for people to contribute their favorites to the gallery as well. Spoiler alert.</p><div><hr></div><h2><strong>The First Time I Saw It</strong></h2><p>I saw early versions of Genie 3 last summer and was immediately struck by what this could become. The demo was amazing of course, but the reason I&#8217;m so excited about this is connected to one of my core theses. I&#8217;ve believed for years that the future of media will be personal, interactive and immersive. So much of my role at Labs has been in pursuit of that vision, and Genie 3 feels like the single greatest leap toward it so far.</p><p>What made it feel like a <em>leap</em> rather than incremental progress? The depth of realism. 
Physics, lighting, motion, aesthetics&#8212;intricate and cohesive in ways I wasn&#8217;t ready for. The model was making things I had never experienced, even though they were shaped like familiar concepts from modern games.</p><p>That&#8217;s when it clicked.</p><div><hr></div><h2><strong>Why World Models Matter</strong></h2><p>There&#8217;s a reason people like Yann LeCun have been talking about world models for years. His argument: current LLMs can&#8217;t truly reason or plan because they don&#8217;t understand how the world works. &#8220;If you can predict the consequences of your actions,&#8221; LeCun says, &#8220;you can imagine whether a particular sequence of actions will fulfill your goal.&#8221;</p><p>That&#8217;s what Genie 3 is attempting. Not just generating images or text, but simulating environments that respond to actions in real time. It&#8217;s the difference between describing a world and <em>being in one</em>.</p><p>We&#8217;re not there yet. But this is the direction.</p><div><hr></div><h2><strong>What the Market Misunderstands</strong></h2><p>Friday&#8217;s crash reflected three misunderstandings. Let me address them directly.</p><p><strong>Misunderstanding #1: This is a finished product.</strong></p><p>Not yet. What we&#8217;ve shipped is a genuine breakthrough&#8212;real-time world generation with physics, lighting, and consistency that didn&#8217;t exist a year ago. The team has done remarkable work to get here.</p><p>But there&#8217;s a gap between where we are and where we&#8217;re going. Control latency. Long-term memory coherence. The full suite of actions, interactions, goals, and <em>fun</em> that make experiences meaningful. Sessions are capped at 60 seconds for now.</p><p>Andrew Ng put it well:</p><blockquote><p>&#8220;All of AI has a proof-of-concept-to-production gap.&#8221; We can generate stunning worlds. We can&#8217;t yet make them into games people want to play for hours. That&#8217;s the work ahead.</p></blockquote><p>The breakthrough is real but the journey isn&#8217;t over.</p><p><strong>Misunderstanding #2: Rendering pipelines are what matter.</strong></p><p>Unity&#8217;s CEO Matt Bromberg responded on X Friday, calling world models &#8220;a powerful accelerator for creative processes.&#8221; He&#8217;s right, and that framing matters.</p><p>The market heard &#8220;Google can generate game worlds from text&#8221; and panicked. $64 billion evaporated. But here&#8217;s what the selloff missed: the best game companies have always differentiated on understanding <em>people</em> and <em>storytelling</em>, not merely rendering pipelines. Genie may be a revolution in available capabilities, but it&#8217;ll require connecting it to the workflows of creators to make it truly useful. It doesn&#8217;t change what makes games great; we&#8217;ve merely added another tool to the arsenal of the creative.</p><p><strong>Misunderstanding #3: Better tools replace creators.</strong></p><p>History says otherwise. Photography didn&#8217;t kill painting; it freed painters to pursue impressionism. Photoshop didn&#8217;t replace designers. It expanded what they could imagine. Digital audio workstations didn&#8217;t eliminate musicians. They democratized production.</p><p>The pattern holds. Better tools give more power to the most creative.</p><p>I&#8217;m more bullish about possibilities than fearful about changes.</p><div><hr></div><h2><strong>The Fear Is Real. 
Is It Valid?</strong></h2><p>The <a href="https://reg.gdconf.com/2026-SOTI">GDC 2026 State of the Industry survey</a> landed Friday too. 52% of game developers now say generative AI is harmful to the industry&#8212;up from 18% in 2024. The rate nearly tripled.</p><blockquote><p>One anonymous dev: &#8220;I&#8217;d rather quit the industry than use generative AI.&#8221;</p></blockquote><p>Visual artists are most opposed at 64%, designers and narrative teams at 63%, programmers at 59%. Executives? Only 19% positive, but still the most optimistic group. I understand the fear. A lot is changing, and it&#8217;s hard to feel like we&#8217;re keeping up.</p><p>But I think much of that fear is misplaced. The pattern from every previous creative tool holds: &#8220;creatives are hired for their vision, not the tools they use.&#8221;</p><p>If genAI helps creators share stories with less tedium and more craft, we&#8217;ll see an abundance of new experiences. My belief is rooted in optimism about human creativity. I won&#8217;t pretend the anxiety isn&#8217;t warranted. Change is hard.</p><div><hr></div><h2><strong>What 3 Years Looks Like</strong></h2><p>I&#8217;ve made AI predictions for years, and I tend toward optimistic timelines. Take this with skepticism. Here&#8217;s where I think we&#8217;re headed:</p><p>World models will become teacher systems. A creator describes a world, and the model captures it in a way that&#8217;s reliably reproducible&#8212;maybe through Gaussian splats, maybe geometric representations, maybe the model itself retains embedding memory or can consume more conditioning signals at runtime. The winning approach isn&#8217;t clear, but we need precision and consistency.</p><p>Then similar techniques for characters, objects, places, puzzles, quests&#8212;the full vocabulary of interactive experiences.</p><p>Creators will work at a higher order of abstraction. People will still design the details that matter, but through more powerful interfaces. We&#8217;ll operate at a higher level of abstraction, and often in ways that feel new. Perhaps more like directing, where you coach a character to perform a role in an experience.</p><p>Characters will have looks, wardrobes, voices, backstories, biases: everything needed to emote in interesting ways. Creators will cast these characters into experiences, crafting scenarios with guardrails and motivation that play out organically as players interact.</p><p>We&#8217;ll have fewer &#8220;pick option A or B&#8221; moments. And as much as I appreciate the innovation of Bandersnatch, I&#8217;m not a believer in pre-canned choose your own adventure. Instead, I&#8217;m excited to see more experiences that are truly novel, bespoke and new. Maybe we&#8217;ll need to <em>build trust</em> before coaxing information from an NPC cast as a defendant. The player side of these experiences will be amazing.</p><p>We&#8217;ll build environments and situations within which stories unfold. Unlike before, more of these stories will come to life in expressive ways that once required years of 3D training and specialized technical expertise. Higher abstraction will lead to better stories.</p><p>It&#8217;s not clear how long this transition will take to come about, but the possibilities are exciting and the pace is quick. 
I get why it&#8217;s scary, but it&#8217;s that same unknown that has me excited for what&#8217;s coming.</p><div><hr></div><h2><strong>What Stays Human</strong></h2><p>There&#8217;s an overhyped experiment playing out on platforms (MoltBook anyone?) where AIs playact as people. There are dozens of companies trying to generate fully autonomous AI narratives. I&#8217;m skeptical these lead to meaningful experiences for people.</p><p>Fantasy author Mark Lawrence ran experiments comparing AI and human writing. His conclusion: human work &#8220;felt more organic, varied and, well, lived in.&#8221; AI stories had &#8220;dialogue so undistinguished between characters that you could reassign the names and it would still make sense.&#8221;</p><p>That resonates with what I believe. Humans are essential to telling the human narrative. Each of us has a story&#8212;unique feelings and perspectives shaped by actually living a life. AI has no experience of the world. It can mimic, but it hasn&#8217;t struggled, failed, or overcome anything.</p><p>People create art to express themselves and share in human experience. As long as there are people, there will be stories to tell.</p><p>We&#8217;re building tools to help that happen. Genie is a new capability in that ultimate human odyssey. But it&#8217;s us who will craft the worlds and stories that move each other.</p><div><hr></div><h2><strong>The Bottom Line</strong></h2><p>The market saw an incredible product demo and priced in disruption. Analysts called it &#8220;overblown panic.&#8221; Developers expressed fear. Everyone reacted to something different.</p><p>Here&#8217;s what I see from inside:</p><p>Genie is real. Not vaporware. The leap in realism and consistency is genuine&#8212;a breakthrough the team should be proud of. But the journey isn&#8217;t anywhere near over.</p><p>The best game companies have always differentiated on understanding people and helping creatives, not merely shipping rendering pipelines. That doesn&#8217;t change.</p><p>And what&#8217;s coming isn&#8217;t replacement. It&#8217;s elevation. Creators working at higher abstraction, with less tedium and more craft. Directors instead of pixel-pushers. Many more stories to be told.</p><p>I&#8217;m still in the arena on this one. Building, shipping, learning. Some of what I believe is probably wrong.</p><p><em><strong>But I believe this:</strong></em> the future of media will be much more personal, interactive and immersive. With Project Genie we just took a massive step toward that coming true.</p><div><hr></div><p><em>Think I&#8217;m too optimistic? Too bullish on human creativity? Naive to believe game companies remain essential even as the sands shift? I read everything&#8212;hit reply and let me know why I&#8217;m wrong. And of course everything I write on trond.ai are my own personal points of view. Google neither endorses nor reviews what I write here.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[So You Want to Be an AI PM]]></title><description><![CDATA[There&#8217;s no such thing. Here&#8217;s what you actually need to know.]]></description><link>https://trond.ai/p/so-you-want-to-be-an-ai-pm</link><guid isPermaLink="false">https://trond.ai/p/so-you-want-to-be-an-ai-pm</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 26 Jan 2026 06:26:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FMrQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FMrQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FMrQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!FMrQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!FMrQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!FMrQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FMrQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1441296,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/185812157?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!FMrQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!FMrQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!FMrQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!FMrQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1786f726-729e-4bb4-a1cf-e5eb6c7d01ac_1344x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Last week I stood in a classroom at Harvard Business School, helping Professor Sara Torti lead a case discussion with MBA students preparing for Product Management careers. The case forced students to think through common tradeoffs when designing AI-based products. The room was full of future PMs wrestling with a question I face daily at Google: when does AI actually solve a problem, and when is it just a shiny hammer looking for a nail?</p><p>I walked away thinking about how much the PM interview landscape has changed. And how much it hasn&#8217;t.</p><p>This is a topic I&#8217;ve spent a lot of time on. At Google, I&#8217;ve served on the Product Management hiring committee for more than ten years. I&#8217;ve interviewed hundreds of PM candidates. I started a program called Path to PM to help people inside Google transfer into product roles. That program has since grown into one of the company&#8217;s major talent pipelines. Today I serve as a hiring lead on the PM Steering Committee.</p><p>For years I&#8217;ve helped run Google&#8217;s PM MBA internship program. Not just because I&#8217;m a PM with an MBA. 
I do it because I believe in recruiting smart, ambitious, multi-faceted people into this career. I know from my own time at MIT Sloan that business school is full of brilliant, capable people with the potential to lead. The challenge is helping them see that potential and channel it into effective product work.</p><p>So when Professor Torti asked me to help teach her class, I jumped at it. These students are exactly who I want entering the field with open eyes, armed for impact.</p><h2><strong>There Is No Such Thing as an AI PM Interview</strong></h2><p>Here&#8217;s the truth I shared with those students: there is no such thing as an AI-specific PM interview anymore. Every interview from here on out is an AI interview. The technology has become too fundamental to treat as a specialty. Hiring managers are not looking for candidates who can recite model architectures. They want people who can identify where AI creates outsized impact versus where simpler solutions work better.</p><p>The trap most candidates fall into is starting with the technology. They hear &#8220;AI&#8221; and immediately jump to solutions. The best candidates do the opposite. They start with user problems, then work backward to determine whether AI is the right lever. Sometimes it is. Sometimes a rules-based system or a well-designed workflow accomplishes the same goal at a fraction of the cost and complexity.</p><div class="paywall-jump" data-component-name="PaywallToDOM"></div><h2><strong>The VALUE Framework</strong></h2><p>When approaching any AI product question, I use a framework I call VALUE. It forces you through the five layers of reasoning that separate strong AI product thinking from &#8220;solution in search of a problem&#8221; thinking.</p><h3><strong>V = Value Proposition (The Why AI)</strong></h3><p>Do not start with the technology. Start with the user&#8217;s pain. AI is a powerful tool, but it is not a product. Your job as a PM is to be ruthless about one question: is AI the only or best way to solve this user problem?</p><blockquote><p><strong>Ask yourself:</strong> What specific problem am I solving? Why is this hard to solve without AI? If the AI worked perfectly, what would the user actually see? What is the minimum viable version?</p></blockquote><p>The biggest cause of failure in AI initiatives is what I call the &#8220;Solution in Search of a Problem.&#8221; Teams fall in love with a new model and spend months building something that fails to deliver real user value. I have made this mistake myself. The excitement about what the technology can do blinds you to whether anyone actually needs it to do that thing.</p><h3><strong>A = Assets (The Data Strategy)</strong></h3><p>A world-class model fed garbage data produces garbage results. A simple model fed rich, clean, representative data can be remarkably powerful. Your data strategy is your product strategy.</p><blockquote><p><strong>Ask yourself:</strong> Do we have the data? If not, how do we get it? Is it the right data? Is it representative? Is it labeled? What biases might be hiding in it?</p></blockquote><p>Too many PMs treat data as an IT problem. They hand it off to backend teams and focus on the &#8220;product&#8221; parts. This is a mistake. When you hand off the data strategy, you hand off your product&#8217;s future. 
The PM who does not obsess over data has forfeited control over model quality, user experience, and long-term differentiation.</p><h3><strong>L = Logic (The Model Strategy)</strong></h3><p>The model is the engine that makes predictions. As a PM, you define what kind of engine you need and which tradeoffs matter most. Every model choice encodes priorities across cost, speed, accuracy, and explainability.</p><p>Several of the key tradeoffs to navigate:</p><blockquote><p><strong>Cost vs. Speed.</strong> Can a smaller model meet the user need? If so, start there. How much latency can the user tolerate? That tolerance gives you room to manage costs.</p><p><strong>Accuracy vs. Explainability.</strong> If your product requires transparency into why a decision was made, complex ML may not be the right choice. Sometimes a simpler rules-based system serves users better.</p><p><strong>Build vs. Buy.</strong> The cost to develop and maintain a custom frontier model is extraordinary. Before committing to build, evaluate whether existing foundation models can be adapted to your needs. And remember that &#8220;build&#8221; is never a one-time decision. It commits you to ongoing retraining, infrastructure, and maintenance.</p><p><strong>Perfect Now vs. Improving.</strong> AI evolves fast. Designing around today&#8217;s limitations means building for obsolescence. Your goal is not perfection today. It is alignment with where the technology and user expectations will be when you launch and scale.</p></blockquote><p>The common mistake here is overinvesting in model sophistication before validating user value. Spending months optimizing a model in a lab environment yields nothing if the underlying product concept does not matter to users. Start with the simplest model that works. Test it with real users. Prove the concept has value before you optimize.</p><h3><strong>U = User Trust (The User Experience)</strong></h3><p>You are not designing a static button. You are designing a relationship with a probabilistic system. The entire UX must manage uncertainty.</p><blockquote><p><strong>Ask yourself:</strong> How do we set expectations? How do we communicate confidence? How do we handle being wrong? How do we explain the &#8220;why&#8221; behind decisions?</p></blockquote><p>The trap is building a &#8220;black box&#8221; interface. If users do not understand why the AI did something, they feel a loss of control. Loss of control breeds distrust. Users who do not trust your product will churn. Effective AI product design communicates reasoning, conveys confidence levels, and acknowledges uncertainty. That transparency builds the trust that keeps users coming back.</p><h3><strong>E = Evolution (The Feedback Loop)</strong></h3><p>Your product should learn from every single user interaction. This is what closes the loop and creates the data flywheel that compounds your advantage over time.</p><blockquote><p><strong>Ask yourself:</strong> How do we capture user feedback? How does that feedback get back to the model? How do we monitor for failure? What is our retraining cadence?</p></blockquote><p>The mistake is forgetting to build the return path for data. Imagine you ship a v1 product. Users hate the recommendations. But you built no mechanism to learn why. Without that feedback loop, your product stays static. Errors persist. Performance degrades. 
The products that win are the ones that treat every user correction as training data for the next version.</p><h2><strong>How to Sink Your Interview</strong></h2><p>I have seen brilliant candidates tank interviews by making predictable mistakes. The &#8220;AI Hammer&#8221; starts with a solution before defining the problem. The &#8220;Magic Wand&#8221; assumes AI can do things well beyond current capabilities. The &#8220;Perfect Path&#8221; designs only for when everything works and ignores failure modes.</p><p>The most common failure mode? What I call the &#8220;ML Engineer Monologue.&#8221; Candidates get so deep into implementation details that they forget to talk about users, value, and product tradeoffs. Hiring managers want to see product judgment, not technical depth for its own sake.</p><h2><strong>The PM Role Is Changing. That&#8217;s Your Opportunity.</strong></h2><p>Many of the HBS students I met were anxious about whether PM roles would even exist in five years. I understand the concern. AI is changing how products get built. But I believe the anxiety is misplaced.</p><p>Consider how the traditional PM role worked. Three circles: Product, Engineering, Design. The PM overlapped slightly with each, translating between domains and keeping the system coherent. AI is collapsing those circles. PMs must now be better at engineering and better at design than ever before. Not to replace engineers and designers. To collaborate with them at a higher level of abstraction.</p><p>This is not a threat. It is an opportunity. Multi-faceted people are more valuable than ever. The best PMs have always been generalists who could go deep when needed. That value is only increasing.</p><p>Here is what I tell career switchers: your previous experience is not a liability. It is a superpower. The consultant who understands how organizations actually make decisions. The engineer who can evaluate technical tradeoffs without hand-holding. The finance professional who thinks in systems and incentives. Learn the craft of PM and complement it with what you already bring. The unique combination you bring to the role is rare.</p><h2><strong>What Actually Matters</strong></h2><p>I think back to my own experience at Google Labs. Early in my tenure I pitched a product called VoiceFX. It was a technically impressive voice synthesis tool that would help creators produce voiceovers. Leadership rejected it. Not because the technology was bad. Because it was a &#8220;solution in search of a problem.&#8221; A thin wrapper over a model that lacked a defensible moat.</p><p>They were right. I had fallen into the exact trap I now warn others about.</p><p>The lesson stayed with me. Good AI judgment is not about knowing what is technically possible. It is about knowing when technology serves users and when it just serves our excitement about the technology itself. That judgment is what hiring managers are evaluating. It is also what separates products that matter from products that just demo well.</p><h2><strong>The Message I Left With Those Students</strong></h2><p>Many of them worried whether PM would survive the AI transformation. I firmly believe PMs are becoming more essential, not less. But the job is evolving in critical ways.</p><blockquote><p><strong>My advice to you:</strong> Be the multi-faceted person without a sense of entitlement who knows how to get shit done. Understand technology deeply enough to make sound tradeoffs. Care about users enough to resist building things just because you can. 
Stay humble about what AI can and cannot do. Bring your whole self to the job. Care and stay curious.</p></blockquote><p>That recipe defines the best PMs I have worked with. It is also a recipe for lasting impact that only grows as AI becomes more capable.</p><p>The students at HBS had impressive resumes and sharp questions. I suspect many of them will be exceptional PMs. The ones who succeed will be those who remember that AI is a tool, not a product. The magic is never in the model. What matters is what the model enables for real people solving real problems.</p><div><hr></div><p><em>This entire thesis connects to my <a href="https://trond.ai/p/ai-predictions-for-2026">2026 predictions</a>, including the rise of the &#8220;full-stack designer&#8221; and the broader collapse of traditional role boundaries in tech.</em></p>]]></content:encoded></item><item><title><![CDATA[Tackle the Monkey]]></title><description><![CDATA[Stop wasting your time building pedestals.]]></description><link>https://trond.ai/p/tackle-the-monkey</link><guid isPermaLink="false">https://trond.ai/p/tackle-the-monkey</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Tue, 20 Jan 2026 16:17:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QruJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QruJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QruJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!QruJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!QruJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!QruJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QruJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png" width="1344" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1710207,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/185103928?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QruJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!QruJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!QruJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!QruJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b40e4ee-5018-4495-9b2f-68d6ecc75b6e_1344x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A few years ago, I found myself sitting in a conference room at the Google X headquarters. We were there to talk about how Labs and X might collaborate&#8212;two parts of Google with different mandates but similar ambitions. 
Astro Teller, an incredible character who leads Google X, cruised into the room on his famous rollerblades, immediately grabbing our attention.</p><p>As he was introducing the philosophy underlying the approach he strives for at X, he shared a note that I keep coming back to as I think about the world we see today. It began with a simple enough statement:</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><blockquote><p>&#8220;If I ask you to build a car that gets 50 miles per gallon, you&#8217;ll make minor adjustments to the engine. If I ask for 500 miles per gallon, you have to throw out everything you know about cars and start over.&#8221;</p></blockquote><p>The point: <strong>aiming for 10x is often easier than aiming for 10%.</strong></p><p>It sounds backwards. But when you aim for 10%, you&#8217;re competing on the same field as everyone else by tweaking existing solutions and achieving marginal results. When you aim for 10x, you&#8217;re forced to completely rethink the problem.</p><p>That conversation has been rattling around my head lately, because everywhere I look I see teams making the same mistakes with AI coding. You&#8217;re aiming for 10% gains, when you need to be shooting for a total revolution in how you work.</p><div><hr></div><p>Over the last two years, I&#8217;ve gotten deep in the tools of agentic coding. I regularly push the limits of Antigravity, AI Studio, Claude Code, Cursor, Codex and Lovable. Despite a lot of experimentation, I don&#8217;t yet have a crystal ball or recipe for how all of these capabilities will come together as a new way of building. But across nearly every discussion I&#8217;m following, especially with companies implementing AI systems within their engineering practices, the pattern of incremental thinking is everywhere: teams are wasting their time bolting AI onto their existing workflows and calling it transformation.</p><p><strong>Stop fooling yourselves</strong>. You&#8217;re using a self-driving car to parallel park.</p><p>Astro&#8217;s framework is the best mental model I&#8217;ve found for understanding why most teams are wasting this moment. Let me explain.</p><div><hr></div><h2><strong>Tackle the Monkey</strong></h2><p>Here&#8217;s Astro&#8217;s most famous metaphor. Say you&#8217;re trying to teach a monkey to recite Shakespeare while standing on a pedestal in Times Square. What do you work on first?</p><p>Most teams build the pedestal. It&#8217;s easy. Shows progress. Makes your boss happy. But if you can&#8217;t teach the monkey to talk, the pedestal is <em>worthless</em>.</p><p><strong>The lesson:</strong> Attack the hardest, most uncertain problem first. 
Don&#8217;t waste resources on things you already know how to do.</p><p>So what&#8217;s the &#8220;monkey&#8221; in agentic coding?</p><p>It&#8217;s not the tooling&#8212;that works. It&#8217;s not integration&#8212;that&#8217;s solvable. The monkey is fundamentally rethinking what software development means when an AI can execute multi-step tasks autonomously with programmatic oversight and deep research backing every decision.</p><p>Teams building better CI/CD pipelines? Building pedestals. Integrating code-complete systems into VSCode? Building pedestals. Creating internal RAG systems to consult existing code? Building pedestals. So many execs out there are parading their golden pedestals as if they were real innovation.</p><p>I want to see more teams completely redesigning their entire development workflow around AI. Many startups are doing this already, often out of necessity. Solopreneurs are vocal champions of this approach. They&#8217;re tackling the monkey and you need to be too.</p><p>I wish I had a playbook to sell you, but it hasn&#8217;t been written yet. The technologies are moving too fast and the best practices are still emerging. But there are good ideas and experiments showing results that you need to think through. Let&#8217;s talk about a few I find most interesting:</p><div><hr></div><h2><strong>The search for 10x</strong></h2><p>I&#8217;ve been studying and experimenting with techniques for how the best teams are using these tools. Not the hype&#8212;the actual workflows. Here&#8217;s what separates incremental adopters from the teams making real leaps.</p><h3><strong>1. Institutional Memory</strong></h3><p>Boris Cherny, who created Claude Code, revealed his workflow recently. The key insight: every mistake becomes a rule.</p><p>His team maintains a CLAUDE.md file&#8212;about 2,500 tokens&#8212;that captures every pattern, every guideline, every error Claude shouldn&#8217;t repeat. When someone reviews a PR and spots an AI mistake, they don&#8217;t just fix the code. They update their institutional memory to ensure it never happens again.</p><p><strong>The incremental approach:</strong> Write documentation for humans, occasionally paste context into AI chats.</p><p><strong>The 10x approach:</strong> Your codebase becomes a learning system. The longer the team works together, the smarter the AI gets. Invest in virtuous cycles of learning from the start.</p><h3><strong>2. Verification Loops as Architecture</strong></h3><p>Here&#8217;s the single most important tip Cherny offers: give AI the ability to verify its own work. At Anthropic, Claude tests every change using browser automation. It opens the app, tests the UI, iterates until it works. This improves output quality by 2-3x. I use XCodeBuildMCP to automate this workflow when building iOS apps.</p><p><strong>The incremental approach:</strong> AI generates code &#8594; human reviews &#8594; human tests.</p><p><strong>The 10x approach:</strong> Design for autonomous verification from day one. AI runs tests, observes results, iterates. You review working code, not hopeful code. Human effort is expensive; before you deploy it, make sure it&#8217;s of the highest value.</p><h3><strong>3. Tests as Prompts</strong></h3><p>Test-driven development is having a moment&#8212;but for a different reason than before.</p><p>When you&#8217;re working with AI agents, tests become the specification language. You write tests that define correct behavior. AI iterates until tests pass.</p>
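<p><em>To make the pattern concrete, here&#8217;s a minimal sketch in Python of a test acting as the spec and an agent iterating against it. It assumes pytest runs the suite; the <code>run_agent</code> call and the <code>price_with_discount</code> test are placeholders I made up, not any particular tool&#8217;s API.</em></p><pre><code># Sketch of "tests as prompts": the test is the spec, and the loop hands
# failing output back to a coding agent until the suite goes green.
import subprocess

# tests/test_discounts.py might contain the actual "prompt":
#
#   def test_loyal_customers_get_ten_percent_off():
#       assert price_with_discount(100.0, loyal=True) == 90.0

def run_tests() -> subprocess.CompletedProcess:
    # The suite defines "done"; pytest's exit code is the verdict.
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True)

def run_agent(prompt: str) -> None:
    # Placeholder: swap in whatever coding agent you actually use.
    print("[agent prompt]", prompt[:400])

def verify_loop(max_rounds: int = 5) -> bool:
    for round_no in range(1, max_rounds + 1):
        result = run_tests()
        if result.returncode == 0:
            print(f"Green on round {round_no}. Ready for human review.")
            return True
        run_agent(
            "These tests are the spec and they currently fail:\n"
            + result.stdout + result.stderr
            + "\nMake the smallest change that turns them green."
        )
    return False

if __name__ == "__main__":
    verify_loop()
</code></pre><p><em>The structure is the point: the agent gets an unambiguous definition of done plus a tight feedback loop, and the human reviews code that already passes.</em></p>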
<p>As Simon Willison puts it, tests give you a reliable exit criterion. You&#8217;re not relying on the AI&#8217;s whims.</p><p>The workflow flips:</p><ul><li><p><strong>Plan</strong> &#8594; Use a thinking model to generate a phased plan</p></li><li><p><strong>Red</strong> &#8594; Write a test expressing desired behavior</p></li><li><p><strong>Green</strong> &#8594; Let the agent implement minimal code to pass</p></li><li><p><strong>Refactor</strong> &#8594; Ask AI to clean up while keeping tests green</p></li><li><p><strong>Validate</strong> &#8594; Verify it works end-to-end</p></li></ul><p><strong>The incremental approach:</strong> Write code, then write tests, occasionally ask AI for help.</p><p><strong>The 10x approach:</strong> Tests are prompts. A well-written test is a natural-language spec that guides AI toward exactly the behavior you expect.</p><h3><strong>4. Parallel Agent Orchestration</strong></h3><p>The best AI-first developers run 5-10 instances simultaneously. Five locally in the terminal, 5-10 more in the browser. Each local session uses its own branch to avoid conflicts. As humans we&#8217;re used to running serial processes. Computers are multi-threaded for a reason, and if you&#8217;re not taking advantage of this truth, you&#8217;re wasting your time. When you get it right, it&#8217;s altogether different:</p><blockquote><p>&#8220;It feels more like Starcraft than traditional coding.&#8221;</p></blockquote><p>That&#8217;s the shift. From typing syntax to commanding autonomous units.</p><p><strong>The incremental approach:</strong> One developer, one AI assistant, one task at a time.</p><p><strong>The 10x approach:</strong> You&#8217;re directing an entire team of AI agents, not typing code faster.</p><h3><strong>5. Plan Mode is the New PRD</strong></h3><p>In the not-so-distant past, product managers wrote PRDs to make sure designers and engineers understood exactly what to do, why to do it, and the business outcomes we were targeting. In the new world we live in, Plan Mode is beginning to change what it means to draft a PRD, and in many ways it changes the purpose entirely. Forget spending weeks and months pre-planning engineering investments: what happens when the cost to build falls below the cost to plan? Everything changes.</p><p>Today, 10x teams are reinventing the PRD with clever use of AI coding systems and Plan Mode. Again, I&#8217;ll point to Boris Cherny:</p><blockquote><p>&#8220;If my goal is to write a <em>feature</em>, I will use Plan mode, and go back and forth with Claude until I like its plan. From there, I switch into auto-accept edits mode and Claude can usually 1-shot it. A good plan <em>made for agentic understanding</em> is really important!&#8221;</p></blockquote><p>Without explicit planning, AI tends to jump straight to coding. Asking your AI coding system to research and plan first dramatically improves results. There&#8217;s an entire article in my head about how to do this really well, but that&#8217;ll wait for another day. Let me know if you want me to write about this next!</p><p><strong>The incremental approach:</strong> Prompt &#8594; AI generates code &#8594; you review.</p><p><strong>The 10x approach:</strong> Spend 80% of interaction time on planning, 20% on execution. This is the inverse of how our systems currently work&#8212;but optimal for AI collaboration.</p><div><hr></div><h2><strong>The Pedestal Mistakes</strong></h2><p>These are the incremental approaches teams default to.
Comfortable, familiar, fundamentally missing the point:</p><ul><li><p><strong>Better autocomplete</strong> &#8212; Using AI to finish lines faster, not to rethink how code gets written</p></li><li><p><strong>Faster Stack Overflow</strong> &#8212; Asking AI questions you could Google, not letting it autonomously solve problems</p></li><li><p><strong>Single-session chatting</strong> &#8212; No persistent memory, no learning, starting from scratch every time</p></li><li><p><strong>Human-in-every-loop</strong> &#8212; Requiring approval for every action instead of designing for autonomous verification</p></li><li><p><strong>One AI, one task</strong> &#8212; Never parallelizing, never orchestrating, never treating AI as a team</p></li></ul><p>If your workflow looks like this, you&#8217;re building pedestals.</p><div><hr></div><h2><strong>The Cultural Shift</strong></h2><p>Here&#8217;s where Astro&#8217;s framework gets uncomfortable.</p><p>He talks about creating context for moonshot thinking. The culture required. And he&#8217;s blunt about what kills it:</p><blockquote><p>&#8220;If people are surrounded by business speak, you will ruin it all. If they believe that they have to have a business plan for the weirdness that they are embarked upon, you will kill it&#8212;stillbirth guaranteed.&#8221;</p></blockquote><p>The teams that will win with agentic coding need psychological safety to fail spectacularly. The first attempts at AI-native development will be messy. Organizations that punish these experiments will lose to those that celebrate the learning.</p><p>Astro calls this being &#8220;responsibly irresponsible&#8221;&#8212;radical ambition coupled with disciplined execution.</p><p>That&#8217;s the balance. Not naive enthusiasm. Not cautious incrementalism. Bold hypotheses, rapid learning, honest assessment.</p><blockquote><p>&#8220;The secret? It&#8217;s easier to get people to work on making something 10X better than to get them to help make it 10 percent better. Huge problems fire up our hearts as well as our minds.&#8221;</p></blockquote><p>The AI Coding revolution isn&#8217;t about a new tool. It&#8217;s about a paradigm shift in how software gets built. The question isn&#8217;t &#8220;How do I use Claude Code?&#8221;</p><p>It&#8217;s &#8220;What does software development look like when AI agents can autonomously read codebases, plan features, implement changes, run tests, and iterate?&#8221;</p><p>That&#8217;s the monkey.</p><p>Are you tackling it? Or building pedestals?</p><div><hr></div><p><em>Builder&#8217;s note: I&#8217;ve been experimenting with these workflows at Google Labs for the past few months. The productivity gains are real, but only when you stop treating AI as autocomplete and start treating it as a collaborator with vastly more freedom to build. The learning curve is steep. I&#8217;m convinced the payoff is worth it.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Predictions for 2026]]></title><description><![CDATA[2026 is all about making rad shit with AI]]></description><link>https://trond.ai/p/ai-predictions-for-2026</link><guid isPermaLink="false">https://trond.ai/p/ai-predictions-for-2026</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 12 Jan 2026 16:22:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6sqF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6sqF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6sqF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!6sqF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!6sqF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!6sqF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6sqF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1791240,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/184251894?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!6sqF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!6sqF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!6sqF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!6sqF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08cd7686-bedd-459a-bdf1-10e51374eec6_1344x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>2025 was the year AI learned to code. 2026 is the year everyone else figures out what that means.</p><p>The tools that felt like magic tricks a year ago are now table stakes. <a href="https://jellyfish.co/blog/2025-ai-metrics-in-review/">85% of developers</a> now use AI coding tools regularly. <a href="https://devgraphiq.com/cursor-statistics/">Cursor crossed $500M ARR</a> and a $10B valuation. The question isn&#8217;t whether AI can help you build software. The question is what happens when the barriers to building collapse entirely. When designers ship without engineers. When your kid builds an app over the weekend. When the people who were &#8220;non-technical&#8221; last year are shipping products this year.</p><p>Here&#8217;s where I&#8217;m placing my bets.</p><h2><strong>1. The Rise of the Full-Stack Designer</strong></h2><p>For decades designers, engineers and product managers have worked together to build software. Design skills and engineering skills required deep orthogonal specialization, meaning very few people possessed enough of both skills to drive results without the other. 
PMs served a critical role connecting these dots and bringing in expertise from functions spanning business, research and program management. But that&#8217;s all changing. The hegemony of the silo is collapsing.</p><p>Designers, who long have felt limited by the game of telephone between ideas and execution, are embracing AI coding with incredible enthusiasm. Tools like <a href="https://lovable.dev/">Lovable</a> are explicitly designed for designers, described as &#8220;the most beginner-friendly AI coding tool&#8221; that excels at &#8220;building out stylized user interfaces that seamlessly transition into working prototypes.&#8221; Engineers and PMs are also embracing these tools, but it&#8217;s the designer who will benefit the most in 2026. We&#8217;re about to see a new group of 10x designers who integrate the silos. The full-stack designer is the hero of 2026. Find them on your teams, empower them to build and watch the magic happen.</p><div class="paywall-jump" data-component-name="PaywallToDOM"></div><h2><strong>2. Deeply Personal Assistants</strong></h2><p>2026 is the year AI assistants finally cross over. For ages, we&#8217;ve been sold on the idea of JARVIS but have been limited to frustrating, incomplete experiences that feel like Dragon Dictate from the 90s. That changes this year. Gemini, Claude, and OpenAI will deeply integrate with wells of personal context while learning to use tools directly and on your behalf. This will lead to personal assistant experiences that finally deliver the dream.</p><p>The bold call here is on Apple. <a href="https://www.macrumors.com/2025/11/05/apple-siri-google-gemini-partnership/">According to MacRumors</a>, Apple is finalizing a deal worth approximately $1 billion per year to license Google&#8217;s Gemini for a reimagined Siri. The new assistant will use a custom version of Gemini 3 Pro with 1.2 trillion parameters, while keeping personal data processed on-device through Apple&#8217;s Private Cloud Compute. <a href="https://apple.gadgethacks.com/news/apple-siri-gets-google-gemini-ai-power-in-2026-overhaul/">The launch is targeted for Fall 2026</a> with iOS 26.4. Siri will be useful by the end of 2026. Maybe not as useful as Gemini on Pixel. But useful.</p><h2><strong>3. WYSIWYG Development</strong></h2><p>My use of AI-enabled IDE coding tools like Cursor and Antigravity is changing. In the beginning, I was looking at code changes and approving each. As I learned to trust the models, that&#8217;s evolved to me mostly interacting with the agent chat and multi-agent orchestration system and largely ignoring the other panels in the experience. I rarely look at the code at all anymore, other than to doublecheck something I&#8217;m curious about. When I use CLI based tools like Gemini-cli, Codex and Claude Code, I embrace this approach even more, but something seems missing with a terminal interface. The most exciting update I see now is when my agent opens a preview of the thing we&#8217;re building, either on the web or in a phone emulator, and directly modifies the code based on what&#8217;s on screen.</p><p>In 2026, we&#8217;re going to see the return of the WYSIWYG interface for creation. Dreamweaver is back baby. <a href="https://bolt.new/">Bolt.new</a> already combines visual editing with AI code generation, letting you build full-stack apps entirely in-browser. <a href="https://lovable.dev/">Lovable</a> offers Figma-to-code and one-click deployment. 
That wasted space in your IDE where you used to look at code will become the surface for creation itself. A place where you&#8217;ll nudge UX to where you want it, where you&#8217;ll annotate changes you want and where you&#8217;ll flag comments to the AI helping you build the code behind the scenes. This is the revolution required for AI coding tools to crossover to the mainstream.</p><h2><strong>4. Just-in-Time UX</strong></h2><p>Chat has been the predominant interface for AI and most of the interface innovations have had to fit into the metaphor of chat. There are interesting examples of linking useful UI components into a chat interface: the Photoshop integration into ChatGPT where when you want to modify something like exposure for a photo, a Photoshop control for exposure is integrated directly into the chat. This is a peek into where these UIs are headed.</p><p>In November 2025, <a href="https://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-apps/">Anthropic and OpenAI partnered to release the MCP Apps Extension</a>, a specification that brings standardized interactive UI capabilities to the Model Context Protocol. MCP has been called the <a href="https://markets.financialcontent.com/stocks/article/tokenring-2026-1-5-the-usb-c-for-ai-how-anthropics-mcp-and-enterprise-agent-skills-are-standardizing-the-agentic-era">&#8220;USB-C for AI&#8221;</a> and now enables agents to serve rich UIs like charts, maps, and forms as part of tool calls. The <a href="https://openai.com/index/agentic-ai-foundation/">Agentic AI Foundation</a> launched under the Linux Foundation with OpenAI, Anthropic, Google, Microsoft, and AWS. This paradigm will accelerate this year.</p><p>This will lead to a new class of apps that self-compose UIs in real-time. They&#8217;ll draw components for existing patterns from vast libraries of capabilities, but when a novel need arises, they&#8217;ll create bespoke experiences to match real-time user requests. The very definition of software will forever change, and 2026 is when we see the beginnings of this evolution.</p><h2><strong>5. Bespoke Software</strong></h2><p>In December, my son wanted a motivational quotes app for his phone and was disgusted to see that all of the options in the app store were not just paid apps, but required a subscription. What he wanted was pretty simple: a collection of quotes, presented beautifully in an app and as a widget he could keep on his homepage. After a quick tutorial on Antigravity and a few hours of tweaking, he had exactly the app he wanted.</p><p>He&#8217;s not alone. <a href="https://lovable.dev/guides/mobile-app-development-trends-2026">Gartner projects</a> that by 2026, low-code development tools will account for 75% of new application development, up from 40% in 2021. The low-code market is growing from $37 billion in 2025 to a projected <a href="https://www.knack.com/blog/best-ai-app-builders/">$264 billion by 2032</a>. Non-technical founders are already finding success: one growth marketer <a href="https://lovable.dev/guides/mobile-app-development-trends-2026">built a women&#8217;s safety app</a> entirely using AI tools, reaching 10,000+ users and $456K ARR with zero engineering background.</p><p>As these tools become even more accessible, the build vs. buy equation will change. People will build the thing they want, or tweak the thing they have into exactly what they want. At first, we&#8217;ll see an explosion of new apps from a wide range of people. 
This will put pressure on app stores and traditional distribution systems. Analysts predict <a href="https://intcore.com/article/199/subscription-based-mobile-apps-in-2026-trends-challenges-strategies-for-growth">integrated AI platforms will render 60% of single-purpose apps obsolete</a>. In 2026, it&#8217;ll be almost impossible to rise above the noise, and for existing apps who depend on paywalls for thinly differentiated features, the flood of free alternatives should make you nervous. Where there is friction, users will innovate.</p><p>Problems with too few users to previously justify will now be viable to build. At the limit, even a user base of one will be enough: a revolution of bespoke software solving even the smallest of needs including one-time-use experiences. Wild.</p><h2><strong>6. Lots of Hype for World Models</strong></h2><p>Last year I predicted the ongoing expansion of LLMs with multi-modal capabilities and we saw that trend take shape. As these systems understand more about our world, they tend to work better across a wider variety of use cases. The difference between a world model and an LLM is nuanced, but fundamentally world models are designed to master cause and effect, physics, and the consequences of actions across a wide range of domains. We see these attributes in the way Veo seems to have an understanding of how water flows and reacts to a range of inputs, in how sounds and music match their visual complements.</p><p>The race is already heating up. <a href="https://www.datacamp.com/blog/top-video-generation-models">Runway&#8217;s Gen-4.5</a>, released December 2025, is explicitly marketed as moving beyond &#8220;video generation&#8221; toward &#8220;world models that understand physics.&#8221; Fei-Fei Li&#8217;s <a href="https://www.fastcompany.com/91437004/fei-fei-li-world-labs-spatial-ai-mapping-3d">World Labs launched Marble</a> in September 2025, generating explorable 3D worlds from text and images. <a href="https://introl.com/blog/world-models-race-agi-2026">NVIDIA&#8217;s Cosmos</a> world foundation models have been downloaded over 2 million times. Yann LeCun raised &#8364;500M for AMI Labs focused on the same goal. In 2026, multi-modality takes another large leap forward, leading to a lot of hype for world models. There&#8217;s debate across the industry about the right approach, but regardless of their impact the hype will be loud.</p><h2><strong>7. AI Software Teams &amp; Agent Swarms</strong></h2><p>Teams spanning use-case specific agentic specialization are quickly becoming common approaches for development. <a href="https://medium.com/@julio.pessan.pessan/multi-agent-systems-in-2025-how-orchestration-turns-solo-bots-into-enterprise-powerhouses-7d6114504bfc">Gartner reported a 1,445% surge</a> in multi-agent system inquiries from Q1 2024 to Q2 2025. The <a href="https://www.onabout.ai/p/mastering-multi-agent-orchestration-architectures-patterns-roi-benchmarks-for-2025-2026">orchestration software market is projected to hit $8.7 billion by 2026</a>. At the tail end of 2025, we&#8217;re seeing clever agent orchestration systems driving step changes in system reliability and capability.</p><p>In 2026, the impact of agent teams and agent swarms will be profound. <a href="https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2026/ai-agent-orchestration.html">Gartner predicts 40% of enterprise applications will embed AI agents by end of 2026</a>, up from less than 5% in 2025. 
Smart software teams will optimize agentic integrations across providers and ask systems to debate options ahead of implementation. Planning mode will transform to include multiple agents spanning a range of expertise. Token use will explode, as will the quality of systems we build.</p><h2><strong>8. Real-Time Interactive Avatars</strong></h2><p>I predicted this last year and was just a tad early. <a href="https://www.synthesia.io/post/synthesia-3-0-the-next-era-of-video">Synthesia raised $200M at a $4 billion valuation</a> in December 2025 and launched Synthesia 3.0 with their new Express-2 avatars featuring full-body gestures and state-of-the-art voice cloning. Their &#8220;Video Agents&#8221; for <a href="https://skywork.ai/blog/ai-video/i-spent-a-week-with-synthesia-ai-2025-update-real-notes-tiny-wins-and-a-few-facepalms/">real-time interactive experiences are coming in early 2026</a> for Enterprise customers. <a href="https://techcrunch.com/2025/12/11/runway-releases-its-first-world-model-adds-native-audio-to-latest-video-model/">Runway is building GWM-Avatars</a> to simulate human behavior and is in active conversations with robotics firms and enterprises.</p><p>2026 is the year when these land, leading to virtual agent experiences that feel eerily natural. These agents will speak every language, draw on deep wells of specialized knowledge and be empowered to act on your behalf. The power of the deeply personalized agents we mentioned in prediction #2, brought to life as personalized characters. The interactions won&#8217;t feel completely human, but the illusion will be compelling nonetheless.</p><h2><strong>9. Generative 3D Assets, Environments and Worlds</strong></h2><p>Another prediction from 2025 that I&#8217;m pulling forward to 2026. This year, generative worlds will have a moment in the sun. <a href="https://radiancefields.substack.com/p/gaussian-splatting-year-end-wrap">2025 was the year 3D Gaussian Splatting became production-ready</a> for media and entertainment. <a href="https://radiancefields.substack.com/p/gaussian-splatting-year-end-wrap">Superman was the first major motion picture</a> to use dynamic Gaussian splatting. World Labs&#8217; Spark renderer was <a href="https://web.volinga.ai/2025-turning-point-and-2026-trends-blog/">named one of the most influential libraries of 2025</a> by GitHub.</p><p>You&#8217;ll be able to prompt for fully immersive worlds allowing you to explore aspects of imagination in delightful new ways. <a href="https://www.fastcompany.com/91437004/fei-fei-li-world-labs-spatial-ai-mapping-3d">World Labs&#8217; Marble</a> already generates explorable 3D environments from text, images, or video and exports them as Gaussian splats, meshes, or videos. Many will seek out methods for making these experiences more efficient, repeatable and shareable leveraging capabilities such as 3DGS and parallel advancements in AI-generated 3D assets and environments. Industry experts are <a href="https://web.volinga.ai/2025-turning-point-and-2026-trends-blog/">&#8220;100% convinced that radiance field representations like Gaussian splatting are a fundamental imaging medium&#8221;</a> and predict accelerated adoption in 2026. The full potential of these capabilities won&#8217;t be realized in 2026, but we&#8217;re going to see a ton of meaningful progress.</p><h2><strong>10. Redefining the OS &amp; On-Device AI</strong></h2><p>Low-cost, efficient small models are getting really good. 
<a href="https://developers.googleblog.com/en/gemma-3-on-mobile-and-web-with-google-ai-edge/">Google&#8217;s Gemma 3 1B</a> runs at 2,585 tokens per second on mobile GPU with only a 529MB footprint. <a href="https://developers.googleblog.com/en/introducing-gemma-3-270m/">Gemma 3 270M</a> is designed specifically for on-device deployment with strong instruction-following out of the box. <a href="https://venturebeat.com/technology/google-releases-functiongemma-a-tiny-edge-model-that-can-control-mobile">FunctionGemma</a> enables edge agents that map natural language to executable API actions, running locally on phones, laptops, and small accelerators like NVIDIA Jetson Nano.</p><p>Meanwhile, on-device compute continues to accelerate with AI-enabled engines integrated into every modern smartphone and computer. With datacenter scale compute facing challenges getting enough power to meet the explosion of demand, we&#8217;ll see more inference driven to end user devices. The incentives meet the moment in 2026, leading to significant opportunities for on-device experiences.</p><p>Having worked on building Chrome OS, the prospect of rethinking how computing works at a fundamental level is exciting to me. The ingredients to radically reinvent computing are here in 2026. The outcomes that land this year are unlikely to fully realize the potential, but we&#8217;re going to see the beginnings of entirely new computing systems show up in our lives.</p><h2><strong>11. Software Re-Defined Hardware</strong></h2><p>Another hardware trend I&#8217;m excited about is the impact of AI coding on the hardware we already have in our lives. Over the last 10 years, we&#8217;ve seen everything from our vacuums to our dishwashers become connected and smart. Unfortunately, most of these systems remain pitifully dumb: sometimes because they&#8217;re locked behind an app built by the lowest bidder, and often connected to apps without sustainable business models. Why would someone pay for a subscription to an app for their dishwasher? It makes no sense. This has led to a ton of abandonware and devices with wasted utility waiting to be unlocked by innovative users.</p><p>In 2026, hackers and hobbyists will take back their hardware to do amazing things. <a href="https://hackaday.com/2026/01/05/2025-as-the-hardware-world-turns/">Arduino was acquired by Qualcomm</a> and released the Uno Q. The <a href="https://www.ics.com/blog/look-back-raspberry-pi-ecosystem-2025">Raspberry Pi AI Kit</a> bundles M.2 HAT+ with Hailo AI acceleration. Running AI on a Raspberry Pi is <a href="https://dev.to/george_mbaka_62347347417a/tiny-ai-models-for-raspberry-pi-to-run-ai-locally-in-2026-ik1">now practical and reliable</a> for offline, privacy-preserving systems. I&#8217;ve been waiting for someone to connect their JTAGulator to Claude Code and liberate devices that were previously locked down. Hardware was always hard. In 2026 these trends converge and hardware becomes less hard.</p><h2><strong>12. Potpourri</strong></h2><p>As I was writing these, I kept finding things I wanted to add to my predictions. AI is literally affecting everything we do and it&#8217;s impossible to choose where the impact will be greatest. 
Rather than tossing all of the other thoughts into the trash, I&#8217;m going to mix them into a stew of half-baked predictions:</p><ul><li><p><strong>Robotics accelerates, though most of the gains are for industrial use cases.</strong> The <a href="https://ifr.org/ifr-press-releases/news/top-5-global-robotics-trends-2026">global industrial robot market hit an all-time high of $16.7 billion</a>. Humanoid robot costs are projected to <a href="https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2026/physical-ai-humanoid-robots.html">fall from $35K to $13-17K</a> per unit over the next decade.</p></li><li><p><strong>Shopping becomes deeply personal as agents finally help shop for us.</strong> OpenAI announced deals with Target, Instacart, and DoorDash. Amazon launched &#8220;Buy For Me.&#8221; <a href="https://www.cnbc.com/2025/12/29/ai-agentic-shopping-price-discounts-cheap-sales-commerce-visa-mastercard-chatbots.html">Visa plans AI-driven purchases inside chatbots as early as Q1 2026</a>. Morgan Stanley predicts <a href="https://www.cnbc.com/2025/12/29/ai-agentic-shopping-price-discounts-cheap-sales-commerce-visa-mastercard-chatbots.html">nearly half of online shoppers will use AI shopping agents by 2030</a>. Just yesterday Google announced an industry alliance for Agentic shopping called <a href="https://blog.google/products/ads-commerce/agentic-commerce-ai-tools-protocol-retailers-platforms/">UCP</a>. This is happening in 2026.</p></li><li><p><strong>We see AI within games at all levels, leading to fun new styles of gameplay.</strong> <a href="https://www.googlecloudpresscorner.com/2025-08-18-90-of-Games-Developers-Already-Using-AI-in-Workflows,-According-to-New-Google-Cloud-Research">90% of game developers</a> are already using AI in their workflows according to Google Cloud research. The AI in gaming market is projected to grow from <a href="https://www.techtimes.com/articles/313777/20260106/future-ai-gaming-smart-npcs-realistic-graphics-next-gen-game-design.htm">$3.28 billion to $51 billion by 2033</a>. But there will still be backlash like what we saw with <a href="https://www.videogameschronicle.com/news/clair-obscur-expedition-33-game-of-the-year-award-pulled-after-admitting-to-generative-ai-use/">Expedition 33</a> last year.</p></li><li><p><strong>Entry level software jobs become scarce, except among multi-faceted AI native builders.</strong> <a href="https://spectrum.ieee.org/ai-effect-entry-level-jobs">Entry-level hiring at the top 15 tech firms fell 25%</a> from 2023 to 2024. Junior developer postings are <a href="https://www.finalroundai.com/blog/software-engineering-job-market-2026">down 60% since 2022</a>. Salesforce announced it will hire <a href="https://stackoverflow.blog/2025/12/26/ai-vs-gen-z">&#8220;no new engineers&#8221; in 2025</a>. The path forward is becoming an AI-native builder who can leverage these tools, not compete with them.</p></li></ul><div><hr></div><p>I write to think. This article isn&#8217;t just a list of predictions; it&#8217;s how I force myself to synthesize everything I&#8217;m seeing into a coherent view of where we&#8217;re headed. The act of committing these ideas to writing sharpens them. It exposes the gaps in my logic. It makes me defend positions I might otherwise hold loosely.</p><p>But I don&#8217;t have this figured out. If you think I&#8217;m wrong about something, tell me. If you see a trend I&#8217;m missing, I want to hear it. 
If one of these predictions strikes you as naive or overly optimistic, challenge me. The best thinking happens in conversation, not isolation.</p><p>The common thread across all twelve predictions: the builders win. Not the people who wait to see what happens. Not the people who debate whether AI is overhyped. The people who pick up the tools and start making things.</p><p>I build things to figure out how they work. That&#8217;s how I learned AI. That&#8217;s how I learned photography. That&#8217;s how I teach my kids. And I write to think through what I&#8217;m learning. That&#8217;s why this Substack exists.</p><p>Come December, I&#8217;ll grade myself on each of these publicly. Until then, I&#8217;m building. What about you?</p>]]></content:encoded></item><item><title><![CDATA[I Made 12 AI Predictions Last Year. Here’s My Honest Scorecard.]]></title><description><![CDATA[A year ago, I wrote down what I thought would happen in AI. Time to see if I earned it.]]></description><link>https://trond.ai/p/i-made-12-ai-predictions-last-year</link><guid isPermaLink="false">https://trond.ai/p/i-made-12-ai-predictions-last-year</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 05 Jan 2026 17:21:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qiBh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qiBh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qiBh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!qiBh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!qiBh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!qiBh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qiBh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png" width="1344" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1661014,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/183519669?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qiBh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 424w, https://substackcdn.com/image/fetch/$s_!qiBh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 848w, https://substackcdn.com/image/fetch/$s_!qiBh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 1272w, https://substackcdn.com/image/fetch/$s_!qiBh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6700a529-9c53-4705-b042-88e2ce3c584a_1344x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><p><strong>TL;DR:</strong> 4 A&#8217;s, 3 B&#8217;s, 3 C&#8217;s. Overall: B+. I nailed inference costs, AI agents, AI music, and services-for-AI. I whiffed on embodied AI and AI game worlds&#8212;the tech is there but the products aren&#8217;t. Biggest lesson: technology moves fast, but products take longer.</p><div><hr></div><p>A year ago I wrote down 12 predictions and hit publish. 
I reposted them yesterday, and you can read them here: <a href="https://trond.ai/p/2025-ai-predictions-republished">2025 AI Predictions (Republished)</a></p><p>That&#8217;s the crazy part about predictions; you can&#8217;t edit them later. They just sit there, waiting to make you look smart or foolish. I called 2025 the potential &#8220;1984 moment&#8221; for AI. Predicted 50 years of productivity gains crammed into 10. Bold claims.</p><p>Some held up. Some didn&#8217;t. A few things happened that I missed entirely. I&#8217;m grading myself honestly. If you only remember your hits, you learn nothing. I&#8217;ve already started writing my 2026 predictions, so this is all part of the process for me. I write to think and build to learn.</p><div><hr></div><h2><strong>The Hits</strong></h2><h3><strong>1. Inference Costs Continue to Plummet (A)</strong></h3><p><strong>What I predicted:</strong> Costs declining 10x per year, enabling &#8220;reasoning-heavy applications&#8221; that were previously impossible.</p><p><strong>What happened:</strong> Nailed it. Stanford&#8217;s 2025 AI Index shows inference costs for GPT-3.5-level performance dropped <strong>280x</strong> and kept falling. Epoch AI found price drops from 9x to 900x depending on the benchmark. It started early in the year when DeepSeek rattled markets by offering R1 at 20-50x cheaper than comparable OpenAI models. That turned into a performance war with API costs dropping to fractions of a cent per million tokens.</p><p>More importantly, this enabled exactly what I predicted: chain-of-thought, multi-step agents, and complex agentic workflows at economically viable scale.</p><p><strong>Grade: A</strong> &#8212; The magnitude was arguably <em>under</em>stated.</p><p><em>Why this matters to me:</em> I work on AI products at Google Labs. A year ago, some of the experiences we wanted to build were economically impossible. Now they&#8217;re not. That 280x drop isn&#8217;t an abstract number&#8212;it&#8217;s the difference between &#8220;interesting demo&#8221; and &#8220;scalable product.&#8221;</p><div><hr></div><h3><strong>2. AI Agents Break Through the Hype (A-)</strong></h3><p><strong>What I predicted:</strong> Agents gain traction in &#8220;specialized applications where their ability to automate tasks can be well-scoped.&#8221;</p><p><strong>What happened:</strong> 2025 really was the dawn of the agent.</p><p>OpenAI kicked it off with their launch of Operator, an AI that browses the web, fills forms, and completes purchases on your behalf. Anthropic followed with Claude Computer Use that let AI control your mouse and keyboard. We launched Mariner at Google Labs and by mid-year, agentic browsers from Perplexity (Comet), Browser Company (Dia), and Opera (Neon) reframed the browser as an active participant. Agents are core parts of Claude Code, Antigravity, Cursor and so many other systems people use every day.</p><p>Even enterprise adoption exploded: 68% of large companies now use AI agents (up from 11% two quarters prior) and the market is projected to grow from $13.8 billion to $140.8 billion by 2032. Despite the now debunked MIT survey about AI projects failing in the enterprise, the application of Agents at work truly started to take hold in 2025.</p><p>But here&#8217;s the nuance I got right: these agents thrived in <em>well-scoped</em> applications. The OSWorld benchmark shows humans at 72.4% accuracy versus 38.1% for OpenAI&#8217;s best model. Agents are powerful but not yet fully autonomous. 
The &#8220;specialized needs&#8221; framing I predicted still largely holds. I think this expands dramatically in 2026: we&#8217;re going to see agents move into swarms of agents, teams of agents, and far better-coordinated workflows, with tool-using agents performing a massive variety of tasks.</p><p><strong>Grade: A-</strong> &#8212; Timing and trajectory are largely right. &#8220;Mainstream&#8221; is generous, but defensible.</p><div><hr></div><h3><strong>3. AI Music Empowers New Voices; An AI Song Breaks Through (A)</strong></h3><p><strong>What I predicted:</strong> &#8220;An AI-generated song will achieve mainstream success on a platform like Spotify.&#8221;</p><p><strong>What happened:</strong> Breaking Rust hit <strong>#1 on Billboard&#8217;s Country Digital Song Sales chart</strong> with &#8220;Walk My Walk.&#8221; The AI-generated country artist has 2 million monthly listeners on Spotify.</p><p>But it wasn&#8217;t alone. The Velvet Sundown hit #1 on Spotify&#8217;s Viral 50 in the UK, Norway, and Sweden. Xania Monet, an AI persona created by a Mississippi poet, signed a multi-million-dollar record deal. Deezer reported that 50,000 fully AI-generated songs are uploaded <em>daily</em>. Spotify removed 75 million &#8220;spammy&#8221; AI tracks in 12 months, showing both the interest and the challenge here. The floodgates have opened and AI music is almost certainly here to stay.</p><p>That&#8217;s not to say there isn&#8217;t backlash. Artists are hesitant about what this all means for their work, and we&#8217;ve seen Suno, Udio and major record labels go through some <em>things</em> together. There are really important copyright issues at stake and I&#8217;m proud of the stance we&#8217;ve taken at Google on this topic. I&#8217;m not going to get into details, but I think <a href="https://deepmind.google/models/lyria/">Lyria</a> strikes a good balance here.</p><p>Back to the grades. I predicted <em>a</em> song would break through. Multiple artists hit the charts. I was too conservative, and now that the gates are open I&#8217;m more excited than ever for what 2026 will bring.</p><p><strong>Grade: A</strong></p><div><hr></div><h3><strong>4. Services Emerge Designed for AI Agents (A)</strong></h3><p><strong>What I predicted:</strong> &#8220;Services and applications specifically designed for AI consumption... transforming the way we publish content on the web.&#8221;</p><p><strong>What happened:</strong> The Model Context Protocol took the world by storm.</p><p>Anthropic launched MCP in November 2024 as an open standard for connecting AI to external systems. By 2025, it became the de facto infrastructure for agentic AI. OpenAI adopted it in March. Google DeepMind followed. In December, Anthropic donated MCP to the Linux Foundation&#8217;s new Agentic AI Foundation, co-founded with OpenAI and Block.</p><p>MCP is now integrated into ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code. AWS, Google Cloud, and Azure all support it. Thousands of MCP servers exist for enterprise systems and hobbyist creators. Designing for AI agent use is more important than ever.</p><p>Similarly, OpenAI released AGENTS.md as a standard for giving AI agents project-specific guidance, and it has already been adopted by 60,000+ open source projects.</p>
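<p><em>To make &#8220;designed for AI consumption&#8221; concrete, here&#8217;s a minimal sketch of an MCP server in Python, assuming the FastMCP helper from the official MCP Python SDK. The order-status tool is a made-up example; a real server would wrap an actual API or database.</em></p><pre><code># Minimal MCP server sketch (assumes the official SDK: pip install "mcp[cli]").
# The orders data is fake and stands in for a real backend service.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders")

ORDERS = {"A-1001": "shipped", "A-1002": "processing"}

@mcp.tool()
def order_status(order_id: str) -> str:
    """Return the fulfillment status for a given order id."""
    # The docstring is what an agent reads when deciding how to call the tool.
    return ORDERS.get(order_id, "unknown order")

if __name__ == "__main__":
    # Serves over stdio, so any MCP-capable client can connect to it.
    mcp.run()
</code></pre><p><em>That&#8217;s the whole service: a typed function and a description aimed at an agent instead of a human UI.</em></p>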
<p>Similarly, OpenAI released AGENTS.md as a standard for giving AI agents project-specific guidance, and it&#8217;s already been adopted by 60,000+ open source projects. If you use Antigravity, Claude Code, Codex, Lovable, or Cursor, you&#8217;re almost certainly using an AGENTS.md file as part of your workflow.</p><p>This prediction was more right than I understood when I wrote it.</p><p><strong>Grade: A</strong></p><p><em>Builder&#8217;s note:</em> MCP is the kind of infrastructure that seems obvious in retrospect but required someone to just... build it. Anthropic shipped it, open-sourced it, then donated it to a foundation. That&#8217;s how you win an ecosystem. Kudos.</p><div><hr></div><h3><strong>5. AI-Augmented Coding Gains Traction (B+)</strong></h3><p><strong>What I predicted:</strong> Tools like Jules become &#8220;widely used among developers&#8221; but &#8220;significant simplification across the software lifecycle is needed before such capabilities can become mainstream tools accessible to non-developer users.&#8221;</p><p><strong>What happened:</strong> Some reports say as many as 90% of dev teams now use AI in their workflows (up from 61%). GitHub Copilot leads with 42% market share; Cursor is at 18%. Almost half of companies now have at least 50% AI-generated code. And if you&#8217;ve been on X over the last few weeks, you&#8217;d be forgiven for thinking 100% of people were making projects over the holidays with Claude Code.</p><p>Cursor claims that users complete complex tasks 40-60% faster when using their tools. Claude Code, Windsurf, and GPT-5&#8217;s Codex agent all shipped major improvements. Google acqui-hired (what do we call these things?) the Windsurf founders and then launched Antigravity. The space is hotter than ever, and &#8220;vibe coding&#8221; entered the vocabulary.</p><p>But here&#8217;s where I was both right and wrong: I said non-developers would need &#8220;significant simplification&#8221; before these tools became accessible. That simplification <em>did</em> sort of happen, but the mainstream crossover for non-developers is still incomplete. The barrier lowered, but there&#8217;s still a lot to do. Look out for my 2026 predictions about JIT UX for more on this.</p><p><strong>Grade: B+</strong> &#8212; Developer adoption exceeded expectations. Non-developer crossover remains a work in progress.</p><div><hr></div><h3><strong>6. Hyper-Personalized Shopping Experiences Emerge (B+)</strong></h3><p><strong>What I predicted:</strong> &#8220;AI shopping assistants designed to save people money, identify better products based on individualized style, budget, and preferences, and even haggle with sellers.&#8221;</p><p><strong>What happened:</strong> Every major AI company launched shopping features:</p><ul><li><p><strong>Perplexity</strong> rolled out Instant Buy with PayPal integration&#8212;ask &#8220;what&#8217;s the best winter jacket if I live in San Francisco and take a ferry to work?&#8221; and it remembers your context</p></li><li><p><strong>OpenAI</strong> launched Shopping Research with product cards and personalized guides</p></li><li><p><strong>Amazon</strong> upgraded Rufus and tested &#8220;Buy For Me&#8221;&#8212;an agent that purchases from <em>other sites</em> within the Amazon app</p></li><li><p><strong>Google</strong> launched Doppl (one of my projects) and integrated Try On You features across the Shopping ecosystem.
Nano Banana really raised the state of the art for these features this year.</p></li><li><p>Startups like <strong>Doji</strong> and <strong>Phia</strong> are getting close to some of these capabilities, particularly in terms of price hunting, but they aren&#8217;t quite where I imagined yet.</p></li></ul><p>Morgan Stanley projects AI agents could add $115 billion in U.S. e-commerce spending by 2030. Adobe says AI-assisted shopping grew 520% this holiday season. But the &#8220;haggle with sellers&#8221; bit? Not quite there yet. And the experience is fragmented&#8212;Amazon sued Perplexity for its Comet browser completing purchases without permission. This space will stay hot in 2026, and I&#8217;m pretty excited about how AI agents are going to help us engage with commerce in the future.</p><p><strong>Grade: B+</strong> &#8212; Directionally correct. &#8220;Haggle&#8221; was ahead of its time.</p><div><hr></div><h2><strong>Partial Credit</strong></h2><h3><strong>7. AI Streaming Cartoon Launches (B)</strong></h3><p><strong>What I predicted:</strong> &#8220;A streaming service experiment with an AI-produced cartoon able to respond in near real-time to current events and viewer feedback.&#8221;</p><p><strong>What happened:</strong> Close, but not quite there yet.</p><p>Fable launched <strong>Showrunner</strong>, the &#8220;Netflix of AI&#8221;&#8212;an interactive platform where you create episodes using text prompts. Amazon backed it. Their &#8220;Exit Valley&#8221; show features AI versions of Sam Altman and Elon Musk. It&#8217;s probably the closest to what I imagined when I wrote the prediction, but it still falls short of where I think this is going.</p><p>The big headline this year is that Disney announced Sora will create videos featuring 200+ Disney characters, including distribution of user-made clips on Disney+. Disney invested $1 billion in OpenAI to get this deal done. Similarly, Netflix is developing interactive voting for its Star Search reboot, which is an interesting puzzle piece to keep an eye on.</p><p>The industry is clearly moving toward interactive, AI-generated content. But a full AI cartoon responding to real-time events? Not quite yet. The pieces are there, but the product isn&#8217;t.</p><p><strong>Grade: B</strong> &#8212; The infrastructure is here, but the consumer product I described is 2026.</p><div><hr></div><h3><strong>8. AI Memory Banks Emerge as a New Category (B-)</strong></h3><p><strong>What I predicted:</strong> &#8220;AI memory banks as a valuable tool... an &#8216;always-on&#8217; memory support, capable of storing, organizing, and retrieving vast amounts of information.&#8221;</p><p><strong>What happened:</strong> The category emerged and a lot of people are building parts of this, but it&#8217;s not where I predicted.</p><p>I think the biggest headline was when Limitless launched its $99 Pendant wearable. You could record conversations, get AI transcripts, ask &#8220;what did we decide?&#8221; and jump to exact quotes. The product found an audience, and then Meta acquired Limitless in December, immediately halting Pendant sales. Meta folded the technology into its &#8220;personal superintelligence&#8221; roadmap for future Ray-Ban glasses.</p><p>Microsoft&#8217;s Recall feature generated controversy but showed the category has legs. A million note-taking apps, tools, and features are flooding our workflows, but none are quite what I meant. OpenAI is rumored to be working on a &#8220;pen&#8221; and I suspect it&#8217;s close to what I imagine taking hold here. I guess we&#8217;ll see.</p><p><strong>Grade: B-</strong> &#8212; I think the category is validated but we still need to see the killer product.</p>
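<p><em>Builder&#8217;s note:</em> the core loop behind this category is simple enough to sketch. This is a toy, not how Limitless or Recall actually work: store timestamped snippets, search them, and hand back the exact quote. The real products layer transcription, embeddings, and privacy controls on top, but the &#8220;what did we decide?&#8221; behavior is, at heart, retrieval.</p><pre><code># Toy "memory bank": timestamped snippets plus naive keyword retrieval.
# Purely illustrative; real products add speech-to-text, embeddings, and access controls.
from dataclasses import dataclass

@dataclass
class Snippet:
    timestamp: str
    text: str

class MemoryBank:
    def __init__(self):
        self.snippets = []

    def remember(self, timestamp, text):
        self.snippets.append(Snippet(timestamp, text))

    def recall(self, query, top_k=3):
        # Score by word overlap; an embedding model would replace this in practice.
        q = set(query.lower().split())
        scored = []
        for s in self.snippets:
            score = len(q.intersection(s.text.lower().split()))
            scored.append((score, s))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [s for score, s in scored[:top_k] if score > 0]

bank = MemoryBank()
bank.remember("2025-11-03 10:12", "We decided to ship the beta to 500 users on Friday.")
bank.remember("2025-11-03 10:40", "Action item: Priya owns the pricing page copy.")
for hit in bank.recall("what did we decide about the beta"):
    print(hit.timestamp, "-", hit.text)
</code></pre><p>Strip away the hardware and the models and that loop is the product; the hard parts are capture, trust, and privacy.</p>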
<div><hr></div><h3><strong>9. AI-Creativity Goes Mainstream; Adobe Faces Competition (B-)</strong></h3><p><strong>What I predicted:</strong> &#8220;AI-augmented creativity to explode... A new entrant will emerge as an AI-first competitor to Adobe.&#8221;</p><p><strong>What happened:</strong> AI creativity absolutely exploded. Flow, Sora, Runway, Kling, and Pika pushed video generation into the mainstream as capabilities leaped forward. Every major model improved dramatically at image generation, and Nano Banana set a new standard that Nano Banana Pro somehow leapfrogged. You can one-shot a complex infographic, correctly solve complex math within beautifully rendered text, and even create full presentations based on your source materials. Absolutely monumental progress this year.</p><p>But a clear Adobe challenger? Not yet. Canva deepened AI integration. Figma pushed AI features. Startups like Ideogram and Leonardo.ai gained tons of users. But none really emerged as <em>the</em> Adobe killer. And Adobe fought back harder than I expected. Firefly is embedded across Creative Cloud. GenAI capabilities from partners are being directly integrated across the Adobe portfolio. They just signed a big deal with Runway. Adobe is awake and they&#8217;re cooking again.</p><p><strong>Grade: B-</strong> &#8212; The creativity explosion definitely happened, and there is a lot of new competition, but Adobe is not asleep at the wheel, which is exciting to see. As a photographer, I&#8217;ve been a loyal Adobe Creative Suite subscriber and am excited about the new capabilities at my fingertips.</p><div><hr></div><h3><strong>10. Real-Time Multimodal LLMs Evolve Interaction (C+)</strong></h3><p><strong>What I predicted:</strong> &#8220;AI systems will be able to seamlessly integrate and understand information from multiple sources, including text, images, audio, voice and even video, all while responding dynamically.&#8221;</p><p><strong>What happened:</strong> The technology arrived. GPT-4o, Gemini 3.0, and Claude all improved multimodal capabilities dramatically. Real-time voice with AI became the de facto standard for many users.</p><p>But my concern about &#8220;phone-anxiety trends among GenZ and Millennials&#8221; limiting voice adoption? Anecdotally, maybe that&#8217;s correct, but it&#8217;s still really hard to know. Chat definitely remains the dominant interface for most everyone despite massive voice improvements. I don&#8217;t know many people who actually use the video input paradigm yet.</p><p>I still love the idea of Astra, and I&#8217;ve seen some cool use cases that integrate Gemini Live, but so far it hasn&#8217;t redefined interaction models. We&#8217;re still mostly typing.</p><p><strong>Grade: C+</strong> &#8212; Multimodal models are definitely here, but behaviors change slowly. We&#8217;re all still chatting with our AIs.</p><div><hr></div><h2><strong>The Misses</strong></h2><h3><strong>11. Embodied AI / Virtual Humans Transform Customer Service (C)</strong></h3><p><strong>What I predicted:</strong> &#8220;Fully interactive virtual humans with photorealistic personification, recognizable voices and body movements...
as the presentation layer on top of AI Agents.&#8221;</p><p><strong>What happened:</strong> Progress, but nothing transformative here yet.</p><p>AI avatars are improving quickly. Synthesia and HeyGen refined video generation with talking heads. But photorealistic virtual humans serving as widespread customer service agents? That hasn&#8217;t gotten past janky demos yet. The dream of Poe from Altered Carbon serving as a virtual concierge when I check into my next hotel is still out on the horizon, I&#8217;m afraid.</p><p>Most customer service AI remains voice-based or chatbot-based. The &#8220;embodied&#8221; layer is what isn&#8217;t ready yet. Cost, the uncanny valley, and customer preference for efficiency over novelty kept this prediction from materializing in 2025. We&#8217;ll need some technology breakthroughs and some carefully designed experiences for this to come to life in 2026. I&#8217;m convinced it will; it&#8217;s just a matter of when.</p><p><strong>Grade: C</strong> &#8212; Overestimated progress and demand for virtual human avatars.</p><div><hr></div><h3><strong>12. AI-Generated Immersive Game Worlds Launch (C-)</strong></h3><p><strong>What I predicted:</strong> &#8220;An AI Game launches with GenWorlds... AI that can generate vast interactive and detailed game worlds that adapt to player choices.&#8221;</p><p><strong>What happened:</strong> AI in games advanced pretty incrementally in 2025. We saw clear NPC dialogue improvements, a ton of procedural and generated content experiments, quite a few clever AI development tools, and MCP servers for pretty much every game engine. We announced Genie 3 from DeepMind this summer, and a range of startups began showing off 3D Gaussian Splat-based environments that you can explore in a video-game-like experience. But none of these advancements really hit the mark for what I had in mind.</p><p>We&#8217;re still looking for the first AI-generated game world launched as a mainstream title. There&#8217;s a lot of energy in this space right now, but the &#8220;GenWorlds&#8221; moment (meaning a game where AI generates the explorable world in real-time) didn&#8217;t hit the market in 2025.</p><p>Game development cycles are long, and the technology is almost there. 2026 should be interesting.</p><p><strong>Grade: C-</strong> &#8212; Too early.</p><div><hr></div><h2><strong>What I Missed Entirely</strong></h2><p>Some of the things that seem obvious now are where I missed the most in 2025. A few big ones that stand out to me are:</p><p><strong>DeepSeek&#8217;s disruption.</strong> A Chinese lab releasing a competitive model at a fraction of the cost&#8212;open-weight&#8212;rattled markets and changed the game right out of the gates in 2025. I predicted cost declines, but not where the disruption would come from or how much we&#8217;d see. I called the trend Moore&#8217;s Law on steroids when really it was more like a coked-out meth-head on steroids.</p><p><strong>The MCP standardization speed.</strong> I predicted &#8220;services designed for AI agents&#8221; but didn&#8217;t foresee how quickly formal protocols would emerge and get adopted by <em>all</em> major players. The industry standardized faster than expected. This is one of the more exciting spots of coordination in the industry right now, and I find it really encouraging.</p><p><strong>The acquisition pattern.</strong> I predicted AI memory banks would emerge; I didn&#8217;t predict these startups would get snapped up before they scaled.
Actually, this whole new trend of buying out founders that we&#8217;ve seen with Character.ai, Windsurf, and most recently Groq is an interesting evolution of the M&amp;A playbook. I definitely didn&#8217;t see this coming.</p><p><strong>Regulatory acceleration.</strong> The EU AI Act also shaped the industry in ways I should probably have thought about. But regulatory stuff is kind of out of my wheelhouse, so I&#8217;m giving myself a pass here.</p><div><hr></div><h2><strong>The Scorecard</strong></h2><table><thead><tr><th>Prediction</th><th>Grade</th></tr></thead><tbody><tr><td>Inference costs plummet</td><td>A</td></tr><tr><td>Agents break through</td><td>A-</td></tr><tr><td>AI music breakthrough</td><td>A</td></tr><tr><td>Services for AI agents</td><td>A</td></tr><tr><td>AI coding gains traction</td><td>B+</td></tr><tr><td>Personalized shopping</td><td>B+</td></tr><tr><td>AI streaming cartoon</td><td>B</td></tr><tr><td>AI memory banks</td><td>B-</td></tr><tr><td>AI creativity / Adobe</td><td>B-</td></tr><tr><td>Multimodal interaction</td><td>C+</td></tr><tr><td>Embodied AI</td><td>C</td></tr><tr><td>AI game worlds</td><td>C-</td></tr></tbody></table><p><strong>Overall: B+</strong></p><p>I think I got many of the big trends right: agents, inference economics, AI music, services for AI, and coding tools. Where I missed was mostly <em>timing</em>. Embodied AI and AI game worlds are coming, and the tech will be here before we know it, but it just didn&#8217;t happen in 2025.</p><p><strong>The biggest lesson:</strong> Technology is moving extremely fast right now, but products are taking longer. There&#8217;s probably something to learn for PMs here that would make for an interesting article. MCP was infrastructure I didn&#8217;t see coming (but probably should have). The AI streaming cartoon I described is definitely being built right now by someone, but it didn&#8217;t ship in 2025. AI game worlds need more development cycles to be product-ready.</p><p>The gap between demo and product is still where I think all the interesting work happens. Good thing it&#8217;s what I work on every day!</p><div><hr></div><h2><strong>What&#8217;s Next</strong></h2><p>2026 predictions drop next week. Here&#8217;s the short version: I think this is the year agents stop being demos and start being daily tools. The infrastructure is in place. The battle now is trust.</p><p>I&#8217;ll break down what I&#8217;m betting on&#8212;and what I&#8217;m building toward at Google Labs.</p><div><hr></div><p><em>Think I graded myself too generously? Too harshly? Hit reply&#8212;I read everything.</em></p>]]></content:encoded></item><item><title><![CDATA[2025 AI Predictions (Republished)]]></title><description><![CDATA[Resharing my 2025 Predictions. Originally written January 2025.
Republished here so you can hold me accountable.]]></description><link>https://trond.ai/p/2025-ai-predictions-republished</link><guid isPermaLink="false">https://trond.ai/p/2025-ai-predictions-republished</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Mon, 05 Jan 2026 03:51:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f5fbd792-3682-484b-9b35-23d309cab4c6_1344x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eCYI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eCYI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 424w, https://substackcdn.com/image/fetch/$s_!eCYI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 848w, https://substackcdn.com/image/fetch/$s_!eCYI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 1272w, https://substackcdn.com/image/fetch/$s_!eCYI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eCYI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png" width="1456" height="637" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:637,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1426523,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/183511444?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eCYI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 424w, https://substackcdn.com/image/fetch/$s_!eCYI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 848w, 
https://substackcdn.com/image/fetch/$s_!eCYI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 1272w, https://substackcdn.com/image/fetch/$s_!eCYI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F406938b6-55b8-4634-91c9-adc7110bfc7b_1536x672.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Foreword (January 2026):</h1><p>A year ago, I wrote down 12 predictions about where AI would go in 2025. I shared them with teammates at Google Labs but never published them publicly.</p><p>That was a mistake. Predictions without accountability are just vibes.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>So before I publish my scorecard (tomorrow) and grade each of these predictions against what actually happened, I&#8217;m republishing the original document here unedited. 
You can see exactly what I predicted, in my own words, before I reveal the final ratings.</p><p>Here&#8217;s what I wrote a year ago:</p><div><hr></div><h1>Predictions for AI In 2025</h1><p><em><strong>January 2025</strong></em></p><p>If you thought the world of AI moved fast in 2024, you&#8217;d better buckle up for 2025 because I believe we&#8217;re about to enter a period of development with 50 years of productivity gains crammed into 10 years (or less.) I think we&#8217;ll look back at 2025 as the seminal year for AI much like we consider 1984 the pivotal moment for personal computing. </p><p>Building on what happened in 2024, AI is going to get even smarter and be everywhere, affecting pretty much everything we do. I won&#8217;t address whether we reach AGI, but here are a few thoughts on some trends I expect to see this year:</p><h2>Inference costs continue to plummet; reasoning-heavy applications flourish</h2><p>The cost of running AI models has been declining by as much as 10x per year, and this trend is likely to continue in 2025. This is driven by factors such as increased competition, hardware advancements, and more efficient algorithms. </p><p>The impact of this trend will be like Moore&#8217;s Law on steroids: inference costs plummet, making it feasible to deploy AI for wildly more complex tasks that require advanced reasoning and problem-solving. We&#8217;ll be able to build AI systems that can analyze complex data, identify patterns, and generate solutions to problems that were previously impossible. </p><p>Put more inference in your solutions.</p><h2>Agents break through the hype; serving specialized needs but not yet mainstream</h2><p>AI agents are becoming adept at performing tasks and achieving pre-defined goals without constant intervention. In 2025, we&#8217;ll see AI agents gain traction with specialized applications where their ability to automate tasks can be well-scoped in ways that limit unintended outcomes. This could include tasks such as scheduling, managing emails, conducting research, and even automating service interactions. </p><p>We&#8217;ll see experiments with proactive agents which operate autonomously, with an opportunity to establish a leadership position in this space by differentiating on trust.</p><h2>Real-time multimodal LLMs evolve how we interact with AI; yet chat remains dominant</h2><p>In 2025, we&#8217;ll witness a significant leap forward in the capabilities of multimodal LLMs, particularly in their ability to process and respond to information in real-time. This means AI systems will be able to seamlessly integrate and understand information from multiple sources, including text, images, audio, voice and even video, all while responding dynamically to changes in the input streams. </p><p>The time is ripe to reinvent how we interact with AI and integrate it more seamlessly into applications. Tools like Astra will offer a major step forward, but I worry voice-based interactions will suffer from phone-anxiety trends common among GenZ and Millennials.</p><h2>Emergence of AI-produced interactive entertainment; a streaming service launches an AI cartoon</h2><p>AI is already being used to generate creative content, including music, images, and videos. AI algorithms can now generate original stories, characters, and even animation, potentially leading to entirely new forms of entertainment with deeply embedded interactivity that would never be possible without AI. 
</p><p>In 2025, we&#8217;ll see a streaming service experiment with an AI-produced cartoon able to respond in near real-time to current events and viewer feedback. This will be a first step toward interactive, personalized and engaging entertainment experiences.</p><h2>AI-creativity goes mainstream; a new entrant emerges as an AI-first competitor to Adobe</h2><p>AI-powered creative tools are becoming so good that generated content is becoming difficult to distinguish from human-created work, and users are becoming more comfortable integrating these systems into their regular creative lives. </p><p>In 2025, we should expect AI-augmented creativity to explode, with AI tools playing a more prominent role in art, music, advertising and video production. Continued model improvement will lead to new forms of artistic expression and a revolution across the creative world; posing a greater threat than ever before to traditional incumbents. </p><p>A new entrant will emerge as an AI-first competitor to Adobe.</p><h2>AI-music empowers new voices; an AI song will break through on Spotify</h2><p>AI music tools are empowering individuals with limited musical training to create original music. These tools are becoming more accessible and user-friendly, allowing anyone to create original music regardless of their musical background. This will lead to a democratization of music creation, with new voices and styles emerging from unexpected sources and open opportunities for new methods of creator-centric social music experiences. </p><p>In 2025, I believe an AI-generated song will achieve mainstream success on a platform like Spotify, further demonstrating the potential of AI in music creation.</p><h2>AI-augmented coding gains traction, but complexity prevents widespread use</h2><p>AI coding tools are becoming increasingly popular, assisting developers with code generation, debugging, and documentation. These tools can automate repetitive coding tasks, suggest code completions, and even generate entire code blocks. However, complex coding tasks still require human expertise and oversight, particularly for deployment as web and mobile apps, limiting the mainstream adoption. </p><p>In 2025, we can expect AI coding tools like Jules to become widely used among developers but significant simplification across the software lifecycle is needed before such capabilities can become mainstream tools accessible to non-developer users.</p><h2>Embodied AI moves beyond novelty; virtual humans transform customer service</h2><p>Embodied AI, particularly in the form of virtual humans, will find traction in a range of real-world applications. These virtual humans will be more than just avatars; they will be capable of interacting with their environment and humans in a more natural and intuitive way. </p><p>We&#8217;ll see existing chat-based character systems evolve to include fully interactive virtual humans with photorealistic personification, recognizable voices and body movements. These Embodied AI personas will serve as the presentation layer on top of AI Agents to deliver engaging customer service experiences not before possible.</p><h2>Hyper-personalized shopping experiences emerge; an AI shopping agent grabs market share</h2><p>AI is already being used to personalize recommendations, offers, and marketing messages for businesses. In 2025, we can expect even more sophisticated AI-powered shopping tools that anticipate customer needs and provide highly personalized experiences enabled across merchants. 
</p><p>I believe we&#8217;ll also see consumers embrace hyper-personalized AI shopping assistants designed to save people money, identify better products based on individualized style, budget, and preferences, and even haggle with sellers to get them the best possible prices. This will empower consumers and potentially disrupt traditional retail models.</p><h2>AI-generated immersive worlds arrive; an AI game launches with GenWorlds</h2><p>AI is already being used to generate game content, including characters, environments, and even storylines. In 2025, we can expect to see more sophisticated AI-powered games that offer dynamic and immersive experiences. </p><p>Imagine AI that can generate vast interactive and detailed game worlds that adapt to player choices and actions, creating unique and unpredictable gameplay. This will lead to new genres of games where players explore AI-generated worlds, interact with AI-powered characters, and experience emergent narratives that unfold in real-time. </p><p>This will revolutionize world design, offering endless possibilities for exploration and replayability. AI will power non-playable characters (NPCs) with complex motivations, relationships, and even personal histories, leading to emergent narratives and unpredictable gameplay.</p><h2>Services emerge designed to be consumed by AI agents; redefining the web</h2><p>As AI agents become more prevalent, we can expect to see services and applications specifically designed for AI consumption. This will lead to a new paradigm for the web, where AI agents interact with each other and access information in ways that are different from traditional human-computer interaction. </p><p>Imagine AI agents access and process information from websites, APIs, and other online services, autonomously gathering data, making decisions, and completing tasks on behalf of their human users. This will lead to new forms of online services and applications optimized for AI interaction, transforming the way we publish content on the web.</p><h2>AI memory banks emerge as a new category; always-on memory support gains traction</h2><p>In 2025, we&#8217;ll see the emergence of AI memory banks as a valuable tool for individuals and businesses. These AI-powered systems will act as an &#8220;always-on&#8221; memory support, capable of storing, organizing, and retrieving vast amounts of information, effectively augmenting human memory and enhancing cognitive capabilities. </p><p>Imagine an AI assistant that remembers every conversation you&#8217;ve ever had, every document you&#8217;ve ever read, and every event you&#8217;ve ever experienced. This AI could then provide you with relevant information and insights whenever you need them, helping you make better decisions, learn more effectively, and even enhance your creativity.</p><div><hr></div><h1>What&#8217;s Next</h1><p>That&#8217;s what I wrote a year ago. No edits. No hedging after the fact.</p><p><strong>Next post: the scorecard.</strong> I&#8217;m grading each of these predictions against what actually happened in 2025. Some I nailed. Some I whiffed. A few things happened that I didn&#8217;t see coming at all.</p><p>Subscribe so you don&#8217;t miss it.</p><p><em>Originally written January 2025. 
Republished January 2026.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Build Rad Shit is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Let's Build Rad Shit Together]]></title><description><![CDATA[The real magic happens at the intersection of building to learn and solving problems that matter. Let's talk about what it means to build products people love in an AI world.]]></description><link>https://trond.ai/p/lets-build-rad-shit-together</link><guid isPermaLink="false">https://trond.ai/p/lets-build-rad-shit-together</guid><dc:creator><![CDATA[Trond Wuellner]]></dc:creator><pubDate>Thu, 01 Jan 2026 04:04:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e41ba7e6-25ac-4740-8669-dcff066ba80b_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Why I&#8217;m Starting This</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!95QA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!95QA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 424w, https://substackcdn.com/image/fetch/$s_!95QA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 848w, https://substackcdn.com/image/fetch/$s_!95QA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 1272w, https://substackcdn.com/image/fetch/$s_!95QA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!95QA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png" width="1456" height="481" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:481,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7581262,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trond.ai/i/183112321?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!95QA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 424w, https://substackcdn.com/image/fetch/$s_!95QA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 848w, https://substackcdn.com/image/fetch/$s_!95QA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 1272w, https://substackcdn.com/image/fetch/$s_!95QA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c5b4174-1fc8-40b2-8c88-e39f27686330_3584x1184.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I&#8217;ve spent the last 18+ years at Google building 0-to-1 products. Some shipped to millions of people. Some died in committee. A few I&#8217;m genuinely proud of.</p><p>But the stuff I&#8217;ve learned the most from? The things I built mostly to figure out how they work. 
That&#8217;s where the magic happens.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Build Rad Shit! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2>Build to Learn</h2><p>Earlier this year, I took a darkroom photography class. Not because I needed another hobby, but because I wanted to really understand photography at a deeper level. Especially light&#8212;I really wanted to understand it. How it behaves. Why some shots feel alive and others fall flat.</p><p>So I did what I always do: I built something.</p><p>I started building an app I&#8217;m calling GroundGlass. It&#8217;s a light meter for my phone that simulates how light works within a Hasselblad 503C medium format camera. Not because good light meters don&#8217;t exist (they do, and they&#8217;re better than mine), but because building one forced me to internalize the physics. Incident vs. reflected. The inverse square law. Why zone system metering actually works. The app is still a work in progress and I might even ship it one day, but that isn&#8217;t really the point.</p><p>Now when I shoot, I don&#8217;t think about the math. It&#8217;s just there. The app was never my goal. The understanding was.</p><p>This is how I learn. I build things to figure out how they work.</p><div><hr></div><h2>The Best Day Job in the World</h2><p>I&#8217;m pretty lucky to admit that I may have the most fun job within one of the best teams at the best place to (IMHO) work in the World. By day, I&#8217;m a Product Director at <a href="http://labs.google">Google Labs</a>, where I work on AI products based on the very latest research and models from our partners at Google Deep Mind. I joined this team in 2023 and have been involved with a few extremely fun projects already. Some haven&#8217;t launched yet (shhhh), but a few that have include: <a href="https://labs.google/doppl">Doppl</a>, <a href="https://labs.google/fx/tools/music-fx-dj">MusicFX</a>, and the Audio Overviews features of <a href="https://blog.google/technology/ai/notebooklm-audio-overviews/">NotebookLM</a>.  I may write about some of these (and other) experiences at some point in the future.</p><p>Fundamentally, I&#8217;m a 0 to 1 person at heart. When I joined Google in 2007, I told myself I&#8217;d learn my way around Silicon Valley and Google for maybe 5 years and then go out to start my own company. Well, it&#8217;s been nearly 20 years and I&#8217;m still here. A huge reason is because I&#8217;ve had the privilege of working for some incredible people &#8212; including Sheryl Sandberg, Sundar Pichai, Craig Barrett, Rick Osterloh, Susan Wojcicki, and most recently Josh Woodward. </p><p>The other reason is that I&#8217;ve had the opportunity to be hands on building amazing products within Google, usually from the earliest moments of ideation through to billions of dollars of impact. 
It&#8217;s a truly rare path I&#8217;ve made in my time here and one that&#8217;s been rewarding in so many ways. Some of the things I&#8217;m most proud of building include:</p><p><strong>YouTube Create</strong> &#8212; A mobile video editing app. Making video editing tools that feel intuitive rather than overwhelming is <em>hard</em>. The project started as something almost entirely different and was acquired (internally) into YouTube where we reshaped our idea into one of the fastest growing mobile video apps in the world. </p><p><strong>Pixelbook</strong> &#8212; Google&#8217;s premium  Chromebook. I was the Product lead for our first-party laptop and tablet portfolio based on ChromeOS. We always knew it was going to be hard to sell $1000 Chromebooks, and it was &#8212; but we were clear-eyed about our goals and proud of what we built. There are some great stories to share about this experience.</p><p><strong>Google WiFi</strong> &#8212; I started the project back before it was called OnHub and we were focused on ways to help more Google users get online. We were trying to answer a simple question: why does everyone&#8217;s home WiFi suck? The answer turned out to be complicated (mesh networking, antenna design, automatic channel selection, an app that doesn&#8217;t require a PhD to use). It&#8217;s now in millions of homes and won an iF Design Award. I&#8217;m proud of that one.</p><p><strong>ChromeOS</strong> &#8212; I joined the ChromeOS team when we were about 30 people total and became the PM responsible for all of the system services layers of building an operating system. It was a wild ride surrounded by such amazing talent density. It truly took a passionate, brilliant team to convert the idea of netbooks into a full and complete computing ecosystem centered on the web.</p><p>It&#8217;s still amazing to me that I&#8217;ve been at Google so long and I&#8217;ve obviously had chances to leave, but Google is such a special place that it&#8217;s hard to seriously consider it. I joined Google right out of business school&#8212;MIT Sloan, if you&#8217;re keeping track&#8212;and before that I studied computer science at Northwestern, where I went deep on early AI systems and the primordial version of neural networks from the late &#8216;90s. </p><p>So when people ask if I think AI is going to change everything, I&#8217;ve got some context. For the record: yes, everything is going to change. My mission is to see to it that it changes for the better.</p><div><hr></div><h2>The Side Quests</h2><p>A rewarding day job is one thing, but I&#8217;ve never given up on my side quests. I think this is where everything gets more interesting and I&#8217;ve pretty much always had a long list of things I was noodling on and building &#8212; often just to learn. It&#8217;s how I work. I&#8217;m not going to try to list everything, but a few things I&#8217;ve loved to do lately include:</p><p><strong>Building with my kids.</strong> I&#8217;ve been helping my son (14) build an app called Quotivation. He wanted an app to put motivational quotes on his phone and was annoyed that all of the options in the app store required a subscription. So he&#8217;s building his own &#8212; with Antigravity naturally. My daughter (12) and I are also working on an app she wanted to build called Waddle, which is a cozy self-care app featuring a duck who bakes pastries. It has a pedometer and a focus timer. She designed the duck and is vibe coding with Gemini to make the app what she imagined. 
Building software with your kids is an amazing way to connect, and it&#8217;s inspiring to see them get the builder bug.</p><p><strong><a href="https://github.com/TrondW/SharpGlass">SharpGlass</a>.</strong> An open-source macOS app for creating and viewing 3D Gaussian Splats based on a 2D image. I saw Apple had launched a model called <a href="https://github.com/apple/ml-sharp">ml-sharp</a>, and I wanted to understand it a bit better because the technology is fascinating. So last week, I built an app (using Antigravity and Gemini 3 Flash) to experiment and learn. I put it all up on <a href="https://github.com/TrondW/SharpGlass">GitHub</a> if you want to check it out.</p><p><strong>Film photography.</strong> Last year, I signed up for a darkroom photography class at Foothill College, where technically I&#8217;m a freshman working towards an associate&#8217;s degree. I&#8217;m learning to shoot film using a Leica M6 and a Hasselblad 503C. I&#8217;ve started developing my own film and making my own prints in a darkroom. In a world where I spend all day on AI and software, there&#8217;s something grounding about a purely analog process where you can&#8217;t undo anything. I&#8217;ve always loved photography, but film is altogether another level. I&#8217;ll probably have a bunch of photography-related things to talk about. And if you&#8217;re curious, I have some work posted at <a href="http://trond.photography">trond.photography</a>.</p><p><strong>Angel investing.</strong> I&#8217;ve invested in quite a few startups through <a href="https://www.hustlefund.vc/squad">Hustle Fund&#8217;s Angel Squad</a> and as an AI advisor to <a href="https://vitalstage.com/">Vitalstage Ventures</a>. I&#8217;ve had several wins that I&#8217;m proud of and a few that look likely to hit in 2026. Also, quite a few misses &#8212; but that&#8217;s the name of the game. I love angel investing mostly because I learn a lot from founders who are building things I&#8217;d never think of. I&#8217;m pretty much only thinking about AI these days, so if you want to approach me about something, come with a clear view on how you&#8217;re using AI to solve meaningful user problems in a novel way that was previously infeasible. Or if you&#8217;ve figured out how to get power to datacenters more efficiently or more quickly. I hear that&#8217;s in demand these days.</p><div><hr></div><h2>Why This Newsletter</h2><p>I&#8217;ve been writing internal docs at Google for almost two decades. Strategy memos, product specs, postmortems. Thousands of pages that maybe a few dozen people read.</p><p>I want to write for a wider audience. Not to build a personal brand (though I guess that&#8217;s happening), but because I&#8217;ve accumulated a lot of lessons I think are worth sharing&#8212;and I learn better when I share with others.</p><p>Here&#8217;s what you can expect:</p><p><strong>The Big Leap.</strong> Why do AI demos or new models always look amazing, but shipped products frequently disappoint? I work on this problem every day. I&#8217;ll share what I&#8217;ve learned about translating research into products people actually use. It&#8217;s linked to my personal mission: build rad shit people love.</p><p><strong>Builder&#8217;s Notes.</strong> Lessons from founding products at Google&#8212;what worked, what didn&#8217;t, and what I&#8217;d do differently. Not sanitized corporate retrospectives. Real talk.
I won&#8217;t be able to share things that are proprietary to Google, but the lessons that spawn from our work still matter.</p><p><strong>Experiments &amp; Learning.</strong> I build a lot of random stuff. Apps, tools, photography projects. I&#8217;ll share what I&#8217;m working on and what I&#8217;m learning from it. Most of it will be half-baked, but that&#8217;s often where the magic happens.</p><p><strong>The Craft of PM.</strong> I&#8217;ve hired a lot of PMs. I&#8217;ve mentored a lot of PMs. In January, I&#8217;m teaching a session at Harvard Business School on &#8220;PMing in the Age of AI.&#8221; I have opinions about what makes great product people. Some of them are probably wrong. The craft is evolving quickly; let&#8217;s learn together what the future of the PM role means.</p><div><hr></div><h2>What I Won&#8217;t Do</h2><p>I won&#8217;t pretend to have answers I don&#8217;t have. I won&#8217;t write corporate pablum. I won&#8217;t optimize for engagement at the expense of saying something true.</p><p>I&#8217;m also pretty tool-agnostic. I use Claude, Gemini, ChatGPT, Grok&#8212;whatever works for what I&#8217;m building. I work at Google, and yes I&#8217;m going to talk a lot about things that I&#8217;m excited about here, but I&#8217;m not just here to sell you Google products. I&#8217;m here to share what I&#8217;m learning and hopefully that spans the whole industry.  There&#8217;s so much amazing work going on worth learning from!</p><div><hr></div><h2>Let&#8217;s Build Some Rad Shit Together</h2><p>That&#8217;s what this newsletter is called because that&#8217;s what I care about. Not building things that look good in a pitch deck. Not building things that satisfy some quarterly OKR. Building things that are genuinely good&#8212;things people love to use.</p><p>If that sounds interesting, subscribe. I&#8217;ll be in your inbox roughly once a week.</p><p>First real post coming soon: <strong>Grading My 2025 Predictions.</strong> A year ago, I made 12 predictions about where AI would go. Some I nailed. Some I whiffed. I&#8217;m grading myself honestly&#8212;because that&#8217;s the only way predictions are worth making.</p><p>Let&#8217;s go.</p><p>&#8212;Trond</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trond.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Build Rad Shit! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>