<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Long Commit]]></title><description><![CDATA[Weekly articles on software careers, AI, and the long game in tech. For developers who want the honest take, not the polished one.]]></description><link>https://newsletter.thelongcommit.com</link><image><url>https://substackcdn.com/image/fetch/$s_!fHIu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61e033fa-534a-454c-bfb7-118f32fa65c9_1280x1280.png</url><title>The Long Commit</title><link>https://newsletter.thelongcommit.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 21 Apr 2026 02:30:46 GMT</lastBuildDate><atom:link href="https://newsletter.thelongcommit.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Juan Cruz Martinez]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[longcommit@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[longcommit@substack.com]]></itunes:email><itunes:name><![CDATA[Juan Cruz Martinez]]></itunes:name></itunes:owner><itunes:author><![CDATA[Juan Cruz Martinez]]></itunes:author><googleplay:owner><![CDATA[longcommit@substack.com]]></googleplay:owner><googleplay:email><![CDATA[longcommit@substack.com]]></googleplay:email><googleplay:author><![CDATA[Juan Cruz Martinez]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Fear Is Justified, I Just Keep Building]]></title><description><![CDATA[The conversation about AI is split between panic and policy. 
Most of us just want to work and get things done.]]></description><link>https://newsletter.thelongcommit.com/p/the-fear-is-justified-i-just-keep</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/the-fear-is-justified-i-just-keep</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Tue, 14 Apr 2026 11:10:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wOpg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8f37ab1-8fa4-4ee7-af00-4afa836539f1_1376x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!wOpg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8f37ab1-8fa4-4ee7-af00-4afa836539f1_1376x768.png" width="1376" height="768" alt=""></figure></div><p>Last Friday, someone threw a Molotov cocktail at Sam Altman&#8217;s house at 4 in the morning. Two days later, there were gunshots. A 20-year-old guy flew from Texas to San Francisco with kerosene, a lighter, and a document about AI causing humanity&#8217;s extinction.</p><p>Altman posted a photo of his family. He wrote that the fear and anxiety about AI are justified. Then OpenAI published a 13-page paper proposing a robot tax and a four-day workweek.</p><p>I read all of this on my phone while my kids were eating breakfast.</p><p>I don&#8217;t know what to do with any of it. Not really. I work in tech. I&#8217;ve been in this industry for over twenty years. I use AI tools every single day. I manage a team that creates content about authentication and security, and half of our workflows now involve some form of AI. I&#8217;m not a bystander watching this from the outside. I&#8217;m in it.</p><p>And I think most of you are too.</p><p>The anxiety is real. I feel it. Not the Molotov cocktail kind. The kind where you&#8217;re reviewing your team&#8217;s work and you realize the thing that took someone three days last year took an afternoon this week. The kind where you&#8217;re good at your job, you&#8217;ve been good at it for a long time, and you can feel the ground shifting under you in ways you can&#8217;t fully predict.</p><p>And honestly, I don&#8217;t even need to think twenty years out. I can&#8217;t tell you what the market looks like in three. But I have kids. Young kids. And when they were eating their cereal while I was scrolling through photos of a firebombed gate, the thing I felt wasn&#8217;t some abstract concern about the future of work.
It was simpler than that. I want them to grow up in a world where they can contribute something, where they can find work that means something to them, where they can live a decent, healthy, happy life. That&#8217;s all. And I can&#8217;t promise them that right now. The parent version of this fear sits different. It&#8217;s quieter and it doesn&#8217;t go away when you close the tab.</p><p>The only thing I can actually do for them is not freeze. So I keep building.</p><p>That&#8217;s always been my move when things get uncertain. When I was at Siemens and the optimization team I was on got restructured, I kept building. When I started a side project that grew to 100,000 readers a month and then I shut it down, I kept building. When I moved my family across continents and had to start over in a new country, I kept building.</p><p>But here&#8217;s the part I don&#8217;t say out loud very often: every other time, the pace of change gave me room to adjust. I could see the restructuring coming months out. I chose when to shut down the project. Moving countries was our decision, on our timeline. This time the ground is moving and I didn&#8217;t set the speed. Nobody did.</p><p>The conversation right now is split between billionaires proposing policy papers and people who are so afraid they&#8217;re lighting things on fire. And in between those two extremes, there are millions of us going to work. Figuring out how to use the new tools without losing the instincts we spent decades developing.</p><p>I manage people who are excellent at what they do. When I think about what I owe them, it&#8217;s not a grand theory of AI. It&#8217;s honesty. And the honest thing is that &#8220;I don&#8217;t know&#8221; used to feel like humility. Now some days it feels like I&#8217;m running out of time to figure it out.</p><p>I don&#8217;t actually believe that. Most days. But the feeling visits, and I think if you&#8217;re being honest with yourself it visits you too.</p><p>Altman says the fear is justified. Okay. I believe him. But he also has security guards and an $852 billion company. His version of &#8220;justified fear&#8221; and mine are not the same thing. Mine looks like updating my skills at 40, like writing this newsletter on weekends because I want to have something that&#8217;s mine outside of any employer, like watching my industry change faster than any period I&#8217;ve lived through and deciding, every single week, that I&#8217;m going to stay in the game anyway. Not because I&#8217;ve calculated that it&#8217;s the right bet. Because it&#8217;s the only bet I know how to make.</p><p>I&#8217;ve been doing this for twenty years and I plan to do it for thirty more. I don&#8217;t have a framework for navigating what&#8217;s coming. I have a disposition. Show up, do the work, pay attention, adjust. It got me this far. It might not be enough this time.
But the alternative is to stand still, and I&#8217;ve never been any good at that.</p><p>The world is figuring out what AI means. People are scared. Most of that fear isn&#8217;t making headlines. Most of it is just sitting quietly in the chests of people like you and me, who read the news, take a breath, and open their laptops.</p><p>But it&#8217;s Sunday night as I write this, and Monday doesn&#8217;t care about any of this.</p>]]></content:encoded></item><item><title><![CDATA[Everything I Learned About Productivity Disagrees With How AI Wants Me to Work]]></title><description><![CDATA[I built my productivity system over fifteen years. AI stressed it in six months.]]></description><link>https://newsletter.thelongcommit.com/p/everything-i-learned-about-productivity</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/everything-i-learned-about-productivity</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Tue, 07 Apr 2026 11:35:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fe87e8ac-d65c-4963-914f-3d67e45f07c8_1376x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Gz7v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0659c55-5783-4c5e-8ccf-b1ad292ecbe2_1376x768.png" width="1376" height="768" alt=""></figure></div><p>I am, at my core, a systems person. Calendar blocks, structured todo lists, weekly reviews. I know what I&#8217;m working on each morning before I sit down because I decided the night before. My best work has always come from long, uninterrupted stretches where I could hold an entire problem in my head at once. Not because I read that in a book somewhere, although Cal Newport would agree.
Because I felt the difference. Two hours of deep focus on a single system produced more real progress than a full day of bouncing between tasks. I built my entire workflow around protecting that state, and for years it served me well.</p><p>Then AI tools showed up and asked me to work in a way that contradicts everything I&#8217;d built.</p><h2>The new workflow doesn&#8217;t look anything like the old one</h2><p>The AI-native way of working is parallel by design. You kick off an agent to scaffold a service. While it runs, you start prompting a second one to draft documentation. You check back on the first, realize it made assumptions you don&#8217;t agree with, course-correct, then pivot to reviewing what the second one produced. You&#8217;re managing multiple streams of work simultaneously, triaging output, deciding what to keep and what to throw away.</p><p>It looks nothing like a flow state.</p><p>And for about six months, I convinced myself this was better. The sheer volume of stuff happening on my screen felt like progress. Tokens streaming, files appearing, tests being generated. I was doing three things at once and each one was moving forward. How could that not be more productive?</p><p>Except when I looked honestly at what I was shipping, the picture was more complicated than &#8220;AI made me productive.&#8221; I was shipping more, sure. AI was part of that. But so was the fact that I was working significantly more hours than before. The agents made it feel effortless to keep going, so I did. Evenings, weekends, one more pass on something an agent had drafted. I couldn&#8217;t cleanly separate what AI was contributing from what I was contributing by just working harder.</p><h2>The part I&#8217;m still struggling with</h2><p>Here&#8217;s what makes this hard for me specifically. The focused workflow I spent years building is slow to start but efficient once you&#8217;re in it. There&#8217;s a warm-up period where you&#8217;re loading context, understanding the problem, building a mental model. Once you&#8217;re there, decisions come fast and the work flows. The cost is upfront, and the payoff compounds the longer you stay in it. That model fits how my brain works. It&#8217;s inseparable from the systems I built around my time.</p><p>The AI workflow inverts this completely. Starting is almost instant. You describe what you want, an agent produces something, and you&#8217;re looking at output within minutes. The cost comes later, when you review what was generated and realize you need to reshape it, or when you switch to a different agent and lose the thread of what the first one was doing. Instead of one long ramp-up followed by sustained output, you get a series of quick starts followed by fragmented attention.</p><p>I keep getting pulled toward the second model because the tools make it so easy. And every time I give in, I feel it eroding the systematic approach that actually works for me. I didn&#8217;t use to have a context-switching problem.
I manufactured one for myself by trying to run things in parallel, because the tools made it possible and it felt like the smart thing to do.</p><p>Not everyone loses from this tradeoff. If you&#8217;re an engineering leader whose day is already a patchwork of 30-minute windows between syncs and reviews, you didn&#8217;t have long focus blocks to protect in the first place. Handing a deferred task to an agent and getting it back 80-90% done in one of those gaps is a real upgrade. AI didn&#8217;t add context switching for those people. It filled the dead space that context switching had already created. I see this in my own role on days that are heavy with meetings.</p><p>The parallel model isn&#8217;t wrong. It just doesn&#8217;t fit the systems I&#8217;ve built, or the way I work best when I actually have the capacity and time to think.</p><h2>What AI actually changed (it&#8217;s not speed)</h2><p>Last year I had a monitoring dashboard on my list for weeks. The work itself wasn&#8217;t complicated, but it involved stitching together three different APIs, writing a bunch of boilerplate, and wiring up error handling that I knew would be tedious. I kept pushing it to next week because the thought of grinding through those first two hours before anything interesting happened was enough to make me pick a different task every morning.</p><p>When I finally sat down and used an agent to generate the scaffolding, the whole thing took about the same amount of time it would have taken before. But I didn&#8217;t dread it. The activation energy dropped. Getting a rough first pass and then shaping it felt completely different from staring at an empty file trying to summon the motivation to type the first import statement.</p><p>That pattern keeps repeating. The gain isn&#8217;t speed, it&#8217;s ease. And ease is genuinely valuable even if it never shows up in a velocity chart.</p><p>For prototypes and throwaway experiments, the speed gains are real too. I can get a working proof of concept in an afternoon that would have taken two or three days. But for anything where I care about quality, the gains shrink fast. AI can&#8217;t one-shot what I need. It gets me a first draft, writes some code, and then I&#8217;m deep in the work anyway, reworking structure, questioning assumptions, rewriting. The higher the stakes, the more AI becomes a different starting point rather than a shortcut.</p><h2>The other extreme: removing yourself entirely</h2><p>There&#8217;s another response to the productivity gap that I think is worth naming. Some people don&#8217;t adjust how they use AI. They try to remove themselves from the loop altogether. If the bottleneck is human attention, the logic goes, just take the human out. Delegate everything. Let the agents run. Review at the end, if at all.</p><p>But when you remove yourself from the work, you also remove the thing that makes the work yours. Your taste, your context, your understanding of why this particular decision matters in ways a model can&#8217;t see. There&#8217;s a real difference between &#8220;I used AI to explore five approaches I wouldn&#8217;t have had time to consider&#8221; and &#8220;I let AI pick the approach and shipped it.&#8221;</p><p>I wrote about this in <a href="https://newsletter.thelongcommit.com/p/the-quiet-surrender-to-ai">The Quiet Surrender to AI</a>, and I keep seeing it play out.
The slide from &#8220;this tool helps me think&#8221; to &#8220;this tool thinks for me&#8221; is gradual enough that you don&#8217;t always notice it happening.</p><p>The productivity question and the autonomy question turn out to be the same question.</p><p>Running five agents in parallel and removing yourself from the output look like opposite problems, but they share the same root: trying to match AI&#8217;s throughput with a human brain. Both of them quietly trade away the thing that made you valuable in the first place.</p><h2>Where I&#8217;ve landed (for now)</h2><p>What I didn&#8217;t expect is how quickly AI workflows can erode a system you trust. When agents produce output constantly, the pull is to react to whatever they just generated rather than work on what you decided matters most. You check what an agent finished, get drawn into reviewing it, start a follow-up prompt, and suddenly your morning is gone and you haven&#8217;t touched the thing at the top of your list. The agents don&#8217;t know your priorities. They just keep generating, and if you&#8217;re not careful, their output starts setting your agenda.</p><p>That&#8217;s a strange place to end up for someone who used to decide his entire week in advance. I went from controlling my time to reacting to whatever an AI happened to finish first.</p><p>I&#8217;m trying to take that back. When I use AI now, I try to focus it on one problem at a time, chosen deliberately, not reactively. I use it to get past the blank page, to handle the parts of a task I&#8217;d otherwise avoid, and to explore ideas faster when the stakes are low. For deeper work, I treat it as a thinking partner rather than a production line.</p><p>But I&#8217;d be lying if I said I&#8217;ve figured this out. The temptation to match AI&#8217;s speed is constant. It generates in seconds, and before you know it you&#8217;re trying to keep up, cycling through outputs and decisions at a pace your brain was never built for. You still need time to hold a problem, evaluate an approach, and decide whether the output is actually good or just fast.</p><p>I think the honest productivity gain is somewhere around 10-20% on a good week. It feels like 3x almost every day. I don&#8217;t fully trust either number yet. What I do know is that the person I was before these tools, the one who lived by his calendar and trusted his systems, had something right that I don&#8217;t want to lose in the rush to adopt a new way of working.</p>]]></content:encoded></item><item><title><![CDATA[AI Gave Everyone a Multiplier. Most Used It to Subtract.]]></title><description><![CDATA[I've worked inside the "do the same for less" machine and inside a culture that lets a small team build what used to require departments.
AI is forcing every company to pick.]]></description><link>https://newsletter.thelongcommit.com/p/ai-gave-everyone-a-multiplier-most</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/ai-gave-everyone-a-multiplier-most</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Tue, 31 Mar 2026 12:23:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6a6d52ed-6d51-4a17-9e86-0a25cef63a58_1376x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>I spent a decade building software at Siemens. Shared Services first, then Energy. A company with 180,000 employees where every project I touched had the same underlying question: how do we do this for less? The market was defined. Growth was slow. So the energy went into optimization. Do what we&#8217;re doing, cheaper.</p><p>Then I joined Auth0, now part of Okta. I came in as a developer advocate focused on writing blog posts. With my Siemens conditioning I figured that meant staying in my lane. But the culture kept pulling me wider. Within months I was contributing to SDKs, doing live streams, speaking at conferences. I eventually got promoted to lead the content team, and now a small group of us runs programs that would&#8217;ve required separate departments at my old company. Not because we&#8217;re stretched thin, but because the culture is built to let people operate beyond their job description.</p><p>I&#8217;ve been thinking about these two worlds a lot lately, because of AI. When I see a company hand its teams a productivity multiplier and immediately start cutting headcount, I recognize the reflex. I&#8217;ve worked inside that logic. &#8220;Do the same for less&#8221; is the default when you don&#8217;t have a growth thesis. And I&#8217;m worried that AI is giving a lot of companies permission to act on that default faster than ever. Plenty of people are asking the growth question too. But when both options land on the same leadership table, the cost cut tends to win. It&#8217;s the one you can put in a slide with a dollar figure attached.</p><h2>What changes when you point the multiplier at growth</h2><p>I&#8217;ve always carried around more ideas than I had time to pursue. Things I wanted to write, projects I wanted to try. Not because I lacked the skill, but because there are only so many hours in a day and I also like to go home and be with my kids. Those ideas just sat there, some of them for years.</p><p>AI cleared the path for a lot of that. This newsletter is a good example. I&#8217;d been carrying the idea around for a long time, but the activation energy of writing regularly on top of everything else was too high. AI made the process of getting my thinking into something publishable realistic in a way it wasn&#8217;t before.</p><p>My team has felt the same shift. We&#8217;re a creative group with more ideas than we&#8217;ve ever had bandwidth for, and AI gave us the room to actually try some of them. We&#8217;re producing work now that wasn&#8217;t on anyone&#8217;s roadmap six months ago. Not because the roadmap changed, but because things that used to feel out of reach became possible.
Problems we&#8217;d been punting on for years, ideas that would&#8217;ve died in a prioritization meeting because nobody had the bandwidth. Auth0&#8217;s culture already encouraged that kind of expansion. AI just widened the door.</p><h2>The part I&#8217;m still working through</h2><p>Here&#8217;s where my thinking gets less clean. Because there&#8217;s a version of this where I&#8217;m wrong, and I want to be honest about it.</p><p>Not every company is in growth mode. I know this firsthand. At Siemens, the addressable market for Shared Services wasn&#8217;t expanding. The product was stable. The customers were internal. If you&#8217;d handed my team a tool that doubled our output, I&#8217;m genuinely not sure what we would have done with the extra capacity. You can invest in quality. You can pay down tech debt. But those investments have diminishing returns. At some point, the honest answer might be that you don&#8217;t need the capacity.</p><p>And there&#8217;s a harder version of this problem that I think most people in tech aren&#8217;t reckoning with yet. What happens if AI increases productivity faster than markets can grow?</p><p>If every company in your space can suddenly produce twice as much, but customer demand hasn&#8217;t doubled, you end up in a world where surplus capacity is the norm. In that world, the companies cutting headcount aren&#8217;t being unimaginative. They&#8217;re being realistic about a market that can&#8217;t absorb what their teams are now capable of producing.</p><p>I don&#8217;t have a good answer for that. It&#8217;s possible we&#8217;re heading into a period where productivity and demand decouple in ways that make &#8220;just build more&#8221; genuinely naive advice. The history of technology is full of moments where automation created abundance that the market took decades to figure out what to do with.</p><h2>Before you cut</h2><p>I think most companies are cutting too fast. Not all of them. Some are making hard, honest calls about markets that aren&#8217;t growing. But most of the layoffs I&#8217;m seeing aren&#8217;t that. They&#8217;re the path of least resistance. The first move, not the last resort. And once you cut, the option to explore disappears with the people you let go.</p><p>What I&#8217;d want to see, if I had any say in it, is a company that gets handed a productivity multiplier and spends one quarter asking what its team could build with the extra capacity before deciding to shrink. Just one quarter. That&#8217;s not a big ask. But it almost never happens because the cost savings are right there on the spreadsheet and the upside of exploration is speculative.</p><p>I&#8217;ve worked inside the model where optimization is the only gear, and inside a company where a small team with room to grow will find things worth building that nobody planned for. I can&#8217;t pretend to be neutral about which one I&#8217;d rather build in. But I also can&#8217;t pretend the growth answer is always right. Some markets really don&#8217;t have room.
Some companies really are done expanding.</p><p>What I do believe is that most of them haven&#8217;t checked.</p>]]></content:encoded></item><item><title><![CDATA[The Case for Becoming a Manager]]></title><description><![CDATA[I read an article recently arguing that senior engineers shouldn't become managers. Its observations are mostly right. The conclusion is still wrong. I made the switch last year and here's what I learned.]]></description><link>https://newsletter.thelongcommit.com/p/the-case-for-becoming-a-manager</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/the-case-for-becoming-a-manager</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Tue, 24 Mar 2026 10:58:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/05c70c7d-e58f-49e4-9aba-a26338d1dc1c_3440x1920.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>The question of whether experienced engineers should move into management has been on my mind for a while. Not as an abstract career question, but as something I&#8217;ve lived through. I made the switch last year and I&#8217;ve been turning over what I learned from that decision ever since. I kept putting off writing about it because the topic is genuinely complicated and I wasn&#8217;t sure I had a clean take.</p><p>Then I read <a href="https://newsletter.manager.dev/p/dont-become-an-engineering-manager">&#8220;Don&#8217;t become an Engineering Manager&#8221;</a> by Anton Zaides, and it gave me the push I needed. The arguments were sharp: the tech landscape is moving too fast to step away from hands-on work, the management ladder is flattening, and the pay is often lower than what a Staff engineer can command elsewhere. I agree with most of the observations. But the article frames management as a ladder optimization: which track has better odds, where the ceiling is lower. I think that framing leads you to the wrong answer. The more interesting question is which skills you want to be building. When you look at it that way, the conclusion changes.</p><h2>Why I switched</h2><p>For most of my career I was an individual contributor. I loved writing code. I thought that was all I wanted to do.</p><p>What eventually pulled me toward management wasn&#8217;t dissatisfaction with IC work. It was impact. No matter how good you get, your output as an individual has a ceiling. I&#8217;d already bumped into that once when I moved from engineering into developer advocacy, and management was the next version of the same realization. If I could enable a team to do their best work, the collective output would be far greater than anything I&#8217;d produce on my own.</p><p>When a leadership gap opened up on the content team I&#8217;d been closest to, I saw my window. I pitched myself for the role before I felt ready for it. I didn&#8217;t know much about management, but I had a genuine connection to what the team was building and enough conviction to figure out the rest on the job.</p><p>That was enough to get started. It was not enough to be good at the job on day one.
But even in this first year, it&#8217;s been one of the most rewarding chapters of my career.</p><h2>Management is a skill decision, not a title decision</h2><p>The conversation around engineering management almost always focuses on what you give up. Less time writing code. Less freedom to choose how you spend your day. A step off a technical track where demand and compensation are both high. All true.</p><p>What rarely gets discussed is what you gain. Not in title or authority, but in a set of skills that most engineers never build because nothing forces them to.</p><p>The most valuable thing management has taught me is how to communicate with precision when someone else&#8217;s work depends on it. When you&#8217;re an IC, unclear communication slows you down. When you&#8217;re a manager, unclear communication breaks your team. That difference in consequences makes you learn faster than you would any other way.</p><p>I&#8217;ll give you a specific example. A few months into the role, I was briefing a team member on a content project. I had the whole thing mapped out in my head: the structure, the angle, the audience it needed to reach. I started writing it all down, basically handing over a blueprint. And then I caught myself. I was about to do the thing I&#8217;d always done as an IC, solve the problem my way, except now I was asking someone else to execute my solution instead of finding their own.</p><p>So I pulled back. I shared the goal instead. Here&#8217;s who the piece is for, here&#8217;s what it needs to accomplish, here&#8217;s why it matters right now. And what came back was different from what I would have built. It was better in places I hadn&#8217;t considered, because the writer brought their own perspective to a problem I&#8217;d only described the shape of.</p><p>Here&#8217;s the thing: I wouldn&#8217;t have noticed that habit as an IC. When your own thinking is muddled as an individual contributor, you just iterate until it works. Nobody else has to interpret your intent. Management removed that escape hatch. If my team doesn&#8217;t understand what I&#8217;m after, I can&#8217;t quietly fix it myself. I have to actually get better at sharing the why, not just the what.</p><p>And that forced improvement surprised me by showing up everywhere else. It changed how I write, specifically. Running a content team while also writing a newsletter means I&#8217;m constantly testing whether I can articulate what I actually mean, not just what sounds right in my head. A year ago I would have drafted something, felt good about it, and moved on. Now I catch myself asking: would someone else know what to do with this? That question didn&#8217;t exist for me before management put it there.</p><h2>Goals vs. tasks</h2><p>There&#8217;s a distinction I think about constantly now that I never had language for as an IC: the difference between giving someone a task and sharing a goal.</p><p>Theo from <a href="https://t3.gg/">t3.gg</a> recently shared an example that captures this perfectly. He was testing whether an AI coding agent could build a competitive chess engine from scratch.
His prompt: &#8220;Build a program with no dependencies that can beat Stockfish level 17.&#8221; Straightforward. The model worked for 30 minutes and came back with something that won consistently. But when he looked at what it actually built, the agent had downloaded Stockfish and used it to play against itself. Task completed. Goal completely missed.</p><p>Once he reframed the prompt to specify intent (&#8220;build your own chess engine from scratch, the goal is to evaluate whether you can implement an engine that competes&#8221;), the model understood. The difference wasn&#8217;t complexity. It was clarity about what success actually meant.</p><p>That content project I mentioned earlier? Same dynamic. When I almost handed over the blueprint instead of the goal, I was about to do exactly what that prompt did: describe implementation instead of intent. In both cases, with people and with AI, the fix is the same: share what you&#8217;re trying to achieve and why, then trust the other side to find the path.</p><p>That self-correction loop is a management skill. Noticing when the output is wrong and asking what you could have communicated differently, instead of just blaming the execution. And right now it&#8217;s becoming relevant well beyond management. Every developer is increasingly managing AI agents. The better you are at articulating intent and separating the goal from the implementation path, the better those agents perform. I didn&#8217;t expect that when I made the switch. But it&#8217;s one of the things I value most about it.</p><h2>The part nobody prepares you for</h2><p>The skills are one thing. The identity shift is another.</p><p>My situation was a bit unusual. I became the manager of people I&#8217;d been working alongside, some of them on the same team. These were colleagues I had close relationships with. We&#8217;d shared frustrations, swapped opinions, been peers in every sense of the word.</p><p>That changes when you become their manager. Not because you want it to, but because the role creates lines that didn&#8217;t exist before. Conversations you used to be part of are now conversations you should probably step back from. Dynamics shift in ways that are subtle but real. And if you&#8217;re anything like me, you don&#8217;t love hierarchy. You resist it. I still do the work. I write. I do social listening. I show up as a team member as much as a leader, because that&#8217;s the only version of this role I&#8217;m interested in doing.</p><p>But I&#8217;d be dishonest if I said the transition was seamless. I&#8217;ve already had to make one of the hardest decisions a manager faces, and it changed how I carry the role. There&#8217;s a weight to it now that I didn&#8217;t fully appreciate from the outside. The relationships haven&#8217;t broken, but they&#8217;ve evolved. Navigating that, being someone your team trusts enough to follow while staying close enough to the work that you&#8217;re not managing from a distance, is a balance I&#8217;m still figuring out.</p><p>Nobody talks about this part when they debate the IC-versus-manager decision. The articles focus on ladders and compensation and market demand. But the actual lived experience of management is more personal than any of that. It&#8217;s about who you become when the job stops being about your output and starts being about everyone else&#8217;s.</p><h2>What about the practical concerns?</h2><p>None of that personal growth erases the practical reality.
And the practical arguments against management right now are real.</p><p>Companies are flattening. The path from EM to Director is more competitive than it was five years ago, with fewer Senior EM roles to bridge the gap. And Staff engineers often earn more than first-time EMs when you compare across companies. Zaides mentions his friend could have made 20-30% more staying IC and switching companies. That&#8217;s a real number. I knew when I made the switch that I wasn&#8217;t optimizing for compensation. I made the move anyway because I believed what I&#8217;d gain in skills and perspective would be worth more over time than the salary delta.</p><p>But those arguments assume management is a permanent track. Most people I&#8217;ve seen do it well don&#8217;t treat it that way. They step in, build the skills, and then decide what they want next with far better information than they had before. Some stay and grow into leadership. Some go back to IC work and find themselves significantly more effective for having done it.</p><p>That&#8217;s because management develops instincts that pure IC work never forces you to build: how to align across teams, how to communicate with stakeholders who don&#8217;t share your context, how to evaluate competing priorities when there&#8217;s no obvious right answer. A former manager returning to an IC role isn&#8217;t starting over. They&#8217;re bringing tools most ICs never pick up. And they&#8217;re starting their next negotiation from a higher baseline.</p><p>Then there&#8217;s the advice to wait a couple of years until things settle. I understand the impulse. But the industry isn&#8217;t going to pause and send you a signal when it&#8217;s safe to switch. Waiting means spending two more years building one type of skill while the set of skills management develops sits untouched. And from what I can see, the skills that management forces you to build are the ones with the rising premium right now.</p><h2>Take the opportunity</h2><p>I didn&#8217;t have a five-year plan that said &#8220;become a manager.&#8221; An opportunity appeared that aligned with something I&#8217;d been thinking about for a while. I wasn&#8217;t ready. I pitched myself anyway.</p><p>Conviction but not credentials. That&#8217;s what I walked into that conversation with. And it turned out to be enough, not because I was secretly qualified, but because the willingness to learn the parts I didn&#8217;t know mattered more than already knowing them.</p><p>If a management opportunity is in front of you, and the idea of enabling a team and getting sharper at communicating intent sounds like a genuine challenge you want to take on, take it. You won&#8217;t be great at it immediately. I wasn&#8217;t. But the things you&#8217;ll learn about describing outcomes instead of steps, about catching yourself when your instinct is to just fix it yourself, those stay with you regardless of where your career goes next.</p><p>I&#8217;m still early in this. I&#8217;m still learning how to pull back when I want to prescribe, how to trust the process when it&#8217;d be faster to just do it myself.
But I&#8217;m a better communicator, a better writer, and a better collaborator than I was a year ago, and I don&#8217;t think any of that would have happened if I&#8217;d stayed on the IC track and waited for the &#8220;right time&#8221; to make the switch.</p><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[I Think a New Role Is Emerging in Tech]]></title><description><![CDATA[It doesn't have a name yet, but it's already reshaping how teams build software.]]></description><link>https://newsletter.thelongcommit.com/p/i-think-a-new-role-is-emerging-in</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/i-think-a-new-role-is-emerging-in</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Tue, 17 Mar 2026 16:33:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bcCi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa79f85f5-ef04-4116-9ecd-e115b5c3feb8_1376x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Every major shift in developer tooling has eventually changed how teams get organized. Not immediately, and rarely in the ways people predict.</p><p>The full-stack developer is probably the best example. Front-end and back-end are legitimately different disciplines. The mental models don&#8217;t overlap much, the tooling is different, and the failure modes look nothing alike. But frameworks, shared languages, and better tooling created a layer that let one person operate across both sides of the stack. Not as deep as a pure specialist in either, but deep enough to hold the full picture of a feature from database to browser. For enough companies and enough products, the tradeoff was worth it, and the role stuck.</p><p>I think AI is creating the same kind of abstraction, but in a different direction. Not vertical, across the tech stack.
Horizontal, across the org chart.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!bcCi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa79f85f5-ef04-4116-9ecd-e115b5c3feb8_1376x768.png" width="1376" height="768" alt="Diagram comparing two types of abstraction. Top: a vertical bar connects two stacked boxes labeled Front-end and Back-end, representing the full-stack developer working across the tech stack. Bottom: a horizontal bar connects three side-by-side boxes labeled Product, Engineering, and DevRel, representing the emerging role working across the org chart."><figcaption class="image-caption">The full-stack developer was a vertical abstraction across the tech stack. AI is creating a horizontal one across the org chart.</figcaption></figure></div><p>Software teams have been organized around specialization for a couple of decades now. Building software is complex enough that we split the work across product managers, engineers, and developer relations. Each role reflects a genuine body of knowledge that takes years to develop.</p><p>But that split came with coordination overhead we mostly stopped noticing because it became so normal. The spec that&#8217;s outdated by the time engineering reads it. The roadmap review where three teams discover they&#8217;ve been building against different assumptions. The feedback from users that takes two weeks (when lucky) to travel from the DevRel team through product and into a Jira ticket an engineer might see next quarter.
The work of keeping the machine aligned sometimes dwarfs the work the machine was supposed to do.</p><p>AI is compressing that overhead faster than most org charts can adapt.</p><p>A product manager can now spin up a working prototype in Cursor or Lovable in a few hours, put it in front of users, and generate real feedback before engineering writes a line of production code. That&#8217;s not the PM &#8220;learning to code.&#8221; That&#8217;s an abstraction layer that lets someone with product judgment operate in engineering&#8217;s territory well enough to validate an idea. An engineer can take a feature they just built and generate documentation, draft user-facing copy, and think through how this change should be communicated to the developer community. Not because they suddenly became a technical writer, but because AI handles enough of the execution that their understanding of the system (which was always the hard part) can flow directly into outputs that used to require a different team.</p><p>LinkedIn&#8217;s chief economic opportunity officer recently described what he called the &#8220;full stack builder&#8221; who compresses what used to take days across design, product, and engineering into a single person with AI tools. Walmart now has dedicated agent builder roles that didn&#8217;t exist a year ago, filled internally by employees who crossed traditional role boundaries. And some companies have taken it further than a new title. Boris Cherny, creator of Claude Code, mentioned on the Pragmatic Engineer podcast that everyone at Anthropic carries the same title: Member of Technical Staff. Engineers do research. Researchers write code. People move across what would be departmental boundaries anywhere else. Flat titles aren&#8217;t new, but AI has made the structure more viable by collapsing the distance between functions enough that one person, with the right tools and the right depth, can operate fluidly across them. When the horizontal abstraction layer is thick enough, the case for separate titles gets hard to make.</p><p>The word &#8220;<strong>builder</strong>&#8221; is catching on as shorthand for all of this. But I think it flattens something more interesting.</p><p><strong>What&#8217;s actually emerging is a role whose shape is determined by the product, not by the org chart.</strong></p><p>A company building developer infrastructure needs someone whose core is engineering architecture. Someone at the staff or principal level who understands system design deeply enough that when AI extends their reach into product decisions and developer experience, they can evaluate whether the output is actually good. AI can draft a product spec, but knowing whether that spec addresses the right problem for the right user takes judgment formed by years of building and operating these systems.</p><p>A consumer-facing product might need the inverse: someone whose depth is user research and product instinct, with AI extending them into implementation. They can prototype and ship features in ways they couldn&#8217;t before.
But the anchor is that they&#8217;ve watched enough users abandon a flow or misunderstand a feature that they can feel when a prototype is solving the right problem versus just looking like it does. AI handles execution. The product instinct tells it where to aim.</p><p>The horizontal abstraction layer is the same in both cases. The anchor point is different. And the anchor point is determined by what the product needs, not by which department someone sits in.</p><p>The full-stack analogy cuts both ways, though, and the uncomfortable part matters. Full-stack developers were controversial for a reason: you often got mediocre work in both domains. The same criticism will show up here. &#8220;You&#8217;ll get someone who&#8217;s a mediocre PM and a mediocre engineer, all in one convenient package.&#8221; It&#8217;s a fair concern. The difference is that AI changes the math. When the full-stack developer emerged, you still had to write the CSS and design the database schema yourself. The abstraction layer was thin. With AI, the gap between &#8220;I understand this domain well enough to direct the work&#8221; and &#8220;I can produce professional-level output&#8221; has narrowed enough to change how teams get structured. Not to zero. But enough.</p><p>And the people best positioned to take advantage of that are the ones who&#8217;ve been deep enough in at least one domain to know what good looks like across the others. A senior engineer can tell when AI-generated code is architecturally sound or just syntactically correct. A seasoned PM can see through a prototype that looks impressive in a demo but isn&#8217;t testing a real hypothesis. You don&#8217;t develop that instinct from AI fluency. You develop it from years of <a href="https://newsletter.thelongcommit.com/p/the-quiet-surrender-to-ai">doing the work without shortcuts</a>, which raises a real question about where the next generation of senior builders comes from when <a href="https://newsletter.thelongcommit.com/p/the-talent-pipeline-is-collapsing">the junior roles that used to build that depth</a> are exactly the ones getting compressed. I don&#8217;t have a clean answer for that. I&#8217;m not sure anyone does yet.</p><p>The career model most of us internalized (pick a specialization, go deep, move up the ladder inside that lane) is getting harder to map onto what&#8217;s actually happening. The lanes are merging. The question that matters now isn&#8217;t &#8220;what title do I want next?&#8221; It&#8217;s &#8220;what product or problem am I deep enough to own end-to-end?&#8221;</p><p>The full-stack developer proved that one person could work across the stack if the tooling was good enough. The tooling just got a lot better. And the stack just got a lot wider.</p><p>I&#8217;m navigating this shift in real time, the same as you. If you want to follow along as I figure out what this new landscape looks like from the inside, with 20+ years of context and zero pretense of having all the answers, subscribe to The Long Commit. I write weekly about developer careers, AI, and the long game in engineering.</p>]]></content:encoded></item><item><title><![CDATA[The Talent Pipeline Is Collapsing.
Your Team Will Feel It Next.]]></title><description><![CDATA[The short-term math of not hiring juniors makes perfect sense, until you realize what it costs your seniors, your culture, and your future.]]></description><link>https://newsletter.thelongcommit.com/p/the-talent-pipeline-is-collapsing</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/the-talent-pipeline-is-collapsing</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Tue, 10 Mar 2026 22:55:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a06f8aa5-ea6f-4eca-8cf8-da8d327d391d_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Something is breaking in how our industry builds its next generation of engineers. Most of the people responsible for it haven&#8217;t noticed yet. Or if they have, they&#8217;ve decided the short-term math justifies it.</p><p>Over the past two years, companies across the tech sector have been pulling back from hiring junior developers. Some quietly, through budget decisions that never get announced. Some loudly, as strategic positioning. The logic sounds reasonable. AI tools have made senior engineers dramatically more productive, so why invest in someone who needs six months of ramp-up when a well-equipped senior can cover the gap? It&#8217;s a clean story. It&#8217;s also, I believe, a dangerously incomplete one.</p><p>Here&#8217;s what the landscape actually looks like right now.</p><p>At the biggest tech companies, <a href="https://byteiota.com/developer-hiring-crisis-2026-40-worse-junior-drops-73/">new graduates went from roughly a third of all hires in 2019 to somewhere around 7% today</a>. In the US, <a href="https://spectrum.ieee.org/ai-effect-entry-level-jobs">entry-level hiring at the top 15 tech firms fell 25% from 2023 to 2024</a> alone.</p><p>The research paints an even starker picture. A <a href="https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/">Stanford Digital Economy Lab study</a> analyzing millions of payroll records found that employment for software developers aged 22 to 25 declined nearly 20% from its late-2022 peak, while employment for those over 30 held steady or grew. A <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5425555">Harvard study</a> tracking 62 million workers across 285,000 firms found that when companies adopt generative AI, junior employment drops 9 to 10% within six quarters. Senior employment barely moves.</p><p>The trend isn&#8217;t limited to quiet budget decisions either.
<a href="https://www.cnbc.com/2026/02/26/block-laying-off-about-4000-employees-nearly-half-of-its-workforce.html">Block cut 40% of its entire workforce</a> just weeks ago, with CEO Jack Dorsey citing AI as the reason. Those weren&#8217;t junior-specific cuts, but the underlying logic is the same one driving this whole shift: smaller teams, more AI, fewer humans. <a href="https://sfstandard.com/2025/02/27/salesforce-marcbenioff-layoffs-tech-agents/">Salesforce announced it would halt engineering hiring entirely for 2025</a>, citing AI agents. Klarna <a href="https://codeconductor.ai/blog/future-of-junior-developers-ai/">froze developer hiring in late 2023</a> (then reversed course when the strategy failed). A LeadDev survey found that 54% of engineering leaders plan to hire fewer juniors, thanks to AI copilots enabling seniors to handle more.</p><p>The reasoning is consistent across every boardroom version of this story: why pay a junior $80-100K plus six months of ramp-up when a senior with AI tools can cover triple the output? The math makes sense. On paper, it looks clean.</p><p>I&#8217;ve been watching this unfold for two years now, and I believe it&#8217;s one of the most short-sighted decisions a generation of engineering leaders is making simultaneously.</p><p>I&#8217;ve been in this industry for over twenty years. What concerns me isn&#8217;t the individual company choosing to slow junior hiring for a quarter or two. It&#8217;s the industry-wide retreat happening all at once, with almost no public conversation about what it costs.</p><p>This isn&#8217;t a story about being nice to new grads. It&#8217;s about what happens to your team (the seniors you&#8217;re leaning on, the culture you&#8217;re building, the org you&#8217;re responsible for) when you cut off the bottom of the ladder and expect the structure to hold.</p><div><hr></div><h2>The weight is shifting upward</h2><p>Here&#8217;s something the &#8220;seniors can do everything&#8221; crowd doesn&#8217;t talk about. Senior engineers need juniors as much as juniors need them.</p><p>Not out of charity. Out of cognitive self-preservation.</p><p>A healthy engineering team has a natural rhythm to it. Complex architectural decisions flow to seniors. Lower-risk tasks (UI tweaks, unit tests, bug fixes, small features) get delegated down. This isn&#8217;t just about efficiency. It&#8217;s a pressure valve. It gives senior engineers the space to think at the level you&#8217;re actually paying them to think at.</p><p>When you eliminate juniors and hand AI the &#8220;simple&#8221; work instead, something breaks. Your seniors don&#8217;t suddenly spend all their time on brilliant architecture. They spend it trying to keep up with an output pipeline that has no natural throttle.</p><p>Here&#8217;s what I mean. AI can produce working code. That&#8217;s not really the issue. The issue is that it produces so much of it, so cheaply, that the traditional model of reviewing code line by line simply doesn&#8217;t scale anymore. When a junior wrote a pull request, a senior could sit with it for twenty minutes, understand the intent, catch the mistakes, and teach something in the process. When AI generates the equivalent of dozens of those in a day, that same review process becomes impossible. There aren&#8217;t enough hours. There aren&#8217;t enough senior engineers. 
The economics that made it attractive to replace juniors with AI are the same economics that make the output impossible to properly verify at the pace it&#8217;s being produced.</p><p>This means the whole notion of how teams review and maintain quality is changing, whether they&#8217;ve acknowledged it or not. Most haven&#8217;t. They&#8217;re still applying a human-paced review process to machine-paced output, and the gap between those two speeds is where quality quietly erodes.</p><p>I&#8217;ve seen this described in a way that stuck with me: senior developers are becoming &#8220;review bottlenecks instead of innovative contributors.&#8221; They&#8217;re no longer in the creative flow of building systems. They&#8217;re auditing output from a machine that never gets tired but also never truly understands the codebase.</p><p>The tasks that used to train juniors and give seniors breathing room have been automated, but the cognitive load hasn&#8217;t decreased. It&#8217;s shifted upward, onto the people who were already carrying the most complex work. And those people are starting to wear down.</p><p>A LeadDev survey of engineering leaders found that 22% of developers are at critical burnout levels. Seniors, the ones with the most responsibility, report lower job satisfaction than juniors. A <a href="https://www.harness.io/state-of-developer-experience">Harness survey</a> found that 67% of developers spent more time debugging AI-generated code than expected, and 68% spent more time fixing the security issues it introduced.</p><p>I&#8217;ve felt this myself. I&#8217;ve <a href="https://jcmartinez.dev/post/the-real-reasons-why-developers-burnout">written before</a> about how developers rarely burn out from writing too much code. They burn out from everything that prevents them from doing it well. What&#8217;s changed is that AI has introduced a new version of that problem. The days when I&#8217;m most productive on paper (the ones where AI helped me ship the most) are often the days I&#8217;m most drained. The old bottleneck was typing speed and lookup time. The new bottleneck is judgment. And judgment doesn&#8217;t scale the way output does.</p><p>This is what burnout looks like in 2026. Not dramatic flameouts. A slow erosion. An engineer who stops pushing back in design reviews because they don&#8217;t have the energy. Code reviews that become rubber stamps. Architectural choices made by default rather than deliberation.</p><p>The people most likely to burn out are the people hardest to replace. And the thing that would relieve their burden (a layer of junior engineers to share the load, ask good questions, handle the tractable problems) is exactly what you just eliminated from your headcount plan.</p><div><hr></div><h2>The talent market is getting weird</h2><p>There&#8217;s another consequence of this shift that anyone who&#8217;s hired recently will recognize immediately. The market is becoming strangely distorted.</p><p>Open a role for an engineer right now and you&#8217;ll be flooded with applications. One company <a href="https://newsletter.pragmaticengineer.com/p/state-of-the-tech-market-in-2025-hiring-managers">reported getting 600 applications in two days</a> for a single senior frontend position, stopping intake after they couldn&#8217;t process more. <a href="https://ravio.com/blog/tech-hiring-trends">Ravio&#8217;s 2025 Tech Job Market Report</a> found that entry-level hiring dropped 73% year over year, while overall hiring rates only dipped 7%. That gap tells you something. 
The people who would have entered through junior roles are now competing for whatever&#8217;s one rung up.</p><p>Many of these applicants graduated two or three years ago, built solid skills, but never got the junior role that would have given them the &#8220;mid-level&#8221; reps. They&#8217;re self-taught in the gap. Capable in ways that don&#8217;t fit neatly into traditional leveling. They&#8217;ve been building side projects, contributing to open source, doing contract work. Anything to accumulate the experience that a junior position would have provided naturally. So they apply for mid-level roles because that&#8217;s the closest match to where they actually are, even if the trajectory that got them there looks nothing like what hiring managers expect.</p><p>Now try to hire a senior or staff engineer. Completely different story. <a href="https://www.roberthalf.com/us/en/insights/research/data-reveals-which-technology-roles-are-in-highest-demand">Robert Half&#8217;s research</a> found that 65% of technology hiring managers say it&#8217;s more challenging to find skilled professionals than it was a year ago. <a href="https://survey.stackoverflow.co/2024/">Stack Overflow&#8217;s Developer Survey</a> shows that 67% of senior engineers receive multiple offers before they even post a resume publicly. The pipeline of people growing into those roles has thinned, and the people already there know exactly how valuable they are. Companies are paying retention premiums, handing out counteroffers, restructuring teams around keeping their most experienced people.</p><p>This is the market that the &#8220;we don&#8217;t need juniors&#8221; strategy creates. A bloated middle where companies can&#8217;t differentiate between someone with three years of structured experience and someone with three years of scrappy self-direction. An empty top where every hire turns into a bidding war. And a growing gap between the two that nobody is investing in closing.</p><p>As one hiring expert <a href="https://ravio.com/blog/tech-hiring-trends">put it</a>: &#8220;If you don&#8217;t hire and nurture young talent now, what will your mid-level and leadership positions look like in five years? We&#8217;re heading towards some very difficult and expensive recruitment to fill that gap.&#8221;</p><div><hr></div><h2>The knowledge transfer problem nobody&#8217;s modeling</h2><p>There&#8217;s a cost that&#8217;s even harder to see from the planning meeting, and it&#8217;s the one that concerns me the most.</p><p>Every piece of institutional knowledge on your team lives in someone&#8217;s head. How the payment system actually works, not how the docs say it works. Why that service was split in 2021 and why you can never merge it back. The customer edge case that crashes the billing module every February.</p><p>This knowledge has always transferred through a specific mechanism: senior engineers teaching junior engineers by working alongside them. The junior asks a question that feels basic. The senior explains the answer. That explanation forces the senior to articulate something they&#8217;d never written down. The knowledge becomes shared.
The bus factor drops.</p><p>When you stop hiring juniors, this mechanism stops. Not immediately. It degrades gradually, which is why it&#8217;s so easy to ignore. But three years from now, when your senior architect leaves for a role that doesn&#8217;t require them to review AI output twelve hours a day, they&#8217;re taking everything with them. And there&#8217;s nobody two levels down who absorbed even a fraction of it, because that person was never hired.</p><p>Bureau of Labor Statistics data shows that 18% of senior developers born between 1970 and 1980 plan to retire before 2027. These aren&#8217;t people you can replace by turning up the AI dial. Their value was never in how fast they typed.</p><div><hr></div><h2>The bet nobody&#8217;s stress-testing</h2><p>I hear the counterargument constantly. AI will just keep getting better. The code it generates will become more reliable. The review burden will decrease. The productivity gains will compound.</p><p>And honestly? That might be true. I am not here to argue that AI won&#8217;t improve.</p><p>But I want to point out something that I think should make every engineering leader uncomfortable. The &#8220;we don&#8217;t need juniors&#8221; strategy only works if AI delivers on its most optimistic trajectory, continuously, for years, without interruption. That&#8217;s not a strategy. That&#8217;s a single point of failure dressed up as a hiring plan.</p><p>Think about what you&#8217;re actually betting on. You&#8217;re betting that AI models will keep getting cheaper, not more expensive. You&#8217;re betting that the productivity gains you&#8217;re seeing today will scale linearly as your codebase grows more complex. You&#8217;re betting that the current wave of investment in AI infrastructure will sustain itself without a correction. You&#8217;re betting that no regulatory shift, no licensing change, no market consolidation will disrupt your access to the tools your entire engineering capacity now depends on.</p><p>That&#8217;s a lot of bets. And if even one of them doesn&#8217;t land the way you expect, what&#8217;s your fallback?</p><p>I&#8217;ve been through enough cycles to know that technology productivity doesn&#8217;t exist in a vacuum. It exists inside a cycle of expectation, adoption, correction, and maturation. The technology rarely disappears. But the gap between what was promised and what gets delivered creates a window where companies suddenly need more human capacity than they planned for.</p><p>If you&#8217;ve spent the last three years hollowing out your junior pipeline, you won&#8217;t be able to rebuild it on a quarterly timeline. The talent pool you chose not to invest in won&#8217;t be sitting around waiting for your call. They&#8217;ll have left the industry, reskilled into something else, or moved to the companies that were still hiring while you were optimizing headcount.</p><p>And even in the best case, where AI continues to improve steadily, the fundamental issue remains. It&#8217;s about people, not code quality.</p><p>AI doesn&#8217;t develop judgment. It doesn&#8217;t grow into an engineering manager. It doesn&#8217;t mentor the next generation. It doesn&#8217;t notice that a teammate is struggling before it shows up in their commits. It doesn&#8217;t carry institutional memory across a decade of architectural decisions.</p><p>The question isn&#8217;t whether AI can do the work that juniors used to do. It clearly can, a lot of it at least. 
The question is: where do senior engineers come from if you never hire junior ones?</p><p>Every senior developer on your team got good by being bad first. They wrote terrible code that someone reviewed patiently. They broke staging environments and learned why the deploy pipeline exists. They sat in meetings they barely understood and slowly built the context that makes them invaluable now.</p><p>Someone invested in them before they were profitable.</p><p>If the entire industry stops making that investment simultaneously (which is roughly what&#8217;s happening), we&#8217;ll have a surplus of senior talent for a few years, followed by a cliff. The pipeline doesn&#8217;t refill on its own. And the people at the top of it are getting tired.</p><div><hr></div><h2>What this looks like if you actually lead through it</h2><p>I&#8217;m not going to pretend the old model works unchanged. You can&#8217;t hire juniors in 2026 the way you did in 2018 and expect the same outcome. The work has changed. But cutting juniors entirely isn&#8217;t strategy. It&#8217;s surrender.</p><p>I don&#8217;t think anyone has a full playbook for this yet. But here&#8217;s where I think the thinking needs to start.</p><p><strong>Redefine what &#8220;junior&#8221; means on your team.</strong> The entry-level work isn&#8217;t writing boilerplate anymore. It&#8217;s reviewing AI output, writing better prompts, testing edge cases, and building the judgment that AI can&#8217;t provide. The junior developer of 2026 looks different from the one you hired in 2018, and your job descriptions, onboarding, and expectations need to reflect that. Hire for curiosity and critical thinking, not just syntax fluency.</p><p><strong>Protect your seniors&#8217; cognitive load.</strong> If you&#8217;ve removed the delegation layer, you need to replace it with something. That might mean fewer projects running in parallel. It might mean dedicated &#8220;deep work&#8221; blocks where seniors aren&#8217;t reviewing anything. It might mean being honest that 10x output requires 10x recovery time, and adjusting expectations accordingly.</p><p><strong>Make knowledge transfer intentional.</strong> If it&#8217;s not happening through osmosis anymore (and it isn&#8217;t), then it needs to happen through documentation, architecture decision records, pair programming sessions, and structured onboarding. This is operational work that someone needs to own.</p><p><strong>Think in three-year windows, not quarterly headcount.</strong> The decision not to hire juniors saves money this quarter. The decision to have no mid-level pipeline in 2029 costs significantly more. Model it. Show the numbers to your leadership. Make the case.</p>
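<p>Here&#8217;s a minimal sketch of that model in Python. The junior salary is the midpoint of the range quoted earlier; every other parameter is a placeholder assumption I invented for illustration, so swap in your org&#8217;s real figures before you show it to anyone.</p><pre><code># Toy three-year pipeline model. All parameters are assumed placeholders
# except JUNIOR_COST, which takes the midpoint of the $80-100K range above.
JUNIOR_COST = 90_000       # fully-loaded junior salary (assumed midpoint)
MID_COST = 150_000         # assumed fully-loaded mid-level engineer
SCARCITY_PREMIUM = 0.25    # assumed markup when every org backfills at once
RECRUITING_COST = 30_000   # assumed search cost per external mid-level hire

def three_year_view(headcount: int) -> None:
    # This quarter's view: skipping juniors reads as pure savings.
    saved_now = headcount * JUNIOR_COST
    # The 2029 view: mids you grew internally vs. mids bought in a thin market.
    grow = headcount * MID_COST
    buy = headcount * (MID_COST * (1 + SCARCITY_PREMIUM) + RECRUITING_COST)
    print(f"salary saved this year:        ${saved_now:,.0f}")
    print(f"year-3 cost, grown internally: ${grow:,.0f}")
    print(f"year-3 cost, hired externally: ${buy:,.0f} (+${buy - grow:,.0f}/yr, recurring)")

three_year_view(headcount=4)</code></pre><p>The savings land once, on this quarter&#8217;s budget. The gap recurs every year after, before you count ramp-up time or the knowledge transfer that never happened.</p><p>Every generation of senior engineers was once a junior someone took a chance on. If we stop taking that chance industry-wide, we&#8217;re not just failing the next generation. We&#8217;re failing the current one, by loading them with everything, relieving them of nothing, and calling it progress.</p><p>The talent pipeline is collapsing. And it&#8217;s not the juniors who&#8217;ll feel it first.</p><div><hr></div><p><em>If you&#8217;re leading a team through this shift, I&#8217;d love to hear how you&#8217;re thinking about it. Reply to this email.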
I read everything.</em></p>]]></content:encoded></item><item><title><![CDATA[I Have 30 Years of Career Left. AI Made Me Rethink All of Them.]]></title><description><![CDATA[On judgment, hype, the joy of still building things, and learning to prepare for a future nobody can predict.]]></description><link>https://newsletter.thelongcommit.com/p/i-have-30-years-of-career-left-ai</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/i-have-30-years-of-career-left-ai</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Sat, 07 Mar 2026 13:33:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5b452627-4b9c-4402-ad91-4667e7989eee_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m turning 40 this year. That means, if I&#8217;m lucky, I have roughly 30 more working years ahead of me. Thirty years of building things, making career decisions, and trying to stay relevant in an industry that reinvents itself every five to seven years.</p><p>Until recently, that felt manageable. I&#8217;ve been in software engineering for over 20 years. I&#8217;ve survived the transition from monoliths to microservices, the mobile revolution, the cloud migration wave, the DevOps transformation. Each one felt significant at the time. Each one changed what we built or how we built it. But none of them changed whether we were needed.</p><p>AI does. And that&#8217;s a fundamentally different kind of shift.</p><h2>The part that&#8217;s actually different this time</h2><p>Every previous technology wave I&#8217;ve lived through followed the same pattern: new tools arrived, the work changed shape, and engineers adapted. You learned new frameworks, new paradigms, new infrastructure patterns. The underlying deal stayed the same. Companies needed people to build software, and if you kept your skills current, you&#8217;d be fine.</p><p>What makes AI different isn&#8217;t that it changes the tools. It&#8217;s that it changes the leverage. When one engineer with AI can do the work that used to require three, the math changes at the org level. Companies don&#8217;t just need different engineers. They need fewer of them.</p><p>I watched this play out in real time. Teams getting restructured not because the work disappeared, but because the same work now required fewer hands. Job postings that quietly raised the bar, expecting senior-level output at mid-level headcount. Entire categories of tasks (boilerplate code, documentation drafts, test generation) moving from &#8220;junior engineer&#8217;s job&#8221; to &#8220;AI&#8217;s job&#8221; almost overnight.</p><p>And the hype makes everything worse.
AI is genuinely transformative, but somewhere between &#8220;this is a useful tool&#8221; and &#8220;this will replace all engineers within five years,&#8221; the conversation went off the rails. The loudest voices in the room (often the ones furthest from the actual work) started treating AI capabilities as a foregone conclusion rather than a trajectory. CEOs read a blog post about AI agents replacing entire engineering teams and suddenly that&#8217;s the planning assumption. Headcount gets cut not because AI actually replaced those people, but because someone in leadership bought the narrative that it will.</p><p>That&#8217;s the part that keeps me up at night. Not AI itself, but the decisions being made on the back of AI hype by people who don&#8217;t understand what software engineering actually involves. The gap between what AI can do today and what executives think it can do today is enormous, and real careers are getting caught in that gap.</p><p>I sat down one evening and tried to project what my career looks like in 2035, and for the first time in two decades, I had no credible model for it. Not because the technology scared me, but because I couldn&#8217;t predict which version of the story the industry would choose to believe. Not a pessimistic model, not an optimistic one. Just a blank space where the plan used to be.</p><p>That blank space is what got me moving.</p><h2>I&#8217;m betting on judgment, not output</h2><p>What AI can&#8217;t do (at least not yet, and I&#8217;d argue not for a long time) is exercise judgment in context.</p><p>Here&#8217;s what made it click for me. I&#8217;ve been using Claude Code lately, and it&#8217;s good. Not &#8220;neat party trick&#8221; good. Actually good. The kind of good where I ask it to build something and the code that comes back is clean, well-structured, and works on the first run more often than I&#8217;d like to admit. A year ago I could dismiss AI-generated code as a rough draft that needed heavy editing. Now? Now it writes code that looks like something I&#8217;d write. Sometimes better.</p><p>That realization forced a question I&#8217;d been avoiding: if the code itself is no longer the hard part, what am I actually being paid for?</p><p>The answer, I think, is judgment. Knowing which thing to build. Understanding why one technically correct approach is wrong for this particular team, this codebase, this set of business constraints. Seeing the second and third-order consequences of a technical decision before they show up in production. That&#8217;s where experience lives, in the space between &#8220;this works&#8221; and &#8220;this is right for the situation.&#8221;</p><p>So I&#8217;m doubling down there. On understanding business context. On learning domains deeply. On being the person who can evaluate what AI produces and say &#8220;this looks right but it&#8217;s wrong, and here&#8217;s why.&#8221; That instinct doesn&#8217;t come from tutorials or certifications. It comes from watching systems succeed and fail in production for 20 years, from understanding not just how things work but why they were built that way.</p><p>But here&#8217;s the thing about that kind of judgment: it doesn&#8217;t develop in a vacuum. It develops through building things. Which is why I still code, even though my current role doesn&#8217;t require it.</p><p>I&#8217;m working as a developer relations manager focused on content now (which is both terrifying and exciting in equal measure), so I&#8217;m not writing code all day anymore. 
Most of my work is writing, and I use AI to help with it. But here&#8217;s what&#8217;s interesting: AI can help me find the right words, tighten a paragraph, suggest a better structure. What it can&#8217;t do is decide what&#8217;s worth writing about, or know which angle will resonate with a senior engineer who&#8217;s been through three rewrites of the same system, or recognize when a piece of technical content is subtly misleading in ways that only someone with domain experience would catch. I bring the judgment. AI helps with the execution.</p><p>And the exact same thing applies to coding. I still code because it&#8217;s fun, but also because I&#8217;ve realized the relationship with AI works the same way there. AI can write the code. It can&#8217;t architect the system. It can&#8217;t decide which tradeoffs to make, or know that the elegant solution it just generated will fall apart at scale, or understand why the team chose a boring technology stack on purpose. The person guiding the work, deciding what to build and what not to build, evaluating whether the output actually solves the problem, that&#8217;s where experience lives.</p><p>In both cases, you learn the same thing: how to decompose a vague problem into concrete steps, how to hold a complex system in your head and reason about its edges, how to develop an instinct for where things are likely to break. It&#8217;s not a coding skill or a writing skill. It&#8217;s a thinking skill. And if you don&#8217;t have it, you can&#8217;t meaningfully evaluate what AI gives you. You can look at the output and think &#8220;that seems fine.&#8221; But you can&#8217;t see the subtle N+1 query hiding in the data access pattern, or the race condition that only shows up under load, or the security assumption baked into a convenience method.</p>
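<p>To make that concrete, here&#8217;s a minimal sketch of what one of those looks like. The schema and data layer are hypothetical, invented for illustration, but the shape is the classic N+1: both functions return identical results, so the output alone will never flag the problem. Only reading the access pattern does.</p><pre><code>import sqlite3

# Hypothetical two-table schema, invented for this illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'Ana'), (2, 'Ben');
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 5.00), (3, 2, 12.50);
""")

def totals_n_plus_one() -> dict:
    # Reads innocently in review: fetch the users, then fetch "their" orders.
    users = conn.execute("SELECT id, name FROM users").fetchall()
    return {
        name: conn.execute(  # one extra round trip per user: the hidden N
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (user_id,),
        ).fetchone()[0]
        for user_id, name in users
    }

def totals_one_query() -> dict:
    # Same answer in a single round trip: the version experience reaches for.
    rows = conn.execute(
        "SELECT u.name, COALESCE(SUM(o.total), 0)"
        " FROM users u LEFT JOIN orders o ON o.user_id = u.id"
        " GROUP BY u.id, u.name"
    ).fetchall()
    return dict(rows)

# Identical output; the only difference is 1 + N queries versus 1.
assert totals_n_plus_one() == totals_one_query()</code></pre><p>With two users the extra cost is invisible. With a real users table it&#8217;s the difference between one round trip and thousands, and nothing about the output ever looks wrong.</p><p>Learn to code. Keep coding. Not because you&#8217;ll write every line yourself for the next 30 years, but because it trains the kind of thinking that makes everything else you do more valuable.</p><h2>I&#8217;m building things that are mine</h2><p>I used to pour everything into my employer. My professional identity, my network, my reputation, my growth, all of it lived inside one company&#8217;s walls. That felt normal. It&#8217;s what everyone around me was doing.</p><p>Then I watched a round of layoffs hit people I respected. People with deep expertise and years of institutional knowledge. And yes, their skills transferred, their experience was real, their ability to do the work hadn&#8217;t changed overnight. But something else had. The ground they were standing on vanished. The internal reputation, the relationships with leadership, the security of knowing where you fit, all of that evaporated in a single meeting. And suddenly they were competing in a market that had gotten significantly more crowded, against people with similar resumes and similar experience, in a hiring landscape where being talented wasn&#8217;t enough anymore. You had to be visible. You had to be connected. You had to be someone the market already knew, not someone it had to discover from a cold application.</p><p>That&#8217;s when I started thinking about professional gravity differently. Not as something your employer gives you, but as something you build that exists independent of any single company.</p><p>I&#8217;ve always been a writer. Blog posts, technical articles, documentation, the kind of writing that lives inside a company&#8217;s content strategy and serves someone else&#8217;s goals.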
But I&#8217;d stopped writing for myself. So I picked it back up, this time with a different purpose. Not as a hobby, not as a creative outlet, but as a deliberate investment. A newsletter about the things I think about anyway: engineering careers, leadership in the age of AI, the unspoken tensions of navigating a rapidly changing industry with decades of runway still ahead of you. Published thinking that shows people how I reason, not just what I&#8217;ve done. A network of people who know my perspective because they&#8217;ve read it, not because we happened to work on the same Jira board.</p><p>That same logic extends to money. Income diversification is the area where I&#8217;ve historically been the worst. One paycheck, one employer, one industry. I never seriously thought about what happens if that stream dries up, because it never did. I just wasn&#8217;t wired to think about money strategically, and I suspect a lot of engineers are the same. We talk about total comp and RSU vesting schedules, but we rarely talk about income resilience.</p><p>So I&#8217;m learning (slowly, awkwardly) how to diversify. Talks and workshops where two decades of experience becomes a product instead of just a resume line. A professional network that creates optionality for consulting if I ever need it. None of these produce meaningful income right now. That&#8217;s fine. I have 30 years. The goal isn&#8217;t to replace my salary tomorrow. It&#8217;s to make sure that if something changes suddenly, I don&#8217;t get caught with no options and no runway to react.</p><h2>I don&#8217;t have it all figured out, and that&#8217;s the point</h2><p>I want to be clear about the limits of what I&#8217;m sharing here, because I think the unfinished thinking is more useful than pretending I have a polished playbook.</p><p>I don&#8217;t know how to plan a technical career when the half-life of technical skills is shrinking this fast. I don&#8217;t know what engineering leadership looks like in five years, whether managers become AI-team leads or the role gets compressed because there are fewer humans to manage. I don&#8217;t know if 30 years from now, the career I&#8217;ve built will look anything like what I imagined when I started.</p><p>That used to scare me. It doesn&#8217;t anymore, and here&#8217;s why.</p><p>Every major technology shift in my career has created more opportunity than it destroyed. Not immediately, and not for everyone, but eventually and overwhelmingly. The web didn&#8217;t kill software. Mobile didn&#8217;t kill the web. Cloud didn&#8217;t kill infrastructure. Each wave created entirely new categories of work that nobody predicted from the inside.</p><p>I believe AI will do the same.
The possibilities opening up right now are extraordinary. We&#8217;re going to build things in the next decade that we can barely imagine today. Entirely new categories of work will emerge, just like they always have. That&#8217;s not a threat. That&#8217;s what makes this the most exciting time to be working in technology.</p><p>But exciting doesn&#8217;t mean safe. The opportunities will be there. They just won&#8217;t show up automatically at your door.</p><p>I don&#8217;t know what the future will bring. But I know what I&#8217;ll keep doing: coding, teaching, explaining, exploring, and building. Those are the things that got me here, and they&#8217;re the things that still make me want to sit down at my desk every morning. I hope I get to keep doing them as a profession for the next 30 years. I think I will. But in the meantime, I&#8217;m making sure that if the rules change, I&#8217;m not standing still wondering what happened.</p><p>That&#8217;s the bet. I&#8217;m genuinely excited about it. I&#8217;ll let you know how it goes.</p>]]></content:encoded></item><item><title><![CDATA[The Quiet Surrender to AI]]></title><description><![CDATA[We imagined machines would have to overpower us. We didn't imagine we'd just let go.]]></description><link>https://newsletter.thelongcommit.com/p/the-quiet-surrender-to-ai</link><guid isPermaLink="false">https://newsletter.thelongcommit.com/p/the-quiet-surrender-to-ai</guid><dc:creator><![CDATA[Juan Cruz Martinez]]></dc:creator><pubDate>Wed, 04 Mar 2026 17:46:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f58ad32f-da16-40ff-8f8d-2fbd29344873_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><em><strong>We imagined machines would have to overpower us. We didn&#8217;t imagine we&#8217;d just let go.</strong></em></p></blockquote><p>For years, whenever people talked about AI taking over the world, the image was always the same: Skynet, a Terminator-style Judgment Day. Machines rising up, overpowering humanity, forcing us into submission. The fear was physical domination, the idea that one day we would have to fight back against something stronger than us to preserve what makes us human. That story assumed resistance. It assumed conflict.
It assumed that if our autonomy were threatened, we would defend it.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!NNul!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f0ea9d9-0236-492e-91f6-18b93f9a6d03_800x414.png" width="800" height="414" class="sizing-normal" alt=""><figcaption class="image-caption">Terminator 3: Rise of the Machines</figcaption></figure></div><p>What is actually happening is far less dramatic and far more unsettling. There are no machines dragging our minds away from us. No system is coercing us into obedience. No apocalypse is required, no war, no conquest. Instead, we are steadily handing over our thinking because it is easier to let something else do it for us. The trade is simple: less effort, less friction, less discomfort. And most people are taking it.</p><p>I am not afraid of AI. I am far more concerned with what we are doing with it. Used deliberately, AI is extraordinary. It can accelerate research, surface alternatives you hadn&#8217;t considered, generate scaffolding that lets you work at a higher level of abstraction. I use it. I let it handle boilerplate, clean up grammar, automate the mechanical parts of my work. I am not interested in pretending the tool doesn&#8217;t exist.</p><p>But somewhere along the way, something shifted. We stopped using AI to extend our thinking and started using it to avoid thinking altogether. That shift is subtle, and I think it is one of the most important things happening right now.</p><p>I feel this most acutely in coding because coding is the craft I care about.</p><p>I started programming as a teenager, and what hooked me was not output or efficiency. It was the struggle. Staring at a problem until it hurt and refusing to move on until it made sense. Debugging something for hours and slowly constructing a mental model of why the system behaved the way it did.
It was failing, tracing the failure back to its cause, and earning understanding instead of skipping to the answer.</p><p>That friction was not an obstacle. It was the training itself. It built intuition. It built the ability to hold complexity in my head without collapsing under it. It built what I can only call taste, the sense of when a solution is right, not just functional.</p><p>Now I watch people generate code they cannot explain and ship systems they cannot reason about. If you cannot walk someone through the logic behind what you built without reopening the chat window, you did not build it. You assembled it. And over time, that difference compounds. The muscle you never use is the muscle you lose.</p><p>But I want to be honest about something. About a month ago I hit a concurrency bug in a system I was writing. The kind of thing that, five years ago, I would have traced methodically for hours, checking assumptions, slowly cornering the defect. Instead, ninety seconds in, I pasted the stack trace into a chat window. The answer came back almost instantly. It was correct. I fixed the bug and moved on.</p><p>And I felt something I didn&#8217;t expect: a small, quiet loss. Not because the tool failed. Because it worked. Because the hours I would have spent building a deeper model of that system simply didn&#8217;t happen. I got the fix. I missed the understanding. And I&#8217;m not sure I would have even noticed if I hadn&#8217;t been paying attention.</p><p>That is what concerns me. Not the dramatic failures. The invisible ones.</p><p>Now, I know the counterargument, and I want to take it seriously, because it is not wrong.</p><p>Every generation has this panic. Socrates argued that writing would destroy memory, that people would carry knowledge in notebooks instead of in their minds, and become shallow as a result. He was partly right, actually. We did lose something. Oral cultures had capacities for memory and narrative that most literate people cannot match. But what we gained was the ability to build on each other&#8217;s ideas across centuries, to accumulate knowledge beyond what any single mind could hold. It was so transformative that the trade-off was clearly worth it.</p><p>Calculators. Google. Wikipedia. GPS. Every time, the fear was that cognitive offloading would make us weaker. Every time, the reality was more nuanced than the panic suggested. So why should AI be different?</p><p>Maybe it isn&#8217;t. Maybe this is just the next turn of the same wheel, and the people warning about cognitive decay are playing the same role Socrates played: correct about the loss, blind to the gain.</p><p>I hold that possibility genuinely. But I think there is something different this time, and it is worth articulating precisely.</p><p>Previous tools offloaded <em>information</em>. AI offloads <em>reasoning</em>. A calculator doesn&#8217;t think about the problem for you, it executes a mechanical operation so you can focus on the higher-order question. Google doesn&#8217;t construct an argument, it surfaces sources so you can evaluate and synthesize them. These tools removed <em>mechanical</em> friction while leaving <em>cognitive</em> friction intact.</p><p>Large language models are the first tools that remove cognitive friction directly. They don&#8217;t just give you facts. They assemble the argument. They don&#8217;t just retrieve information. They do the synthesis.
The thing that previous tools left for you to do, the thinking itself, is precisely what this tool offers to handle.</p><p>That doesn&#8217;t make it evil. It makes the question of how you use it genuinely different from any previous technology. The line between &#8220;tool that extends my thinking&#8221; and &#8220;tool that replaces my thinking&#8221; has never been this blurry.</p><p>And I want to admit: I don&#8217;t know exactly where that line is.</p><p>This pattern extends far beyond programming. Open X and you are watching bots interact with bots while humans prompt machines to manufacture engagement. Open LinkedIn and everything sounds polished, structured, optimized, safe. Every paragraph feels assembled rather than wrestled with. The voice is technically there, but it feels synthetic. You can almost hear the prompt humming behind the sentences.</p><p>We have more expressive power than ever before, and everything is starting to sound the same. Not because people lack original thoughts, but because the tool they&#8217;re filtering those thoughts through has a center of gravity, and it pulls everything toward it.</p><p>That is not intelligence expanding. That is intelligence flattening. And the loss is not just aesthetic. When everyone&#8217;s output converges on the same median, the signal that used to distinguish deep understanding from shallow fluency disappears. We lose the ability to tell who has actually done the thinking. Including, sometimes, ourselves.</p><p>The hardest version of this problem is generational, and I don&#8217;t think my generation is equipped to talk about it honestly.</p><p>I built my intuition through friction because I had no choice. There was no tool to skip the struggle. The hours I spent debugging, the months I spent confused, the years of slowly building mental models, that was the only path available.
It is easy for me to say &#8220;do the hard work&#8221; when the hard work was the only option I ever had.</p><p>A fifteen-year-old learning to code today faces a fundamentally different landscape. The tool that skips the struggle is right there, it&#8217;s free, and everyone around them is using it. Telling them to artificially impose difficulty is like telling someone to hand-wash their clothes to build character. It might even be right, in some narrow sense. But it is not a serious engagement with the reality they face.</p><p>What I think we actually owe that generation is not a lecture about discipline. It is an honest framework for when to use the tool and when to refuse it. When to let AI carry the load and when to carry it yourself because the carrying is the point. I don&#8217;t have that framework fully worked out. I&#8217;m not sure anyone does yet.</p><p>But I know it matters, because the people who figure it out will develop genuine understanding. And the people who don&#8217;t will spend years producing output that looks competent while building nothing underneath it. And they may not realize what they&#8217;ve lost until they need it and it isn&#8217;t there.</p><p>Here is what I keep coming back to.</p><p>Convenience is not the enemy. It never was. The enemy is convenience <em>unexamined</em>, the slow, comfortable slide from &#8220;this tool helps me think&#8221; to &#8220;this tool thinks for me&#8221; without ever noticing the transition.</p><p>I use AI every day. I am not fighting the technology. I am fighting the gravitational pull it exerts on my own mind, the pull toward ease, toward letting the machine carry weight I should be carrying, toward skipping the part that feels slow and stupid and uncertain.</p><p>I don&#8217;t always win. That concurrency bug I mentioned? I lost. I took the easy path, told myself I&#8217;d circle back and do the hard work of understanding that system more deeply, and I haven&#8217;t.</p><p>So when I say most people are choosing convenience over thinking, I am not exempting myself. I am describing a gravity I feel every day. Some days I resist it well. Some days I don&#8217;t.</p><p>The difference, the only difference I can claim, is that I am paying attention to the trade. I am trying to notice when I&#8217;m drifting. I am trying to keep the thinking muscle under load even when the tool offers to carry everything.</p><p>Because the uncomfortable truth is that we imagined machines would have to conquer us to take our autonomy. We imagined a fight. We imagined resistance.</p><p>We didn&#8217;t imagine we would just... let go. Quietly. Willingly. Not because we were forced, but because it was easier.</p><p>And the most unsettling part is not that it&#8217;s happening.</p><p>It&#8217;s that most of us won&#8217;t notice until it&#8217;s already done.</p>
]]></content:encoded></item></channel></rss>