Everything I Learned About Productivity Disagrees With How AI Wants Me to Work
I built my productivity system over fifteen years. AI stressed it in six months.
I am, at my core, a systems person. Calendar blocks, structured todo lists, weekly reviews. I know what I’m working on each morning before I sit down because I decided the night before. My best work has always come from long, uninterrupted stretches where I could hold an entire problem in my head at once. Not because I read that in a book somewhere, although Cal Newport would agree. Because I felt the difference. Two hours of deep focus on a single system produced more real progress than a full day of bouncing between tasks. I built my entire workflow around protecting that state, and for years it served me well.
Then AI tools showed up and asked me to work in a way that contradicts everything I’d built.
The new workflow doesn’t look anything like the old one
The AI-native way of working is parallel by design. You kick off an agent to scaffold a service. While it runs, you start prompting a second one to draft documentation. You check back on the first, realize it made assumptions you don’t agree with, course-correct, then pivot to reviewing what the second one produced. You’re managing multiple streams of work simultaneously, triaging output, deciding what to keep and what to throw away.
It looks nothing like a flow state.
And for about six months, I convinced myself this was better. The sheer volume of stuff happening on my screen felt like progress. Tokens streaming, files appearing, tests being generated. I was doing three things at once and each one was moving forward. How could that not be more productive?
Except when I looked honestly at what I was shipping, the picture was more complicated than “AI made me productive.” I was shipping more, sure. AI was part of that. But so was the fact that I was working significantly more hours than before. The agents made it feel effortless to keep going, so I did. Evenings, weekends, one more pass on something an agent had drafted. I couldn’t cleanly separate what AI was contributing from what I was contributing by just working harder.
The part I’m still struggling with
Here’s what makes this hard for me specifically. The focused workflow I spent years building is slow to start but efficient once you’re in it. There’s a warm-up period where you’re loading context, understanding the problem, building a mental model. Once you’re there, decisions come fast and the work flows. The cost is upfront, and the payoff compounds the longer you stay in it. That model fits how my brain works. It’s inseparable from the systems I built around my time.
The AI workflow inverts this completely. Starting is almost instant. You describe what you want, an agent produces something, and you’re looking at output within minutes. The cost comes later, when you review what was generated and realize you need to reshape it, or when you switch to a different agent and lose the thread of what the first one was doing. Instead of one long ramp-up followed by sustained output, you get a series of quick starts followed by fragmented attention.
I keep getting pulled toward the second model because the tools make it so easy. And every time I give in, I feel it eroding the systematic approach that actually works for me. I didn't use to have a context-switching problem. I manufactured one for myself by trying to run things in parallel, because the tools made it possible and it felt like the smart thing to do.
Not everyone loses from this tradeoff. If you’re an engineering leader whose day is already a patchwork of 30-minute windows between syncs and reviews, you didn’t have long focus blocks to protect in the first place. Handing a deferred task to an agent and getting it back 80-90% done in one of those gaps is a real upgrade. AI didn’t add context switching for those people. It filled the dead space that context switching had already created. I see this in my own role on days that are heavy with meetings.
The parallel model isn’t wrong. It just doesn’t fit the systems I’ve built, or the way I work best when I actually have the capacity and time to think.
What AI actually changed (it’s not speed)
Last year I had a monitoring dashboard on my list for weeks. The work itself wasn’t complicated, but it involved stitching together three different APIs, writing a bunch of boilerplate, and wiring up error handling that I knew would be tedious. I kept pushing it to next week because the thought of grinding through those first two hours before anything interesting happened was enough to make me pick a different task every morning.
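To give a sense of the tedium: the bulk of that work was a retry wrapper plus per-source error handling, repeated for every API. A minimal sketch of that shape (the function and source names here are hypothetical, not the actual dashboard code):

```python
import time
from typing import Any, Callable


def fetch_with_retries(fetch: Callable[[], Any], retries: int = 3,
                       backoff: float = 0.0) -> Any:
    """Call a fetch function, retrying on failure with exponential backoff."""
    last_error: Exception | None = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as err:  # each real API raises its own error types
            last_error = err
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"all {retries} attempts failed") from last_error


def build_dashboard_snapshot(sources: dict[str, Callable[[], dict]]) -> dict:
    """Pull data from each source, recording failures instead of crashing."""
    snapshot: dict[str, dict] = {}
    for name, fetch in sources.items():
        try:
            snapshot[name] = {"ok": True, "data": fetch_with_retries(fetch)}
        except RuntimeError as err:
            snapshot[name] = {"ok": False, "error": str(err)}
    return snapshot
```

None of this is hard; it is exactly the kind of mechanical wiring that made me keep deferring the task.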
When I finally sat down and used an agent to generate the scaffolding, the whole thing took about the same amount of time it would have taken before. But I didn’t dread it. The activation energy dropped. Getting a rough first pass and then shaping it felt completely different from staring at an empty file trying to summon the motivation to type the first import statement.
That pattern keeps repeating. The gain isn’t speed, it’s ease. And ease is genuinely valuable even if it never shows up in a velocity chart.
For prototypes and throwaway experiments, the speed gains are real too. I can get a working proof of concept in an afternoon that would have taken two or three days. But for anything where I care about quality, the gains shrink fast. AI can't one-shot what I need. It gets me a first draft, writes some code, and then I'm deep in the work anyway, reworking structure, questioning assumptions, rewriting. The higher the stakes, the more AI becomes a different starting point rather than a shortcut.
The other extreme: removing yourself entirely
There's another response to the productivity gap that I think is worth naming. Some people don't adjust how they use AI; they try to remove themselves from the loop altogether. If the bottleneck is human attention, the logic goes, just take the human out. Delegate everything. Let the agents run. Review at the end, if at all.
But when you remove yourself from the work, you also remove the thing that makes the work yours. Your taste, your context, your understanding of why this particular decision matters in ways a model can’t see. There’s a real difference between “I used AI to explore five approaches I wouldn’t have had time to consider” and “I let AI pick the approach and shipped it.”
I wrote about this in The Quiet Surrender to AI, and I keep seeing it play out. The slide from “this tool helps me think” to “this tool thinks for me” is gradual enough that you don’t always notice it happening.
The productivity question and the autonomy question turn out to be the same question.
Running five agents in parallel and removing yourself from the output look like opposite problems, but they share the same root: trying to match AI’s throughput with a human brain. Both of them quietly trade away the thing that made you valuable in the first place.
Where I’ve landed (for now)
What I didn’t expect is how quickly AI workflows can erode a system you trust. When agents produce output constantly, the pull is to react to whatever they just generated rather than work on what you decided matters most. You check what an agent finished, get drawn into reviewing it, start a follow-up prompt, and suddenly your morning is gone and you haven’t touched the thing at the top of your list. The agents don’t know your priorities. They just keep generating, and if you’re not careful, their output starts setting your agenda.
That’s a strange place to end up for someone who used to decide his entire week in advance. I went from controlling my time to reacting to whatever an AI happened to finish first.
I’m trying to take that back. When I use AI now, I try to focus it on one problem at a time, chosen deliberately, not reactively. I use it to get past the blank page, to handle the parts of a task I’d otherwise avoid, and to explore ideas faster when the stakes are low. For deeper work, I treat it as a thinking partner rather than a production line.
But I’d be lying if I said I’ve figured this out. The temptation to match AI’s speed is constant. It generates in seconds, and before you know it you’re trying to keep up, cycling through outputs and decisions at a pace your brain was never built for. You still need time to hold a problem, evaluate an approach, and decide whether the output is actually good or just fast.
I think the honest productivity gain is somewhere around 10-20% on a good week. It feels like 3x almost every day. I don’t fully trust either number yet. What I do know is that the person I was before these tools, the one who lived by his calendar and trusted his systems, had something right that I don’t want to lose in the rush to adopt a new way of working.


