I’m not an AI expert, but in my circle at work and in my personal life, I’m kind of a power user. I use AI a lot, and I think it is genuinely useful. But I keep hearing a framing that sounds more convincing than it really is: “Write your prompt, go for a coffee, and let the agent work.”

Technically, that is true. You really can prompt the agent, step away, and come back later. But what people usually mean by that is much bigger: that we can now treat software engineering like free multitasking, and that teams should expect a much higher delivery rate because the machine is working in parallel.

I do not think that conclusion follows.

I have done this myself. I have kept multiple clones of the same repository so I could move several things forward at once. The problem is that parallel machine execution does not mean parallel human understanding.

My main issue with the async productivity story is that it ignores human context switching. As developers, we already know that moving from one task to another has a cost. It takes time to reload the problem, remember the tradeoffs, and evaluate whether the output is actually good. We cannot switch between tasks as cleanly as the agent can switch between prompts.

And this is not just a feeling. There is research on task switching showing that moving between tasks carries a real cognitive cost. We tend to lose time, make more mistakes, and leave part of our attention behind on the previous task. So even if the agent keeps working while I am away, my brain does not come back for free when it is time to review the result.

That is the part I think gets missed in workplace conversations. The agent can draft, search, refactor, or explore while I do something else. That is helpful. But the human still needs to review, correct, validate, and decide. If I leave the task and come back later, I still have to rebuild context. If I use that time to jump into another hard problem, I will pay another switching cost there too.

So yes, agentic AI can absolutely help us move faster. It can compress execution time and make a lot of low-level work cheaper. But I think expected delivery gains can become unrealistic when people assume that human attention scales the same way machine execution does. There is also a related trap here: visible activity can look like progress even when we are not moving the most important thing forward. I wrote about that dynamic in Why Task Snacking Feels Productive But Holds You Back.

In software engineering, the hard part is often not only generating output. It is understanding the problem, judging whether the solution fits the system, and noticing what the agent misunderstood. Those are still human bottlenecks.

That is why I think the async promise can be misleading in a work setting. The machine may be working in the background, but the human part of the loop is still expensive. We still need to review, correct, and rebuild context every time we come back.

So the tool is useful. Very useful. But I would be careful with the expectation that async agentic AI should automatically translate into a huge increase in developer delivery. The machine can be async. Our attention usually is not.

If you want a related angle on how agentic AI can affect my motivation and energy levels, I wrote about that in Is Agentic AI Like a Drug?.