I've been trying to write about AI for a long time, but things are moving so fast that a text I've been working on for just a few days already feels outdated. The speed at which the AI agent space is moving is absurd, and I've already seen a lot of people simply get tired of the topic. Even people who are genuinely interested in it end up exhausted by it. The constant flood of new developments is relentless and, in the end, a bit overwhelming.
I remember when we used to define project timelines before the boom of "agentic LLMs." One of the critical points was putting together a realistic plan for the client, so they would feel comfortable with it and, at the same time, so that we as a team could feel confident that we were able to deliver a quality product and meet all the agreed deadlines.
Research, design, and iteration at those stages rarely created problems, but the risk increased once software development began. At that stage, many things could go wrong: some were entirely our responsibility, such as underestimating the number of hours a given feature would require; others depended on the client, for example server failures, infrastructure gaps, or missing permissions that prevented the software from being deployed properly and on time.
The good news that "Agentic Software Engineering" brings is that those critical situations are now much more likely to have solutions. Even so, everything still depends on making the right decisions from the beginning, especially around architecture, design, and understanding the user, your clients, and the market. Iteration may seem cheap now, but once you've spent months building a complex platform, it doesn't matter whether you have LLMs on your side: you're still going to spend unbudgeted time fixing the problem.
The "bottleneck" moved up to a higher level, but the risk of making mistakes is still there. Personally, I think the famous IBM memo remains fully relevant: "A computer can never be held accountable; therefore a computer must never make a management decision."
I'm not saying LLM agents cannot replace jobs. They can, and they will continue to, especially in repetitive tasks with low business impact that do not require much understanding of how the external context changes. But wherever you still need someone to "cut the cake," as we say in Chile, and make decisions that someone will later have to answer for, an AI agent that fundamentally gives you a probabilistic output is not going to replace those roles… yet.