Why AI speed won't save your system

Written by Tasos Piotopoulos
Lead Engineer | MBA Candidate | M.Sc. Software Engineering & Ubicomp

Code reviews are quick now. AI fills in boilerplate, proposes fixes, even writes tests. The pace feels relentless, and conversations about why a change is needed or what risks it carries are harder to keep up with. Decisions that once drew debate slide through under pressure to match the new speed. Delivery charts look good, but technical debt grows quietly in the background and real judgment gets squeezed out of the process.

In my last post, I wrote about psychological safety, the conditions that allow engineers to raise concerns and make better choices together. With AI entering daily work, those conditions become even more important. Judgment is still the core of engineering, and AI raises the stakes.

Note that when I write about AI here, I’m mostly referring to tools based on large language models (LLMs) that suggest or generate code.

Coding isn’t the hard part

Programming looks difficult from the outside. Syntax and tooling create that impression. But the real work has always been decomposing messy problems and clarifying intent while handling tradeoffs that rarely have obvious answers. Some celebrate vibe coding, pasting AI snippets together without much thought. It feels fast, but it isn’t engineering. It’s skipping the very work that makes software reliable.

Programming languages serve multiple purposes. They help structure thinking, communicate intent to other humans, and instruct the machine. Unlike natural language, they demand precision, because small ambiguities become big failures. Even legal text, which tries to remove ambiguity, shows how hard this is. Programming languages evolved to close those gaps completely. You cannot get a clean design without wrestling with ambiguity and aligning priorities. That’s why so much engineering effort happens in conversations, diagrams, documents and whiteboards long before a compiler ever runs.

Misunderstanding what makes programming hard has organizational consequences. If leaders believe typing is the bottleneck, they will measure speed and reward output rather than encouraging debate and problem framing. That misaligned incentive distorts behavior long before AI enters the picture.

What AI adds, what it misses

As a pattern matcher on steroids, AI accelerates routine work. It drafts functions, scaffolds code and produces boilerplate in seconds, while suggesting next steps after every action. The stream of suggestions raises the tempo of change even when the underlying risk stays the same. This feels liberating next to the grind of implementation.

Yet AI has no taste. It has been trained on the good, the bad, and the ugly of public code. Without a consistent definition of quality in our profession, it cannot learn to prefer clarity over cleverness or resilience over shortcuts. It cannot distinguish between a design that will evolve smoothly and one that will crumble with the next change. It reflects the brilliance and the flaws of the code it was trained on, so every suggestion is plausible, but not every suggestion is wise.

Quality in engineering still comes down to how safely and feasibly a system can change. That’s why software must stay soft, easy to modify. Humans manage this with modularity and abstraction, supported by practices like incremental progress and version control. We use version control to reason about differences and continuous integration to surface surprises early. AI undermines these guardrails when it regenerates code wholesale, erasing continuity and compounding risk. We have seen similar promises before in fourth-generation languages, low-code and model-driven development. Each faltered because it ignored the need for incremental change and round-tripping. AI risks repeating that history at greater speed. Speed becomes a liability when judgment is absent.

When teams are pressured to show constant and quick output, they are more likely to accept AI suggestions uncritically. What looks like technical debt is often organizational debt, in other words, the result of communication shortcuts, unclear ownership and misplaced incentives.


The challenges that remain

AI doesn’t erase the fundamental problems of software engineering. If anything, it magnifies them.

How do you specify precisely what you want? How do you confirm that you got it? And how do you make progress without reckless leaps?

These questions have always defined the work. AI shifts them from implicit concerns to daily realities. Teams can no longer rely on careful typing as a proxy for careful thinking. They need clarity of intent combined with reliable feedback mechanisms that anchor change in evidence.

These are not purely technical challenges but organizational ones: how teams communicate requirements, how openly they question results, and how leaders create space for incremental progress instead of punishing it as delay.

Executable specifications as a compass

Tests and continuous integration pipelines have always mattered. With AI they become essential. They no longer just verify correctness after the fact; they guide AI toward producing code that aligns with intent in the first place.

Without these safeguards, AI generates output that looks reasonable but hides subtle flaws. With them, engineers can harness AI to extend judgment instead of eroding it.

Executable specifications and strong pipelines act as the new compass, weaving requirements directly into code and tests. They give AI a direction and give teams confidence that progress is real.
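To make this concrete, here is a minimal sketch of what an executable specification can look like. It uses Python and pytest; the domain, the function name order_total and the discount rule are illustrative assumptions, not taken from any real codebase. The point is that the requirement is written down in a form a pipeline can run against every change, including AI-generated ones.

```python
# A minimal, hypothetical sketch of an executable specification using pytest.
# The requirement "orders of 100 or more units get a 10% discount" lives in
# the tests, so any implementation (human- or AI-written) is judged against
# stated intent rather than against how plausible its code looks.

import pytest


def order_total(unit_price: float, quantity: int) -> float:
    """Illustrative function under test; the name and rules are made up."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    subtotal = unit_price * quantity
    discount = 0.10 if quantity >= 100 else 0.0
    return round(subtotal * (1 - discount), 2)


def test_small_orders_pay_full_price():
    assert order_total(unit_price=2.50, quantity=10) == 25.00


def test_bulk_orders_get_a_ten_percent_discount():
    assert order_total(unit_price=2.50, quantity=100) == 225.00


def test_rejects_non_positive_quantities():
    # Edge cases written down as tests keep regenerated code honest.
    with pytest.raises(ValueError):
        order_total(unit_price=2.50, quantity=0)
```

Run in continuous integration, a suite like this checks every suggestion, human or machine, against the same stated intent, which is exactly the compass role described above.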

The existence of these safeguards, of course, depends on team culture. If tests are treated as optional glue work, AI will simply reproduce that neglect. If leaders value and reward teams for strengthening specifications, the culture shifts toward resilience.

Responsibilities for leaders and engineers

Leaders shape the environment in which AI is used. They set expectations that engineers should question, clarify and challenge what a model produces. They show that asking “why” is not slowing things down but protecting outcomes.

Engineers must engage critically. Defaulting to “the bot said so” is negligence, not collaboration. Reviews and documentation of reasoning remain central to the craft, alongside raising doubts when something feels off.

Organizations need to treat AI as a contributor whose work requires review. Models drift. Suggestions change without warning. Audits and incident reviews should include AI output in the same way they include human work.

This is where leadership behavior cascades. If managers themselves rubber-stamp AI output in demos, teams learn that scrutiny is unsafe. If leaders model questioning and reward it, they set the cultural expectation that judgment comes first. Leadership draws equally from organizational behavior and engineering practice.

Early signs of trouble

Watch for reviews that grow quiet. Suggestions that slide through without discussion. Ownership that fades as incidents are blamed on the tool. These are signals that speed is displacing judgment. Left unchecked, they lead to brittle systems and teams that disengage.

Quiet reviews are not a mere technical risk, but a cultural signal of lost psychological safety.


AI as amplifier, not engineer

AI excels at repetition. It reduces the grind of writing boilerplate and can surface patterns quickly. But it cannot weigh context, anticipate consequences or own accountability.

The value comes when AI frees people to make better decisions. Teams that question its output, lean on executable specifications, and preserve psychological safety will thrive. Those that chase speed without judgment will find themselves buried under fragile systems.

AI doesn’t replace engineering. It amplifies whatever environment it enters. If that environment values clarity and open feedback, AI will extend it. If the environment tolerates passivity, it will multiply the damage.

Closing

Engineering has never been about typing. It has always been about judgment. AI brings speed, but speed without judgment has never been enough. To use it well, leaders must invest in trust and transparency, teams need to keep their critical faculties sharp, and organizations should reinforce practices that give AI a clear compass.

AI doesn’t erase the need for psychological safety. It makes it more important than ever.

In the next article, I’ll explore how leaders can balance intuition with evidence from Organizational Behavior research when guiding teams through rapid change. Also, if there’s a specific OB-related topic you’d like me to cover, I’d be glad to hear from you.

Until next time,
Tasos