The Illusion of Control

Wayne Grigsby
Software Engineer

I recently cancelled my Cursor subscription.

Not because it didn't work. It worked beautifully. Every keystroke anticipated. Every function auto-completed. Every refactor suggested before I'd even thought to ask. That was precisely the problem.

It always felt like someone was looking over my shoulder. And then, without warning, taking over my keyboard. Not pair programming. Not collaboration between equals. Just an LLM system filling in the gaps for me, watching everything I typed, every file I opened, every mistake I made before I caught it myself. It felt... disjointed.

I switched to Zed and opted out of the AI bits. Clean aesthetic, blazing fast, and, most importantly, quiet. Now I work in the terminal with Claude Code and specialized agents I've built. And here's the paradox that keeps me up some nights: I feel like I have less control, but I'm delivering better work.

Welcome to the illusion of control in the age of AI-assisted development.

The Progression Nobody Talks About

It started the way it starts for everyone. Copy and paste to ChatGPT. Ask a question, get an answer, paste it back into my editor. Friction everywhere, but the results were undeniable. Last year, I wrote about how ChatGPT had made me 10x more productive back in 2022. That wasn't hyperbole.

Then came the IDE integrations. Cursor. GitHub Copilot. VS Code extensions that promised to make the friction disappear. And they did. The AI was right there, embedded in my workflow. I felt like I had more control because I could see everything happening. I could accept or reject suggestions in real-time. I was still the one driving.

Or was I?

So when I moved to terminal-based Claude Code with specialized agents, it felt like an entirely different world. I built an engineer agent that audits code quality and a writer agent that handles documentation and communication. Infrastructure that persists across every session, every project. I had my workspace and the AI agents had theirs. Rather than both of us squeezing into the same chair at one desk, we each had separate desks, with our own screens and keyboards.
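
For the curious: in Claude Code, agents like these are just Markdown files with YAML frontmatter dropped into ~/.claude/agents/, where the body becomes the agent's system prompt. Here's a simplified sketch of what an engineer agent's definition can look like (the details are illustrative, not my exact setup):

```markdown
---
name: engineer
description: Audits code for quality, structure, and adherence to project standards.
tools: Read, Grep, Glob, Bash
---

You are a senior engineer auditing code against established standards.

- Flag functions that have grown too long and suggest extractions.
- Check error handling: no silently swallowed exceptions.
- Verify naming conventions and module boundaries are respected.
- Report findings as a prioritized list; do not rewrite code unless asked.
```

Because it lives at the user level rather than inside any one repository, it follows me across sessions and projects. That's what makes it feel like infrastructure rather than a plugin.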

And that's when the question started gnawing at me: did I just give up control?

The Paradox of Perceived Control

Here's what I realized. With Cursor, I felt in control because I was constantly making micro-decisions. Accept this suggestion. Reject that one. Tweak this line. Override that function. Every moment, I was choosing.

But those choices were reactive. The AI suggested, I responded. The AI filled gaps, I validated. I was busy, engaged, hands-on. It felt like control.

The terminal approach is different. I architect ideas and concepts. The agents execute within the guardrails I've built for them. I don't see every line as it's written. I don't approve every change in real-time.

But here's the paradox: I don't feel like I have to nitpick as much as I used to. I've programmed the agents to follow my best practices. I've distilled my style, structure, and experience into the systems themselves.
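
Concretely, much of that distillation is just plain text the agents read before they act. A CLAUDE.md memory file at the root of a project is one place those guardrails live; here's an illustrative sketch of the kind of standards I mean:

```markdown
# Engineering standards (read before acting)

- Prefer small, pure functions; keep side effects at the edges.
- No new dependency without a one-line justification.
- Every behavior change ships with a test; run the suite before declaring done.
- Match the surrounding file's style; consistency beats personal preference.
- When intent is ambiguous, stop and ask instead of guessing.
```

None of that is control in the keystroke sense. But every line is a decision I made once, deliberately, instead of a hundred times reactively.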

Is that more control or less?

The Universal Phenomenon

I know people who just hit accept. Auto-accept everything. They don't even read what Copilot or Cursor suggests anymore. They trust it. I joke that they're turning into "Vibe Coders."

But I think that's just the nature of technology and tooling. As the tools get better, you lean back in your chair more.

That image sticks with me. It's a physical manifestation of ceding control. You're not hunched over the keyboard, fingers flying, making every decision. You're relaxed. Observing. Trusting.

When I was grinding through debugging sessions pre-ChatGPT, I was anything but relaxed. I was in control because I had to be. Every line of code was mine because there was no one else to write it. I had to read all the documentation for the libraries and frameworks I used. I had to understand the edge cases and potential issues.

Now? I'm the architect of ideas. I know how things should look and be structured because I have that full-stack engineering background. I've implemented that knowledge into the agents that audit the work.

But there are days where I wonder: am I ceding too much control to the AI systems I've built?

The Fear We're Not Discussing

Let me be honest. I'm not worried about delivering bad work anymore. Everything I do gets vetted and validated.

The fear is more existential. Did we ever have control?

I think as a society, as a people, we're constantly grappling with the illusion of control. For developers, it takes the form of the idea that in some golden era of craftsmanship, we hand-wrote every line of assembly code and knew exactly what the machine was doing. But that's mythology.

We've always abstracted. We've always built on layers we don't fully understand. How many developers truly grasp everything happening between their JavaScript and the silicon executing it? How many have read the source code for every library they import?

We traded fine-grained control for leverage a long time ago. High-level languages. Frameworks. Package managers. Stack Overflow solutions we copy without fully comprehending.

The difference now is the pace and the intelligence of the abstraction. The tools aren't just executing our instructions—they're anticipating them, suggesting them, sometimes making them for us.

And that feels different. It feels like we're not just standing on the shoulders of giants, but letting the giants carry us.

What Control Actually Means

Here's what I'm learning. Control isn't about touching every line of code. Control is about understanding the system, setting the direction, and ensuring the outputs align with the vision.

IDE-based AI gave me tactical control. Micro-decisions, constant validation, hands-on everything. It felt empowering but became exhausting.

Terminal-based agents (thus far) give me strategic control. I define the architecture. I establish the standards. I build the agents that enforce them. Then I let the system work.

Time not typing equals time thinking. Maybe that's where the real work always was.

The Evolution of Roles

When I wrote about The All Powerful Architect in early 2024, I envisioned a future where mid-to-senior engineers would evolve from granular coding to visionary architecture.

What I didn't realize was how soon I was going to be living it.

The architect doesn't lay every brick. The architect designs the building, ensures structural integrity, and guides the builders. Different kind of control—not over every detail, but over the outcome.

My time is better spent ideating, envisioning, roadmapping. The system handles the execution. I handle the strategy.

The Uncomfortable Truth

The IDE approach felt like more control but was actually less effective. I was busy, engaged, making constant decisions—but many of those decisions were responding to suggestions I wouldn't have needed if the system understood me better.

The terminal approach feels like less control but is delivering better results. The output is tighter, more aligned with my standards, more consistent with my vision.

That's uncomfortable because it challenges the narrative that hands-on equals better.

Sometimes stepping back and letting a well-designed system work is the better choice. But that requires trust. And trust is hard when the tools are so new, evolving so fast, and the implications so unclear.

Wrestling with the Question

I don't have a neat conclusion here. I'm wrestling with this in real-time, just like you probably are.

Some days I feel empowered by the systems I've built. I move faster, deliver better work, have time to think strategically instead of getting lost in syntax.

Other days I feel a creeping unease. Am I still a developer if the agents write most of the code? Am I an architect or just a prompt engineer? What happens when the tools I've built to amplify my capabilities start to define them?

The illusion of control cuts both ways. Feeling in control doesn't mean you are. And feeling less in control doesn't mean you aren't.

The question isn't whether we're ceding control to AI. The question is whether we ever had the kind of control we thought we did, and whether the control we're building now—strategic, architectural, systemic—is actually more valuable.

Maybe we're not pretending to write code. Maybe we're learning what it really means to create software in an age where the tools "think" alongside us.

Or maybe we're just leaning back in our chairs and hoping for the best.

I don't have answers. But I'm still writing. Still building. Still wrestling with what it means to be a developer when the development is increasingly collaborative with systems that feel less like tools and more like colleagues.

And maybe that's the real work now. Not writing the code. But figuring out what role we should play when the machines can write it for us.
