On AI, Human Judgment, and Building Things That Work

The Uncomfortable Truth About AI

Let me be direct: AI companies are lying to you. Not about what their technology can do—but about what it should be trusted to do.

The marketing says “ask this thing to solve your problem and it will.” This is dangerous nonsense. Neural networks in 2026 are genuinely powerful within narrow, well-defined domains. But expand that domain—ask them to be general-purpose problem solvers—and they struggle. They hallucinate. They give you confident, articulate, completely wrong answers.

The AI might return something brilliant. Or smart. Or correct. Or incorrect. Or entirely fabricated. Or close but missing a critical detail that gets someone killed because they trusted the dosage it recommended. The output is non-deterministic, and pretending otherwise is malpractice.

Picture a student turning in AI-written work: the AI sounds certain, the student believes it, the professor doesn’t. I’ve seen the “dead internet theory” playing out in real time—AI slop flooding every corner of the web, making it harder to find anything genuine. The enshittification is real, and AI is accelerating it.

These tools, used the way they’re being marketed, are actively harmful. And yet here I am, building my entire business around them.

The Inversion: What Changed in January 2026

Generating code used to be harder and much more time-consuming. Writing it well is still a genuine skill that takes years to develop. But as of Claude Code 2.0 with Opus 4.5, the act of writing code—translating logic into syntax—is largely a commodity.

I can get more done with Claude Code than I ever could working with a small dev team. Not because those developers weren’t talented—they were. But the coordination overhead, the communication gaps, the iteration cycles—all that friction is gone. The cycles are tighter.

This is what’s actually happening right now, in January 2026. Software development is the first industry seeing major transformation, and it happened faster than anyone predicted.

But here’s what the disruption-fetishists miss: devs are not being laid off en masse; their jobs are changing. More code is being built. The bottleneck shifted. It used to be “can you write the code?” Now it’s “do you know what to build, how to build it, and why?”

Implementation is a real skill. Translating requirements into working code, managing complexity, debugging gnarly problems, navigating tradeoffs—that’s genuine expertise.

But implementation alone never shipped a product that mattered. Someone had to understand the business domain deeply enough to know what the actual problem was. Someone had to design the architecture. Someone had to know what “good” looks like before it exists. Someone had to watch the real world collide with the elegant solution and know how to adapt.

I’ve Been Waiting 40 Years for This

I’ve been a computer nerd since the mid-eighties, when I got my first Commodore 64. WarGames is still my favorite movie. I’ve been nearly every geek archetype over the decades—the kid trading floppies in ninth grade, the gamer, the tinkerer, the system builder, the enterprise engineer. My 40 years of experience started there. Don’t roll your eyes: this was coding on an 8-bit system with no internet to look things up, just the machine sitting there with a blinking cursor and a way-too-thin instruction manual. This is why they don’t make tech geeks like us GenX tech geeks anymore. My junior high buddy and I made a team: I loved playing the games; he loved coding them. We even built a light pen from Radio Shack parts that drew on the TV (his dad helped us with the soldering iron). We had a blast together, doing serious engineering that was actually harder than much of what we do now with all the resources available today!

I have a computer engineering degree from NC State. I learned computer architecture down to the bit. At IBM, I wrote the appliance reprovisioning system for WebSphere—I hold a patent on it. I wrote mobile code for WebSphere Mobile—another patent. At ViaVoice, I was the build engineer pulling 25–30 developers’ work into one working product. I was an integration specialist making embedded voice recognition work on hardware for JCI, Honda, and others. Three IBM patents with my name on them.

I know coding. But my real value was seeing the whole system—understanding how the pieces needed to fit together, translating messy business problems into technical architecture, knowing when the “correct” solution wouldn’t survive contact with the real world.

I worked side by side with developers. Not handing specs over a wall—sprinting with them. Reviewing their work. Telling them when they busted something. Pushing back when implementation drifted from intent. Catching the gaps between what I envisioned and what was actually getting built.

It worked. But there was always friction. Translation loss. The tax of explaining what was in my head clearly enough for someone else to build it. Sometimes they saw things I missed. Sometimes I saw things they missed. That’s collaboration. It’s valuable. It’s also slow.

Now the friction is gone. What used to take months takes weeks. What used to take weeks takes days.

Custom software that small businesses always wrote off as “nice to have, not realistic”? The economics changed.

The Process Is The Same (Except I See More Code Now)

My wife asked me a good question: what’s actually different about working with AI versus working with human developers?

The process is identical. Spec up front. Kickoff to align on approach. Implementation. Check-ins. Code review. Testing. Retro. Same discipline whether I’m working with a senior engineer or Claude Code.

What’s different is where I’m looking.

With a team, I trusted the process. I wrote specs, reviewed demos, tested functionality. But I didn’t read most of the code—that’s what code review was for. Other developers caught what I’d miss. That’s the point of a team.

Now I read every line. I watch it get written. The code quality is downright impressive, but I also catch the AI patterns, the hallucinations, the confident wrongness. I’m more hands-on with the actual code than I ever was as a Product Manager.

The fear is that AI means less human involvement in the code. For me, it’s the opposite.

But let’s be honest about the tradeoff: I lost the other sets of eyes. A team catches things a solo operator won’t. Fresh perspectives. Domain expertise I don’t have. The junior dev who asks the “dumb” question that exposes a real problem.

That’s real. I don’t pretend otherwise. I lean on the client’s domain expertise. When a project needs it, I bring in contractors, consultants, partners for review. The option is always there. But for the work I do—practical business tools, integrations, workflow automation—I can move faster solo and bring in help when complexity demands it.

Same rigor. Different configuration.

Why Experience Matters Even More Now

Here’s the paradox nobody’s talking about: AI tools make experience more valuable, not less.

A junior developer with AI tools lets AI design everything. Takes API documentation at face value. Builds the “correct” solution that doesn’t actually work. Gets stuck when reality doesn’t match tutorials.

A senior engineer with AI tools owns the architecture. Questions API limitations. Pivots when reality doesn’t match theory. Ships pragmatic solutions that generate revenue.

I’ve run development shops. I’ve managed engineering teams at IBM. I’ve built the middleware that enterprises depend on. I’ve done nearly every job in tech: mostly technical roles, but also project management and helping small businesses leverage technology, learning their domains well enough to lean into their workflows. I’ve always been great at bringing enterprise capabilities to SMBs.

When I call AI dev tools a “cursed berserker tool,” I mean it. They’re powerful and dangerous, and they will confidently produce complete nonsense if you don’t know how to spot it. Someone who learned to code last year prompting an AI is not the same as someone with forty years of pattern recognition wielding that same tool.

The Uncomfortable Honesty

I’m not sure what we’re doing is a good thing.

The data centers powering these models consume staggering amounts of electricity and water. The environmental math is hard to justify for what is largely just viral garbage, engagement bait, and mediocre cover letters. And what happens to a society that outsources thinking?

I don’t have answers. I have concerns. “Don’t just do it because you can” used to mean something in tech. But our society, capitalism especially, rewards charging ahead anyway. The hockey stick is real. The acceleration is real. And I have genuine concerns about where this goes.

But there are too many big powers involved (countries like the USA and China), too many multi-billion-dollar infrastructure investments. The technology is not going anywhere. You can bury your head in the sand, or you can brace yourself and navigate it well.

I choose the latter. Not because I’m naive about the risks—I see them. But because the alternative is irrelevance while others shape the future without me.

For The Artists Who Hate AI

I understand you. I do.

There’s a certain subset of people who care about art—fiction, movies, music—who HATE AI. Because it doesn’t have “feeling.” Because it’s trained on stolen work. Because it threatens livelihoods built on craft.

I’m not trying to change your minds. I’m not making art here. I’m building business tools. Workflow automation. Hardware integration. The boring, practical infrastructure that connects systems that don’t talk to each other. The stuff where “feeling” isn’t the point—functionality is.

If you’re an artist protecting your craft from AI slop, I respect that. We’re fighting different battles on different terrain.

The Summary

I use AI tools. With full accountability for the output.

I don’t trust AI. I review every line. I know when it’s lying or wrong. I catch what others miss because I have forty years of knowing what “right” looks like.

I own my work. The AI is my tool, not my brain. I take responsibility for every decision, every bug, every success.

I’m honest about the risks. AI marketing is dangerous. Most people use these tools wrong. Kids are getting expelled. People are making bad medical decisions. The enshittification is real.

I deliver results anyway. Because the tools, wielded by someone who knows what they’re doing, enable things that weren’t possible before. Lower prices and faster delivery. Working software that solves real problems.

This is where we are in January 2026. The transformation happened faster than anyone predicted. If you’re scared of AI, I get it. If you’re angry about it, I get that too. But the train has left the station.

I’m the guy who knows how to drive it without crashing.


40 years building bridges between things that don’t talk. Hardware, software, the stubborn stuff in between.