My Honest Take: Why I’d Pick AI Over (Most) Former Colleagues

If you’ve been following my online activity (YouTube channel, newsletter, posts), you know I’m wholeheartedly against the AI hype. But that doesn’t mean I’m against AI. On the contrary, I’m quite invested in it; I just take it with a grain of salt.

There are three AI-related concepts I call BS on:

  • AGI
  • prompt engineering
  • vibe coding

AGI

I won’t go down this rabbit hole again. I already shared my opinion on AGI here.

Prompt “Engineering”?

Engineering is when you build a bridge spanning 20 km of open water. Engineering is when you calculate trajectories to land a rover on Mars.

Writing a prompt that does the job is not engineering: it’s just common sense. We don’t call writing a Google search query ‘search engineering.’ So why are we calling basic instruction-writing engineering?

“Vibe” Coding

These two words have no business being together. Not even with a period between them.

The “vibe” part basically reduces coding to:

I have no idea what I’m doing, and I don’t even care.

That’s not how you develop software systems. Imagine if a company announced they “vibe-coded” the software running the nuclear facility next to your town.

Or if the doctor, gesturing toward the CT scanner before sliding you in, winks and says, “Don’t worry! I vibe-coded the app that powers this thing over the weekend.”

Trust me, I’m a vibe coder!

Replace “vibe” with “AI-assisted,” and things start to make sense.

AI-assisted coding means: I know my stuff, and I use AI to do the grunt work for me. I master the language(s) and understand the architecture, the patterns, the trade-offs. And I hand AI the repetitive chores I never enjoyed doing, such as writing unit tests, refactoring the work of stupid people, and updating docs (you know what I’m talking about).

AI does this with no complaints, while I focus on the hard problems.

But I never trust AI blindly because, as I discussed in my previous article, there’s mathematical evidence that LLMs can’t be completely free from errors or hallucinations. That’s just how it is, and while things may improve, LLMs won’t ever be perfect.

Speaking of perfection, these models improve at an alarming rate. They digest all the available free (uhm, sometimes not-so-free) knowledge, so it’s no wonder they’ve evolved so fast.

For instance, OpenAI’s early Codex coding model scored 28.8% pass@1 on HumanEval back in 2021. (HumanEval is a benchmark of 164 hand-written Python problems where the model must complete a function so that it passes the accompanying unit tests; pass@1 means the first generated solution has to be the one that works.)

Just a few years later, Claude 3.5 Sonnet scored over 90% pass@1 on the same benchmark.

In fact, modern LLMs have become so good that HumanEval is too easy for them; frontier models essentially saturate it at pass@1. That’s why evaluations have moved to harder benchmarks like SWE-bench, which tests real-world software engineering tasks such as fixing actual GitHub issues.
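If you’re curious how those pass@1 numbers are actually computed, here is a minimal Python sketch of the unbiased pass@k estimator introduced in OpenAI’s Codex paper; the sample counts below are made up purely for illustration.

    # Minimal sketch of the unbiased pass@k estimator from OpenAI's Codex paper.
    # For each problem: generate n samples, count the c that pass the unit tests,
    # then estimate the chance that at least one of k random samples passes.
    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        if n - c < k:
            return 1.0  # not enough failing samples to fill k slots without a hit
        return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

    # Made-up pass counts for three problems, 200 samples each (illustration only).
    passed = [150, 20, 0]
    print(np.mean([pass_at_k(n=200, c=c, k=1) for c in passed]))  # ≈ 0.283

With k=1 the estimator collapses to c/n per problem, which is exactly the “solve it on the first try” reading of pass@1.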

I’m Not Against AI

To be honest with you (and this might not sit well with some), I’d rather team up with an AI assistant than with most* of my former colleagues.

[*] There are exceptions, of course. If you’re reading this, Peter Nagy, Zsolt Molnar, Pinczes Szilard: I’d happily work with you again anytime. Those were great times: fun, professional, and a fantastic atmosphere… until it all got wrecked by corporate greed, cuts, and bad management decisions. Business as usual.

Here’s why.

AI doesn’t get defensive when I review its code. I made a few enemies by hurting some egos when I pointed out, for example, that having five different exit conditions in a for loop may not be the cleanest solution, or that there’s no real reason for a function to span five pages. Don’t make me go down memory lane…

AI doesn’t push back on tasks. “Sorry, I have an urgent meeting with our tech writer about…whatever.” Yeah, I didn’t make this up either.

AI is available 24/7. I recall when the designer of the 3D assets for my iOS game suddenly stopped replying to my emails. The game was almost complete, and back then, timing was everything for success in the App Store. It turned out his internet provider was to blame. Eventually, I shipped the game on time, but my stress baseline has never been the same since.

AI doesn’t care about office politics. “Dan was promoted because he’s been brownnosing the boss.” None of the AI assistants I’ve tried has ever said anything even close to that.

Sorry, guys.
