The Future Is All Vibes

"Vibe Coding." Usually said with disdain. AI generates the code, you ship it without reading it. A year ago I would have agreed that vibe coding was a terrible idea, especially given the results generated by the models of that time. Fast forward to early 2026: the capabilities available now have shifted my opinion, and not in the direction I expected.

Generating code with AI is now a viable option for software development; ignore it at your peril. It isn't perfect, but it's very good, and understanding how to manage the risks is the new challenge. But realize we're in a transitional phase, with capabilities improving every six months or so. We should be considering what things will look like a year from now, and what changes we'll need to make to adapt.

Human in the loop is losing the race

Right now, there are big questions around process. Should you review AI-generated code? How carefully? What tooling do you need? Teams are building review pipelines and guardrails, and that makes sense given where we are today.

But the gap between how fast AI can generate code and how fast a human can review it is widening. Multiply that by the number of developers generating code, add in a factor for cognitive fatigue of reviewing code you didn't write, and you have a significant problem on your hands.

Now consider where AI model capabilities will be six months from now, a year, or two years out. Then ask yourself: should we expect machines to keep generating code in a format designed for human readability?

Code is going opaque

In the 1970s, programmers fed punchcards to mainframes. Assembly language. Direct machine instructions. Then we spent fifty years building high-level languages, frameworks, and libraries to make programming more accessible and efficient for humans. We optimized for human readability, maintainability, and collaboration.

I think we're about to go full circle.

AI models don't need human-readable code as an intermediate step. They could generate something closer to a high-dimensional vector representation of program behavior. Think of it like text embeddings, but for entire functions or features. A 4096-dimensional fizz_buzz(), if you will. Not source code in any language you'd recognize. Just a compressed, machine-native representation that runs. Or perhaps something completely different - probabilistic, non-deterministic, JIT - or something we haven't even imagined yet.
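To make the embedding analogy concrete, here's a deliberately toy sketch: mapping a function's source text to a fixed-size vector via token hashing. This is a hashed bag-of-tokens, not a learned representation; the 4096 dimension and the `embed_source` helper are illustrative assumptions, not anything a real model produces.

```python
import hashlib
import re

DIM = 4096  # illustrative dimension, echoing the "4096-dimensional fizz_buzz"

def embed_source(source: str) -> list[float]:
    """Map source text to a DIM-dimensional vector by hashing its tokens."""
    vec = [0.0] * DIM
    for token in re.findall(r"\w+", source):
        digest = hashlib.sha256(token.encode()).digest()
        index = int.from_bytes(digest[:4], "big") % DIM
        vec[index] += 1.0  # bump the bucket this token hashes into
    return vec

fizz_buzz_src = """
def fizz_buzz(n):
    for i in range(1, n + 1):
        if i % 15 == 0: print("FizzBuzz")
        elif i % 3 == 0: print("Fizz")
        elif i % 5 == 0: print("Buzz")
        else: print(i)
"""

vector = embed_source(fizz_buzz_src)
print(len(vector))  # 4096
```

A real machine-native representation would carry program behavior, not surface tokens, but the shape of the idea is the same: a dense vector stands in for the function.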

It might sound far-fetched, but there's a lot of that going around in AI land these days. We already have neural networks that operate on learned representations no human can interpret. The weights in a trained model aren't readable, but they work. Extending that principle to generated programs is a smaller leap than it seems.

The hard questions

If code goes opaque, a lot of what we take for granted breaks. Or, do we just need to rethink our assumptions?

How do you debug something you can't read? Probably the same way we debug neural networks now: by observing behavior, testing boundaries, and building tooling that operates at the same level of abstraction as the code itself. On the other hand, will debugging still have the same meaning as it does today? Will there be faulty code, or just code that doesn't "vibe" right?
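Behavior-level verification can be sketched today: treat the program as a black box you can call but not read, and assert properties of its outputs on many random inputs. The `opaque_sort` stand-in and `check_sort_properties` helper below are hypothetical names for illustration, in the spirit of property-based testing.

```python
import random
from collections import Counter

def opaque_sort(xs):
    # Stand-in for an unreadable generated artifact we can only invoke.
    return sorted(xs)

def check_sort_properties(fn, trials=200):
    """Probe behavior only: output must be ordered and a permutation of the input."""
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        out = fn(xs)
        assert all(a <= b for a, b in zip(out, out[1:])), "output not ordered"
        assert Counter(out) == Counter(xs), "output not a permutation of input"
    return True

print(check_sort_properties(opaque_sort))  # True
```

Note that nothing here inspects the source: correctness is established entirely at the boundary, which is the only level an opaque representation exposes.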

How do you version control it? Diffing opaque representations is a real problem. Maybe you version the intent (the prompt, the spec) rather than the output. Maybe version control doesn't apply when apps are now created Just In Time (JIT) by the AI based on user needs.
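"Version the intent" can be sketched as storing each revision of the human-readable spec under a content hash, so history and diffs operate on intent even when the generated artifact is opaque. The record layout and `commit_intent` helper are made-up illustrations, not any real tool's format.

```python
import hashlib

def commit_intent(spec: str, history: list) -> str:
    """Append a spec revision keyed by its content hash; return the hash."""
    digest = hashlib.sha256(spec.encode()).hexdigest()[:12]
    history.append({"id": digest, "spec": spec})
    return digest

history = []
v1 = commit_intent("Sort the user list by signup date, newest first.", history)
v2 = commit_intent("Sort the user list by signup date, newest first; paginate by 50.", history)
print(v1 != v2, len(history))  # True 2
```

Diffing two revisions is then an ordinary text diff of the specs, regardless of what shape the outputs take.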

I'm not exactly sure what to expect here, but I do know we should anticipate having all our assumptions about software development challenged. This will allow us to better adapt when the time comes, versus being caught off guard or holding on to old paradigms that no longer apply.

What this means now

If you're considering or building processes around human review of AI-generated code, it's the right thing to do today. But I wouldn't treat it as a long-term investment. The shelf life on human-readable AI output might be shorter than we think.

The teams that will adapt fastest are the ones already thinking about what they actually care about: behavior, correctness, security, performance. Those concerns survive the transition to opaque code. "Can I read the source?" does not. Like the parable of the blind men and the elephant, we need to be prepared to understand the new "elephant" of software development without relying on the old "sight" of human-readable code.

My bet: within 12 months, we'll see the first serious tools that generate opaque, machine-native program representations that outperform human-readable code on speed, efficiency, and correctness. It won't replace traditional code overnight. But it will make the vibe coding debate feel quaint. The future is all vibes, and we should be ready to embrace it.