The Utopian and Dystopian Future of Vibe Coding

Let's face it: being an expert in a particular programming language is coming to an end. In the past two months I have used my usual languages, C, SQL and Pascal, ones that I guess I am kind of “expert” in. But I have also written and understood code in C++, PHP, Python, Go and (if you really want to be pedantic) RISC-V assembly, none of which I use or have bothered to learn beyond the basics needed to get the thing I was doing working. AI told me how to set up the IDEs or environments, gave me the frameworks, helped with the complexities and helped with the testing. This is the basis of vibe coding. Yet for all this I am still translating English into an intermediary language, which we feed to a compiler or interpreter, which then spits out machine code.

So let us run this scenario out a little. At any point from the application program (say a graphical user interface, or GUI) on the top layer, all the way down to the registers, instruction set architectures and electronics it all runs on at the bottom, my AI can happily help me write the code or design and configure the hardware.

“Can you give me assembly language to initialise the MMU in an ARM SoC (say a Raspberry Pi 4)?” “Write me a driver to implement UDP on this network adaptor.” “Write me a service that will support multiple connections to this TCP port.” The list goes on. The likelihood (in 2025) is that you will get an 80% or better working solution in whatever language you wanted, C, C++, Rust, Ada or even assembly language, and with a little prompting and testing it will probably work just fine.

As a little aside, humans use layers and APIs for lots of very good reasons, the most important of which, I would argue, is that most of us cannot (or do not want to) grok the whole stack. Yes, there are a lot of smart people out there who, if you asked them to design a circuit board with an ARM SoC and write a server to return a simple HTML page saying “hello world”, could open the SoC datasheet, use CAD to create the board, then use a text editor and GCC to do the rest. But why would anyone take months re-inventing the wheel and re-writing every part of this from scratch, when you could just buy a dev board and load Linux and Apache, or an RTOS and a TCP stack, in ten minutes, and know you will have a reliable solution? Who cares if we are now running a few million lines of code instead of a few thousand? Who cares if it takes 10 milliseconds instead of 10 microseconds, uses 10MB of RAM instead of 10KB, and 1 watt instead of 1 milliwatt? Nobody does! You need a solution that works now, not in several months; you need others to understand what you did so they can support it; and you need to do it for the least cost.

The problem, I guess, is that we developers have become spoiled by the power of our hardware; in 2025 it is trivial for a modern low-cost SoC to run an OS, plus the most badly written, unoptimised program on top of it, and still give users what they want. But it does mean that an iPhone 6 from ten years ago is now landfill, even though it was as powerful as a supercomputer from the eighties. So today we have potentially hundreds of separate modules and programs active, all written in different languages, all with their own bugs and vulnerabilities, all eating machine cycles and power, and all with their own API to call the next layer in the stack. This goes all the way down to the silicon where the binary machine code is executed.

Yet what if there were no stack, no operating system, no APIs, no frameworks, no databases or web servers? What if I prompted the AI with the exact specification of my GUI, told it what I want it to do in English/German/Chinese, and told it to write it in optimised machine code for specific hardware (an iPhone 6, for instance)? What if I then asked it to write all the tests for this, and to try every fault people have ever reported on similar devices? Could it do it?

Certainly not today, nor, I would say, in 2026 or 2027. But I will bet a few dollars that in 2028, or soon after, AI will be at a stage where it is able to do this; and it is likely that by the end of this decade there will be systems out there which use no human-written code!

There will also, of course, be no human who understands that code! “So what about support?” you may ask. Well, why would you want support when you could just enter the change you want, or the fault you need fixed, directly into the prompt, and have the AI run the whole process again and spit out another solid binary incorporating exactly what you asked for?

The other, I would say existential, danger is this: what else is the AI, or the specifier, including in this code that, let's face it, you will never find? Alignment, it seems, is everything, my friends!

So apologies for ruining your day. I am sure there are many sceptics out there, but whether you think this is utopia or dystopia, or quibble over the date, it is likely to happen sooner than you think. Also, like everything else, it will get better and better over time, and, with cheap enough chip fabrication or chip printers, it may at some point even be able to optimise directly at the hardware level. None of this is science fiction; it is simple extrapolation of current trends, and really just a case of engineering and time.

If you are interested in this subject, I go into the implications and politics of it in a lot more detail in the second book of my three-part science fiction series, The Summer of Reasoning.

This entry was posted in AI, AI Alignment, Efficiency, Electronics, Politics, Programming, Singularity, Speculation, The Future, The Summer of Reasoning. Bookmark the permalink.