AI, LLMs, Coding, and Software Development

Posted on: March 13, 2025
Yet Another Article About AI (YAAAI). This is what this is. But not really.
There are six well-known levels of "programming languages", so to speak, in computer architecture. The lowest, level 0, is the digital logic level, where you deal with logic gates and bits. Then you have the microarchitecture level, where you write commands for the CPU so it manipulates memory and the ALU (PCout, MARin, READ, clear Y, set Cin, ADD, Zin, etc.). Above that sit the ISA, operating system, and assembly levels. Finally, you have the problem-oriented level, where every programming language you can think of resides (Python, Java, C, C++, C#, Go, Rust, JavaScript, Gleam...). And then there is level 6. This is where I would put "prompt engineering" for software development.
Now here's the first issue: LLM prompts are just plain English. LLMs take any kind of text input (or nowadays image and video as well) without any binding structure or rules that need to be followed, try to understand it, and produce an output. And that's why they're great! Great for the general public, great for general-purpose use. Not great for software development.
Introducing AI agents
AI agents are given context. This context is written in a very specific way ("You are a highly skilled software developer...") and is prompted, or "engineered", so that the LLM behaves in a more desirable way; for example, more like a software developer. It will (hopefully) produce better-quality code than a simple "Make me a 2D platform adventure game in Python" ChatGPT prompt. But is it good enough? Good enough to build actual deployable apps, not just prototypes? Good enough to replace software developers? At the time of writing, with Claude 3.7 Sonnet in extended thinking mode...
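To make "given context" concrete: most chat-style LLM APIs take a list of role-tagged messages, and the agent's persona lives in a "system" message that is prepended to whatever the user types. A minimal sketch (the function name and wording are illustrative, not any specific vendor's API):

```python
# Sketch: the only structural difference between a bare ChatGPT-style prompt
# and an "agent" is often just a system message carrying the engineered context.
def build_request(system_context: str, user_prompt: str) -> list[dict]:
    """Assemble the role-tagged message list a chat LLM API typically expects."""
    messages = []
    if system_context:
        messages.append({"role": "system", "content": system_context})
    messages.append({"role": "user", "content": user_prompt})
    return messages

bare = build_request("", "Make me a 2D platform adventure game in Python")
agent = build_request(
    "You are a highly skilled software developer. Write production-quality "
    "code and ask clarifying questions before making assumptions.",
    "Make me a 2D platform adventure game in Python",
)
```

The user's prompt is identical in both cases; everything the agent does "better" hangs off that one extra system message.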
No.
Structure, or The Lack Thereof
Every programming language has rigid syntax rules that must be adhered to, or the compiler or interpreter will reject your instructions. When we attempt to use LLMs to produce code, we are literally attempting to use them as compilers from level 6 to level 5 or lower. Neural nets take input, process the input, and produce output. The input currently has no defined structure, and even with attempts at adding structure to AI agents, the English language is too vague. The "compiler" now has to take this structureless input and somehow compile it into working level 5 code. This is a problem.
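The contrast is easy to demonstrate with Python itself, whose built-in `compile()` enforces exactly the rigid syntax described above:

```python
# A compiler or interpreter rejects malformed input outright; an LLM will
# happily accept it. Python's built-in compile() makes the contrast concrete.
def accepts(source: str) -> bool:
    """Return True if the Python compiler accepts the source string."""
    try:
        compile(source, "<prompt>", "exec")
        return True
    except SyntaxError:
        return False

print(accepts("x = 1 + 2"))       # True: valid level 5 code
print(accepts("make me a game"))  # False: a compiler refuses plain English
```

Feed the second string to an LLM and it will cheerfully produce *something*; feed it to a compiler and you get a hard error. That hard error is the structure we currently lack at level 6.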
Levels of Abstraction
We are now hitting the wall of attempting to abstract level 5 code even further. And here is my argument: plain English is simply too high an abstraction to be considered level 6. My idea of a level 6 language that an AI compiler could actually compile into working code looks something like this:
1. MODE = development // tells the LLM to switch mode from general-purpose to specifically development
2. COMMAND = build // tells the LLM that it will need to build a program. Other commands could include DEBUG, TEST, and DEPLOY
3. STACK = React, Hono, PostgreSQL // tells the LLM what stack to use
4. OBJECTIVE = game // this can be replaced by "website" or "tool" or a variety of objectives for the program. This parameter is sensitive and risks becoming too vague unless the objective of the program is crystal clear to the AI compiler.
5. FRONTEND_DETAILS = 2D, single-player, user controls main character, controls = [move, left, right, jump, double-jump, attack, dash, special attack], ui = [health bar, stamina, character level, inventory, stage level, boss health bar], enemies = [mob1, mob2, mob3, mob4, mob5, boss1, boss2, boss3, boss4, boss5], checkpoints // again, this can be extremely sensitive and complicated; ideally the AI compiler could also be given design frames from something like Figma
6. BACKEND_DETAILS = users, save progress, load progress // anything beyond CRUD operations will become sensitive input for the AI compiler
Steps 4 and beyond are where the input must be carefully crafted for the AI compiler to compile correctly and output working code. In fact, we should allow the AI compiler to provide feedback, such as syntax errors or logic errors, at this stage. This is a level 6 language because it no longer deals with data types and data structures (as much as possible, since we still seem to need something that looks like an array). There are no functions to define or classes to declare explicitly. And obviously no memory management, no pointers. Like any compiler, the AI compiler will need to provide feedback or ask for clarification when it hits gaps in information or error-causing inputs. If all of this is achieved, we have a level 6 language that uses AI as its compiler. But is this what people want? Perhaps not. Because this still feels like coding. And people don't want to code or engineer, for some reason. But they still want something that's engineered. ¯\_(ツ)_/¯
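The syntax-error feedback step could itself be ordinary, deterministic code sitting in front of the model. A minimal sketch of a validator for the hypothetical level 6 input above (the key names and required-key rule are my own illustrative assumptions):

```python
# Sketch: a deterministic front end for the hypothetical level 6 language.
# It parses KEY = value lines and reports "syntax errors" before anything
# ever reaches the LLM. Keys and rules here are illustrative assumptions.
REQUIRED_KEYS = {"MODE", "COMMAND", "STACK", "OBJECTIVE"}

def parse_level6(source: str) -> tuple[dict, list[str]]:
    """Parse KEY = value lines; return (spec, diagnostics)."""
    spec, errors = {}, []
    for lineno, raw in enumerate(source.strip().splitlines(), start=1):
        if "=" not in raw:
            errors.append(f"line {lineno}: expected KEY = value, got {raw!r}")
            continue
        key, value = (part.strip() for part in raw.split("=", 1))
        spec[key] = value
    for key in sorted(REQUIRED_KEYS - spec.keys()):
        errors.append(f"missing required key: {key}")
    return spec, errors

spec, errors = parse_level6("""
MODE = development
COMMAND = build
STACK = React, Hono, PostgreSQL
""")
print(errors)  # the "AI compiler" reports: missing required key: OBJECTIVE
```

The point is that once the input has structure, rejecting bad input stops being an AI problem at all; only the semantic gaps (a vague OBJECTIVE, an underspecified FRONTEND_DETAILS) would need the model's clarifying questions.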