It’s exhilarating to think that, with the help of generative AI, anyone who can write can also write programs. It’s not so simple.
By Michael L. Littman
In 2017, Google researchers introduced a novel machine-learning architecture called the “transformer” for processing language. While they were mostly interested in improving machine translation — the name comes from the goal of transforming one language into another — it didn’t take long for the AI community to realize that the transformer had tremendous, far-reaching potential.
Trained on vast collections of documents to predict what comes next based on preceding context, it developed an uncanny knack for the rhythm of the written word. You could start a thought, and like a friend who knows you exceptionally well, the transformer could complete your sentences. If your sequence began with a question, then the transformer would spit out an answer. Even more surprisingly, if you began describing a program, it would pick up where you left off and output that program.
Programming, however, has long been recognized as difficult, with its arcane notation and unforgiving attitude toward mistakes. It’s well documented that novice programmers struggle to correctly specify even a simple task like computing a numerical average, failing more than half the time. Even professional programmers have written buggy code that has crashed spacecraft, cars, and even the internet itself.
So when it was discovered that transformer-based systems like ChatGPT could turn casual human-readable descriptions into working code, there was much reason for excitement. It’s exhilarating to think that, with the help of generative AI, anyone who can write can also write programs. Andrej Karpathy, one of the architects of the current wave of AI, declared, “The hottest new programming language is English.” With amazing advances announced seemingly daily, you’d be forgiven for believing that the era of learning to program is behind us. But while recent developments have fundamentally changed how novices and experts alike might code, the democratization of programming has made learning to code more important than ever, because it has empowered a much broader set of people to harness its benefits. Generative AI makes things easier, but it doesn’t make them easy.
There are three main reasons I’m skeptical of the idea that people without coding experience could trivially use a transformer to code. First is the problem of hallucination. Transformers are notorious for spitting out reasonable-sounding gibberish, especially when they aren’t really sure what’s coming next. After all, they are trained to make educated guesses, not to admit when they are wrong. Think of what that means in the context of programming.
Say you want to produce a program that computes averages. You explain in words what you want, and a transformer writes a program. Outstanding! But is the program correct? Or has the transformer hallucinated a bug into it? The transformer can show you the program, but if you don’t already know how to program, that probably won’t help. I’ve run this experiment myself, and I’ve seen GPT (OpenAI’s “generative pre-trained transformer,” an offshoot of the Google team’s idea) produce some surprising mistakes, like using the wrong formula for the average or rounding all the numbers to whole numbers before averaging them. These errors are small and easily fixed, but only if you can read the program the transformer produces.
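To make this concrete, here’s a minimal sketch of the kind of slip I’m describing (my own illustration, not verbatim GPT output). Both functions run without complaint; only a reader who can follow the code will notice that the second one quietly rounds before averaging.

```python
def average(numbers):
    # Correct: divide the sum of the values by their count.
    return sum(numbers) / len(numbers)

def average_with_rounding_bug(numbers):
    # The hallucinated version: rounding each value to a whole
    # number *before* averaging silently changes the answer.
    return sum(round(n) for n in numbers) / len(numbers)

data = [1.4, 2.4, 3.4]
print(average(data))                    # ~2.4, the true mean
print(average_with_rounding_bug(data))  # 2.0, which runs fine but is wrong
```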
It might be possible to work around this challenge, partly by making transformers less prone to errors and partly by providing more testing and feedback so it’s clearer what the programs they output actually do. But there’s a deeper and more challenging second problem: it’s actually quite hard to write verbal descriptions of tasks, even for people to follow. This should be obvious to anyone who has tried to follow instructions for assembling a piece of furniture. People make fun of IKEA’s instructions, but they may not remember what the state of the art was before IKEA came on the scene. It was bad. I bought a lot of dinosaur model kits as a kid in the ’70s, and it was a coin flip whether I’d succeed in assembling any given Diplodocus.
Some collaborators and I are looking into this problem. In a pilot study, we recruited pairs of people off the internet and split them into “senders” and “receivers.” We explained a version of the averaging problem to the senders and tested them to confirm that they understood our description. They did. We then asked them to explain the task to the receivers in their own words. They did. Finally, we tested the receivers to see if they understood. Once again, it was roughly a coin flip whether the receivers could do the task. English may be a hot programming language, but it’s almost as error-prone as the cold ones!
Finally, there’s a third problem: viewing programming broadly as the act of making a computer carry out the behaviors you want suggests that, at the end of the day, you can’t replace the individuals deciding what those behaviors ought to be. Generative AI can help express your desired behaviors more directly in a form that typical computers can carry out, but it can’t pick the goal for you. And the broader the array of people who can decide on goals, the better and more representative computing will become.
In the era of generative AI, everyone has the ability to engage in programming-like activities, telling computers what to do on their behalf. But conveying your desires accurately — to people, to traditional programming languages, or even to newfangled transformers — requires training, effort, and practice. Generative AI is helping to meet people partway by greatly expanding the ability of computers to understand us. But it’s still on us to learn how to be understood.
Michael L. Littman is University Professor of Computer Science at Brown University and holds an adjunct position with the Georgia Institute of Technology College of Computing. He was selected by the American Association for the Advancement of Science as a Leadership Fellow for Public Engagement with Science in Artificial Intelligence. He is the author of “Code to Joy.”