Prologue
With everyone so hyped about ChatGPT and similar tools, I sometimes feel like I’m missing something really important by doing things the old way, without some miracle machine generating code for me. At least half of my LinkedIn feed is people sharing their favorite agents and prompts, along with AI-generated media. Maybe I really am just a dinosaur whose destiny is to be left behind by a rapidly changing industry? For better or for worse, it currently doesn’t look anything like that. In this post, I’ll try to argue that today’s “AI” tools, as impressive as they are, still can’t provide any meaningful help in creating software.
Not as good as it seems
No AI, just LLMs
The biggest disappointment about today’s artificial intelligence is that none of it is actual intelligence. At least, not in the way we humans usually define it. There is actually a whole family of technologies that people call AI nowadays (including machine learning, computer vision, etc.), but in this article let’s focus on the single technology backing all the chatbots and coding assistants.
Large Language Models (LLMs for short) are machines that parse your input and generate some text (or code) in response. The “thought process” behind an LLM’s text generation is literally modeling the language. Imagine the infinite monkeys with typewriters from the famous theorem. An LLM just optimizes the process a bit (see the sketch after this list) by:
- working with tokens (small chunks of text, usually whole words or word fragments) instead of single characters;
- choosing the most probable (or nearly most probable) next token based on statistics from a large training dataset.
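To make that concrete, here is a toy next-token generator in Python. This is a minimal sketch, not how any production model works: the one-line corpus, the bigram count table, and the weighted random choice all stand in for a neural network trained on terabytes of text. The core loop, though, is the same idea: pick a likely next token, append it, repeat.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus,
# then repeatedly pick a likely next token. Real LLMs do the same thing at
# a vastly larger scale, with neural networks instead of count tables and
# subword tokens instead of whole words.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follow = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow[current][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        candidates = follow[out[-1]]
        if not candidates:  # token never seen mid-sentence; stop
            break
        # Weighted random choice: the "almost most probable" next token.
        tokens, weights = zip(*candidates.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat the cat sat"
```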
What’s really important about the process described above is that no thinking or reasoning is involved. The result might look meaningful because that’s what the model was made for in the first place, but by no means is it guaranteed to be coherent, logical, unbiased — in short, useful for real-world applications.
Unskilled labor
People sometimes like to compare the models’ performance with that of junior developers. Some claim that in the near future most entry-level positions will be replaced by AI (people making these claims usually prefer that term to “LLM”). Personally, I don’t consider this possible, due to how differently humans and LLMs operate. A human, when given some task, can:
- ask for additional context when necessary, and even question the necessity of the task if they feel it is not worth doing, as opposed to LLMs generating text without a moment of doubt;
- push their own boundaries of knowledge and actually learn something new in the process, as opposed to LLMs operating strictly within the limits of their datasets;
- correct any mistakes when they are pointed out, without creating new ones, as opposed to LLMs just generating some new output based on the extended context, with no guarantee that this output is correct or even fixes the original error.
So while working with humans (yourself included) is a process of gradual improvement, with LLMs it is a whole other story. When delegating any real task to an LLM, you are condemning yourself to a miserable life of constantly cleaning up errors after a stupid machine, with zero probability of that machine getting better and actually learning something over time. You can either do that or, as an alternative, embrace the stupidity and just blindly accept everything, even if it doesn’t actually solve your problems. Why bother with it at all then?
Actual uses
Having said all the above, it would be unfair to deem LLMs completely useless. There certainly are some cases where they shine, if you take them for what they are. Just make sure not to make them your only tool.
Learning things
While it is totally wrong to perceive an LLM as a mentor, it can still be used to reinforce your learning. For example, you can use a model to:
- rephrase some text to understand it better;
- discover some sources;
- summarize a source to quickly determine whether it contains the information you need (a small sketch of this follows the list).
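For the summarization case, here is what that might look like in code. This is just one possible sketch: it happens to use the OpenAI Python SDK, but any chat-style LLM API would do, and the model name and file path are placeholders, not recommendations.

```python
# A minimal sketch of the "summarize a source" use case. Assumes the
# OPENAI_API_KEY environment variable is set and that article.txt is a
# hypothetical local copy of the source you want to triage.
from openai import OpenAI

client = OpenAI()

article = open("article.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whatever model you use
    messages=[
        {"role": "system",
         "content": "Summarize the following text in three sentences."},
        {"role": "user", "content": article},
    ],
)
print(response.choices[0].message.content)
```

Note that this fits the triage framing above: the summary tells you whether the source is worth reading, not what to think about it.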
Proofreading
Not everyone has a professional editor to read their texts and point out silly mistakes. A well-trained LLM might become one for you. Just make sure to take its stylistic advice with a grain of salt: apply a suggestion only if you actually agree with it and it doesn’t erase your personal style and way of thinking. Otherwise, you risk getting the most generic shit instead of your own text.
Looking things up
An LLM is sometimes a good alternative to a search engine. I used to be skeptical about this, but nowadays I often find myself reading Google’s AI response before even clicking the first search result. I wouldn’t entrust AI with any serious questions, but for a simple syntax lookup or other trivial tasks it performs just fine.
Doing monkey assignments
When I was a university student, these were relatively common: say, forcing everyone to write an essay that nobody is ever going to read. When assigned such a task, the first thing you should do is, of course, reconsider your life choices and think about how you got into this position. Maybe you really should be doing something that doesn’t involve pointless labor instead. If there is no getting around it, though, then an LLM is a great way to spare yourself the pleasure.
Epilogue
LLMs definitely have their cool uses, but right now coding just isn’t one of them. You shouldn’t be worried about losing your programming job to an AI, as long as you actually use your head when coding.