Blackfriars Bridge, London, 2024

How will LLMs take our jobs?

March 16, 2025

It’s a pretty inescapable take these days in tech circles: if you don’t want to get left behind, you need to be using LLMs and become well-versed in how dramatically they can improve your productivity as a developer. Beyond their utility as “next generation” search engines and replacements for online knowledge bases like StackOverflow, many argue that it is now possible (or soon will be, depending on how you look at it) to use these models to generate large portions of production codebases.

LLMs at work

Of course, as of today at least, we are still not at the stage where large amounts of production code are being generated by LLMs. There are still a few legal and ethical hurdles to overcome before the cat is really out of the bag there, I think: since any company that actually values its own intellectual property will not allow its employees to upload source code to third-party servers, the ability of companies to train and serve their own LLMs, built around rapidly improving open source models, will have to progress significantly.

Closer to home though, I read a lot of tech blogs and comments on HackerNews and Lobsters, and I’m seeing a constant stream of posts and write-ups from people using LLMs to complete side projects. Since proprietary models are usually being used here (Claude, ChatGPT, etc.), these sorts of projects can give us a glimpse of what may be the future of coding at work. The consensus seems to be that a side project is no longer an idea you have, spend a couple of hours on, maybe learn a few things from, and then drop when life or a new side project gets in the way: now you can just chuck your idea into the model and, after a couple of hours of iterating, you have a working project.

To me, this all points to us being in the middle of a significant paradigm shift, akin to the transition from writing assembly to compiled programming languages. A potential future is unfolding before our eyes in which programmers no longer write in programming languages but in natural language, and generative AI handles the grunt work of actually writing the code, the same way a compiler translates your C code into machine instructions. (It’s worth noting that this workflow of “greenfield” projects is not really that similar to how most “at work” coding is done, where most tasks require changes to established “legacy” codebases rather than churning out thousands of tokens of new code.)

This raises the question: what of the future of our livelihoods as programmers? Is a future of writing prompts and wrangling with LLMs something you could conceivably continue to call “engineering”? Will there come a time when software becomes so commodified that this job so many of us love ceases to exist, at least in the form we have come to know over the past half a century?

I think there are still an awful lot of unknowns at this point. We are probably at or close to the peak of the hype cycle, so it’s very hard to see the future in between all the bullshit, invented use cases, marketing speak and VC money. But I think it’s at least possible to identify several possible scenarios, which I’ll go into in a bit of detail below, noting that any actual future from this point is probably going to be some linear combination of them rather than any single definite path.

Give up

If you don’t like the look of where these changes are taking us, you can always go and learn a new skill set you are betting is immune to them: gardening, carpentry, plumbing, psychotherapy, construction. This has been a desirable path for many burnt-out tech folks over the years, even before the latest AI hype wave: it can be exhausting to keep up with the latest changes and keep yourself current and relevant, particularly as you get older, and as the hype waves become more and more depressing in the vision they present for our collective future as a species. Presumably this path will be most attractive to those who have already reached financial independence through their career in technology and have the privilege to switch off and go and build chairs or milk cows (or both).

Join ’em

If you can’t give up, like most people with families and mortgages and other obligations in the real world can’t, or you don’t want to, you can always try to be one of the folks creating the models and the value from the models. Surely this is a career path with its peak still to come: it is already extremely lucrative, and there will be more and more jobs in these kinds of roles in the coming decade. Good luck sorting the real, interesting and challenging opportunities from the hype though, and note that the learning curve here is steep, the base skills required are very different from your average tech job (maths, stats, ML, GPU computing), and it’s currently an extremely competitive market attracting the best of the best talent.

Climb the ladder

Another option for folks with the right dispositions is to spend more effort trying to climb the ladder into management positions, since presumably these will be immune for longer to any disruptive changes caused by the commodification of the actual “code generation” process.

I like the description that Nate Silver gives in this post, where the “market value $V$ of [a knowledge worker is] dictated by the function” $V= G\times S \times P$, where $G$ is general intelligence, $S$ is domain knowledge and $P$ is “soft” skills — communication, management of people, and so on. They are “multiplicative … because any of [them] are potentially limiting factors [on the others]”. So in a world where the first two multiplicands are devalued, the third becomes a lot more valuable. Presumably those “further up the ladder” will benefit more from this shift, since they are (at least ostensibly) there because of their high values of $P$.
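To make the “multiplicative” point concrete, here’s a small worked illustration (the two-worker comparison and the subscripts are my own framing, not Silver’s): if LLMs compress the differences in $G$ and $S$ between two knowledge workers, the ratio of their market values ends up driven almost entirely by their soft skills,

$$\frac{V_1}{V_2} = \frac{G_1 S_1 P_1}{G_2 S_2 P_2} \approx \frac{P_1}{P_2} \quad \text{when } G_1 \approx G_2 \text{ and } S_1 \approx S_2,$$

and, because the factors multiply rather than add, a near-zero value of any one of them (say $P \approx 0$) drags $V$ towards zero no matter how strong the other two are, which is exactly the “limiting factor” point in the quote.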

Cross your fingers

You could also cross your fingers and hope it pans out differently — particularly if, like me, you find the vision of the future spruiked by the most bullish LLM proponents a little ghoulish and offensive to our collective humanity. After all, there’s still only a very small chance (in my opinion at least) that the current technologies get us to the coveted AGI goal, which would truly be a world-changing technological revolution surpassing the Industrial Revolution or the invention and adoption of the internet. There are three subpaths here:

AI winter

Hope this hype wave leads to another “AI winter”, as it has before, in which case traditional tech could remain the way it is for another boom/bust cycle or two. Each passing day that generative AI companies burn millions in cash without an actual profitable business model makes this scenario incrementally more likely. On the other hand, each improvement in LLM capabilities, reduction in cost, and breakthrough in scale makes it less likely. There are a lot of people waiting to see how all this is going to pan out.

AI “summer”

I’m not sure of a better name for this scenario. But it’s interesting to note that in the “LLMs are just one level of abstraction further up, like moving from assembly to compiled languages” analogy I mentioned in the introduction, there are a lot more programmers making a living today writing Python and JavaScript than ever made a living writing x86 instructions. Maybe this change “democratises” programming and software, and opens up a whole host of new opportunities while keeping the “old guard” employed as the “brokerage” layer between the old and the new (see “Linux greybeards” as the example from the previous generation).

Long-shot

Be part of one of the efforts trying to get to AGI with a different approach (“agents”, semantics, quantum, something else entirely). Of course you need to pick which of these approaches you think has the highest likelihood of success, and then have it succeed, which is a sort of startup-like trajectory that will not sound appealing to the risk-averse. It also requires unconventional skill sets, similar to the “join ’em” scenario above.

Make sourdough

Finally, you could bet on the “human-made becomes artisanal” path, where software and technology untouched by generative AI develop a premium image, offering a path to continued employment for talented “old school” developers willing to put in the effort and resist the hype. Presumably, when all our food is made by robots, rich folks with a nostalgic bent will pay top dollar for food made by actual human chefs: perhaps the same will turn out to be true for software.

Conclusion

I don’t know how any of this is going to play out, so it’s hard to decide which of the above scenarios to invest my efforts in to ensure an interesting career with the potential for growth, learning and fulfilment. For now I am trying to keep my options open: keeping up with this very fast-changing field and at least theoretically remaining able to “pivot” if I see things heading more definitely in one direction.

I said a few times above that thinking about the LLM future depresses me. I suppose this is mostly because I really like playing around with computers and using them to build interesting things, and these models feel like they are going to ruin a bit of that fun. I like the feeling of being creative and inventive, and these models give us the illusion that a computer can do that for us, when really all it is doing is garbling together some high-dimensional average of what other people have done before. In this environment, the premium on creativity and “truly new” ideas will presumably become even higher, as anything that can be generated by this averaging process becomes commoditised. So if I have to focus on one thing, it will be this: trying to remain creative and original, and not succumbing to the urge to use LLMs to do that kind of work for me. I think this strategy, alongside continuing to keep up with developments in the space, learning how to use whichever tools gain wide adoption, and maintaining a solid understanding of the strengths and weaknesses of this current wave of AI, leaves as many of the potential paths above open as possible during this uncertain and turbulent time.