What AI means for junior software engineers
AI is eating the Software Engineering world, and it is doing so fast. Compare the capabilities of Microsoft's Copilot, lauded as state of the art only two years ago, to AI agents of the caliber of Google's Jules today, and this becomes very apparent. While I worry for all of us, I worry in particular for folks who are about to take up programming or are just starting their careers.
There are a couple of cases that are worth separating out.
If we are able to automate away programming entirely, this worry is ill-founded. I would liken it to elevator operators: while we still enjoy the service elevators provide today, there are no more elevator operators, and no one knows how to operate one beyond pressing a button. That's okay.
What happens when, as seems likely for at least the foreseeable future, we end up in a hybrid world, where machines and engineers continue to work together to create code? In that case, a lot of the easy, low-hanging fruit can largely be automated away, and what remains for the human is to steer and to intervene surgically on the most difficult pieces of code. But juniors are ill-positioned to do this.
In the software industry, new grads typically have to prove themselves by undergoing an almost torment-like regime as code monkeys, pushing out as much code as possible. You can observe this most easily in competitive internships, which are organizationally set up to select for the interns with the highest throughput. There are arguments to be made for this selection strategy: it gives juniors a large amount of experience quickly, it instills an objective (even if ridiculous) criterion, and it largely seems to just work. Without a doubt, it also filters for motivated people who want to do a lot. Strangely, I have also noticed that at this level quantity and quality are often positively correlated. However, in this new AI age, where code is a commodity, producing large amounts of code will no longer be a differentiator. This has two implications:
- The need for junior engineers, who write the bulk of the grunt-work code, declines significantly.
- Companies will need to find a new metric by which to rate junior engineers, since, thanks to AI, anybody can produce huge amounts of code. In particular, I worry that the best junior engineers will lose the ability to stand out.
While worrying about this long term would be foolish (presumably, if there were a continued need for programmers, people would find a way to fill it), one might argue that we are seeing the effects already, and that it is this intermediate generation that is hit the hardest: I have heard from a couple of university professor friends that Computer Science applications are decreasing for the first time in decades (beware, this is very early data from a handful of personal connections covering one or two semesters, and it might well be noise), as are job postings for junior engineers across the industry.
I have another worry from a more education-specific angle: if AI can get the code right 99% of the time, how do you ever learn, or stay motivated to learn, the legwork of getting type definitions right, of figuring out that regular expression, or of fixing that nasty runtime error? Now, you might say that (a) the history of computing is one of abstraction and AI is simply one step up, and that (b) we have had tools such as calculators, and yet people still know the mechanics of multiplication if they have to do it by hand. I agree, but I would also say that AI is different: calculators are not stochastic machines, at least not for things such as multiplication. AI, however, is: there is always a chance that its answer is wrong, hallucinated or otherwise. So it provides a leaky abstraction at best. Save for bugs in the compiler, we can safely forget about machine code. We cannot do the same in the AI age: we are simply not used to the compiler working only 99% of the time. From a practical perspective today, as long as there is a real chance of unintended behavior, we need to maintain the knowledge and skills to dive in and fix it. I am not exactly sure how to do this, but it seems obvious that we need to strengthen the entire debugging, testing, and verification arm of our Computer Science curricula.
Obviously, Software Engineering and coding will not be the only areas affected by AI. But as it stands, it is one of the most heavily influenced, and the one where changes are deployed the fastest. It is a fascinating time to be part of the AI wave from the inside of where it all happens, but I am glad I can do it from a slightly elevated spot, however little time that may buy before the rising tide arrives.