
I am in the undergraduate class of '28 and am worried about the impact AI will have. I am already impressed with the ability of current LLMs to code (and assume this will only improve), and intimidated by the amount of competition in CS specifically. I foresee the junior developer role being phased out or compressed, entry-level experience requirements increasing, and the field becoming far more competitive than it is now. I consider myself interested in coding, but not especially gifted at it, and fear that if I am not already better than AI at coding, I never will be.
Edit: I'd appreciate if you leave your reasoning in the comments!
I'm a professional software engineer, senior level. I recommend Mech E, for the reasons below.
AI will play a role in all forms of engineering, because it's an incredibly useful tool. But by drastically increasing productivity, it is going to create a glut of software engineers for a while, and entry-level roles will absolutely get squeezed.
Mech E will always have more humans in the loop, due to the physical nature of prototyping and testing.
My reasoning for Mech E: assuming you have some inherent interest in Mech E (and in the way physical things work), and didn't just pick it as a backup major at random, the red flag for me is that you say you're not super gifted at coding. Coding candidate slates are full of people who are, who love it, and who therefore spend a lot of time getting better at it; whether they do this in an AI-augmented way is more of a detail.
If AI development continues on its default trajectory, it's not going to matter what you major in: either we all die in the next few years, or alignment somehow works out and AGI automates everything, including plumbing, no more than a decade later. Do what you will enjoy.
I also recommend taking action to prevent AI development from continuing on its default trajectory. (e.g. join PauseAI, or at least shoot your Rep and Senators an email to say "I'm worried about this and you should be doing something").
I guess I just want to maximize my impact in whatever time we have left. If that means working for 4 years instead of 2, that would be enough for me. I am skeptical that I can actually take steps to boost the chances that we as a society address AI safety, so I would rather give to effective charities to alleviate suffering in the short term, perhaps increasing short-term happiness.
@OliverKuperman Skepticism is understandable. I'm personally convinced by PauseAI's Theory of Change; not that it will definitely work, but that it's the best plan I've seen. In the face of extreme risks, I think it's a little premature to give up before trying.