I don’t think you’re correct. I’ve watched junior/mid-level engineers figure things out solely by working on the cloud and scaling things to a dramatic degree. It’s really not rocket science.
I didn't say it's rocket science, nor that it's impossible without practical server experience, only that it's more difficult.
Take disks, for example. Most cloud-native devs I've worked with have no clue what IOPS are. If you saturate your disk, that's likely to cause knock-on effects like increased CPU utilization from IOWAIT, and since "CPU is high" is easy for anyone to understand, the seemingly obvious solution is to get a bigger instance, which, depending on the application, may inadvertently solve the problem. For an RDBMS, a larger instance means a bigger buffer pool / shared buffers, which means fewer disk reads. Problem solved, even though actually fixing the root cause would've cost a tenth or less of what bumping up the entire instance does.
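To make that diagnostic concrete, here's a minimal sketch of the check I mean, assuming a Linux host with the psutil package installed (the iowait field and the 20%/80% thresholds are illustrative assumptions, not universal rules):

```python
# Minimal sketch: tell "genuinely CPU-bound" apart from "disk-saturated".
# Assumes Linux; the `iowait` field is Linux-specific, and the thresholds
# below are arbitrary examples for illustration.
import psutil

# Sample CPU time percentages over a 1-second window.
cpu = psutil.cpu_times_percent(interval=1)

if cpu.iowait > 20:
    # Cores are idle, stalled waiting on disk: likely an IOPS problem,
    # which a bigger instance only masks via a larger buffer pool.
    print(f"iowait {cpu.iowait:.1f}%: likely disk-bound, check IOPS")
elif cpu.user + cpu.system > 80:
    # Cores are actually doing work: more/faster CPU may genuinely help.
    print(f"busy {cpu.user + cpu.system:.1f}%: genuinely CPU-bound")
else:
    print("CPU looks healthy; look elsewhere")
```

If iowait dominates, provisioning more IOPS (or fixing the access pattern) is the targeted fix; upsizing the whole instance only helps indirectly through a larger buffer pool.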
You might be making some generalizations from your personal experience. Since 2015, at all of my jobs, everything has been running on some sort of cloud. I've yet to meet a person who doesn't understand IOPS. If I were a junior, I'd just google "slow X potential reasons" (and from my experience, that's exactly what juniors tend to do). You'd most likely see some references to IOPS and continue your research from there.
We've all learned these things one way or another. My experience started around 2007ish, when I was renting cheap servers from some hosting providers. Others might be dipping their toes into readily available cloud infrastructure and learning it from that end. Both work.