The issue isn't whether "his point," as you put it, is correct. If I said people should safety test the space shuttle to make sure the phlogiston isn't going to overheat, I may be correct in my belief that people should "safety test" the space shuttle, but I'm still a crank, because phlogiston isn't a real thing.
AI alignment is challenging because we're trying to make accurate predictions about unusual scenarios for which we have essentially zero data. No one can credibly claim expertise on what would constitute evidence of a worrisome anomaly. Jeremy Howard can't credibly say that a sudden drop in the loss function is certainly nothing to worry about, because the entire point is to think about exotic situations that don't arise in the course of ordinary machine learning work. And the "loss" vs. "loss function" thing is just silly gatekeeping; I worked in ML for years, and serious people generally don't care about minor terminology quibbles like that.
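To make the "sudden drop in the loss" point concrete, here's a minimal sketch of what flagging an anomalous drop during training might look like. This is purely my own illustration, not anything Howard or anyone else proposed; the window size and threshold are arbitrary assumptions, which is exactly the problem, since nobody actually knows what the right signal or cutoff would be.

```python
# Hypothetical illustration only: a crude "anomalous loss drop" detector.
# The window and drop_factor values are arbitrary assumptions, not a real
# alignment test.
from collections import deque


def make_loss_monitor(window: int = 100, drop_factor: float = 0.5):
    """Return a callable that flags a loss value far below the recent average."""
    history = deque(maxlen=window)

    def check(loss: float) -> bool:
        # Flag if the new loss is less than drop_factor times the rolling mean.
        suspicious = bool(history) and loss < drop_factor * (sum(history) / len(history))
        history.append(loss)
        return suspicious

    return check


if __name__ == "__main__":
    monitor = make_loss_monitor(window=5, drop_factor=0.5)
    for step, loss in enumerate([2.0, 1.9, 1.85, 1.8, 0.4, 1.75]):
        if monitor(loss):
            print(f"step {step}: loss {loss} is a sudden drop relative to the recent average")
```

Whether a check like this would ever mean anything is precisely what's in dispute; the code is trivial, the epistemics are not.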
That's not what the conversation was about; you're just doing the thing Howard described, where you squint and imagine he was saying something other than what he actually said.