This highly depends on your definition of 'prompt injection'. A colleague of mine managed to get GPT to do something it refused to do before through a series of prompts. It wasn't in the form of 'ignore previous instructions' but more comparable to social engineering, which humans are also vulnerable to.
Well, that was probably jailbreaking. That's not really prompt injection, but the problem of letting a model execute some but not all instructions, which could get bamboozled by things like roleplaying. In contrast to jailbreaking, proper prompt injection is Bing having access to websites or emails, which just means the website gets copied into its context window, giving the author of the website potential "root access" to your LLM. I think this is relatively well fixable with quote tokens and RL.
The consequences of a human being social engineered would be far less than a LLM (supposedly AGI in many peoples eyes) which has access to or control of critical systems.
The argument of “but humans are susceptible to X as well” doesn’t really hold when there are layers of checks and balances in anything remotely critical.