Heatseekers can be very effective, but, as with any automated system, they aren't infallible.
Friendly fire happens even with human beings using small arms, just as strikes on civilian schools and hospitals happen because the intelligence told a human being that it looked like it could be a "terrorist training ground".
I'd be interested to see what the failure rate of an AI would look like: the actions it would have taken based on the available data versus the actions humans actually took, over a several-year sample. I have a feeling that either the AI will look terrible, or the human beings will look terrible, or they'll look pretty equal, with strange fringe cases where the AI is better than the human and vice versa. Judgement and authorization are interesting questions.
I guess my point was that, given my limited knowledge, heatseekers don't seem to be any less fallible than humans. I'm not suggesting "no worse than a human" should be the goal, but I'd say it's the bare minimum.
Precisely. All of these things can fail, with or without human involvement, and humans can fail just as easily. Whilst these are all absolutely horrible contraptions that shouldn't be necessary in relative modernity, it's important to look at the stats, but also to sanity-check an authorization with concepts like the two-man rule.
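To make the sanity-check idea concrete, here's a minimal sketch of a two-man-rule gate (a hypothetical illustration, not any real control system): an action is authorized only once two distinct parties have independently approved it, so no single mistake or rogue actor can trigger it alone.

```python
# Hypothetical sketch of a two-man rule: require approvals from a
# minimum number of *distinct* authorizers before an action is allowed.

class TwoManRule:
    def __init__(self, required=2):
        self.required = required   # distinct approvals needed
        self.approvals = set()     # IDs of authorizers who have approved

    def approve(self, authorizer_id):
        # A set means repeat approvals from the same person don't count twice.
        self.approvals.add(authorizer_id)

    def authorized(self):
        return len(self.approvals) >= self.required


gate = TwoManRule()
gate.approve("operator_a")
print(gate.authorized())       # one approval is never enough
gate.approve("operator_a")     # same person again: still only one approver
print(gate.authorized())
gate.approve("operator_b")
print(gate.authorized())       # two distinct approvers: action unlocked
```

The point of tracking distinct IDs rather than a simple counter is exactly the failure mode the rule exists to prevent: one person (or one faulty subsystem) "approving" twice.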
Whilst AI may indeed be superior to human beings in certain areas, now or in the future, human ethics, intuition, et al. are also very important and unlikely ever to be fully replaced. In the same breath, fuckups will always happen, by the very nature of every system and every human being being imperfect.
Something else you might be interested in (which you may already know about) is PAL systems for nuclear weapons: https://en.wikipedia.org/wiki/Permissive_action_link
You'll likely be interested in the "two-man rule": https://en.wikipedia.org/wiki/Two-man_rule