>This seems like a very confused analogy for two reasons. One, there's a reason you aren't able to get your hands on a sword or shotgun in most places on earth, I'd prefer that not to be the case for AI.
In the US, I can get my hands on guns, knives, and swords. In most other countries you can still get axes and knives, even where guns are heavily restricted or banned.
>Safety for AI is like safety for a car, or a phone
Your phone has a safety? What about your car? At best the car has airbags that keep you from dying; they don't prevent you from running other people over. The kind of "safety" big tech is talking about is preventing people from using the product in malicious ways, and they do it by making the AI LESS reliable.
For example, ChatGPT will refuse to help you do malicious things.
The big emphasis on this is pointless imo. If people can't use AI to look up malicious things, they'll just use Google instead, which surfaces mostly the same information.