
All of the above! Additionally... I think AI companies are trying to steer the conversation about safety so that when regulations do come in (and they will), the legal culpability lies with the user of the model, not the trainer of it. The business model doesn't work if you're liable for harm caused by your training process - especially if the harm is already covered by existing laws.

One example: your model is used to spot criminals in video footage, and it turns out its bias flags one socioeconomic group far more often than another. Most western nations have laws protecting the public against that kind of discrimination (albeit unevenly enforced), and the fines are pretty steep.
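
For what it's worth, the disparity regulators look for is straightforward to measure. Here's a minimal sketch of a "four-fifths rule" style disparate-impact check - the function names, group labels, and data are all made up for illustration, not from any real case:

    # Minimal sketch of a disparate-impact ("four-fifths rule") check.
    # Group labels and model outputs below are invented for illustration.
    from collections import Counter

    def selection_rates(groups, flagged):
        """Fraction of each group that the model flagged."""
        totals = Counter(groups)
        hits = Counter(g for g, f in zip(groups, flagged) if f)
        return {g: hits[g] / totals[g] for g in totals}

    def disparate_impact(groups, flagged):
        """Ratio of the lowest selection rate to the highest.
        Under the four-fifths rule of thumb, < 0.8 suggests adverse impact."""
        rates = selection_rates(groups, flagged)
        return min(rates.values()) / max(rates.values())

    # Hypothetical example: who a model flagged in footage, by group.
    groups  = ["A", "A", "A", "A", "B", "B", "B", "B"]
    flagged = [ 1,   0,   0,   0,   1,   1,   1,   0 ]

    print(selection_rates(groups, flagged))   # {'A': 0.25, 'B': 0.75}
    print(disparate_impact(groups, flagged))  # 0.33... -> well below 0.8

The point is that the evidence of bias is cheap to produce after the fact, which is exactly why who carries the liability matters so much.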

Companies have already used "AI" successfully to decide who gets loans, and those models turned out to be biased. Nothing happened legally to those companies.