In other news, police are not needed because everyone should just behave.



This is more analogous to a company having an internal "not doing crime" division. I do mention in my original post that having specialist skills within legal or compliance to handle specific legal and ethical issues may make sense. But having one team be the "AI police" while everyone else just tries to build AI without responsibility baked into their processes is likely to set up a constant tension, much like the battle a "data privacy" team often fights to get people to build privacy practices into their systems and workflows.


But there are no "responsible X" teams for many X, yet AI gets one.

(Here X is a variable, not Twitter.)


There are plenty of ethics teams in many industries; I don’t think this is a great point to make.


Police are needed in society when there's no other way to enforce rules. But inside a company, you can just fire people who misbehave. That's why you don't need police inside your company: you only need them at the base layer of society, where autonomous citizens interact with no other recourse between them.


People do what they are incentivized to do.

Engineers are incentivized to increase profits for the company because impact is how they get promoted. They will often pursue this to the detriment of other people (see: prioritizing anger in algorithmic feeds).

Doing Bad Things with AI is an unbounded liability problem for a company, and it's not the sort of problem that Karen from HR can reason about. It is in the best interest of the company to have people who 1) can reason about the effects of AI and 2) are empowered to make changes that limit the company's liability.


The problem is that a company would only fire the cavalier AI researchers after the damage is done. Having an independent ethics department means that the model wouldn't make its way to production without at least being vetted by someone else. It's not perfect, but it's a ton better than self-policing.


The "you" that fires people that misbehave is what, HR?

It takes quite some knowledge and insight to tell whether someone on the AI team, or even the entire AI team, is up to no good.

It only makes sense for the bosses to delegate overseeing research as sensitive as that to someone with a clue. Too much sense for Facebook.



