
the people I’ve seen doing responsible AI say they have a hell of a time getting anyone to care about responsibility, ethics, and bias.

of course the worst case is when this responsibility is both outsourced (“oh it’s the rAI team’s job to worry about it”) and disempowered (e.g. any rAI team without the ability to unilaterally put the brakes on product decisions)

unfortunately, the idea that AI people effectively self-govern without accountability is magical thinking




The idea that any for-profit company can self-govern without external accountability is also magical thinking

A "Responsible AI Team" at a for-profit was always marketing (sleight of hand) to manipulate users.

Just see OpenAI today: safety vs profit, who wins?


> Just see OpenAI today: safety vs profit, who wins?

Safety pretty clearly won the board fight. OpenAI started the year with 9 board members and ended it with 4, and 4 of the 5 who left were interested in commercialization. Half of the current board members are also on the board of GovAI, which is dedicated to AI safety.

Don't forget that many people would consider "responsible AI" to mean "no AI until X-risk is zero", and that any non-safety research at all is irresponsible. Particularly if any of it is made public.


Rumor already has it that the "safety" board members are all resigning to bring Altman and the profit team back. When the dust settles, does profit ever lose to safety?


Self-governance can be a useful function in large companies, because what the company/C-suite wants and what an individual product team wants may differ.

E.g. a product team incentivized to hit a KPI, so it releases a product that creates a legal liability.

Leadership may not have supported that trade-off, but they're busy with 10,000 other strategic decisions and aren't technical.

Who then pushes back on the product team? Legal. Or what will probably become the new legal for AI, a responsible AI team.


Customers. Customers are the external accountability.


Yeah, this works great on slow-burn problems. "Oh, we've been selling you cancerous particles for the last 5 years, and in another 5 years your ass is totally going to fall off. Oh, by the way, we're totally broke after shoving all of our money into foreign accounts."


Iff the customers have the requisite knowledge of what "responsible AI" should look like within a given ___domain. Sometimes you have customers whose analytical skills are so basic there's no way they're thinking about bias, which pushes the onus back onto the creator of the AI product to complete any ethical evaluations themselves (or try to train the customers?).


Almost every disaster in corporate history that ended the lives of customers was not prevented by customer external accountability.

https://arstechnica.com/health/2023/11/ai-with-90-error-rate...

Really glad to see that customer external accountability kept these old folks getting the care they needed instead of dying (please read with extremely strong sarcasm)


Maybe a better case is outsourced and empowered. What if there were a third-party company that was independent, under non-disclosure, and expert in ethics and regulatory compliance? They could be like accounting auditors, but they would look at code and features. They would maintain confidentiality, but their audit results would be public, like a seal of good AI citizenship.



