A random thought of mine: companies use AI to moderate, but malefactors could potentially train the AI to flag harmless stuff. And because of the opaque nature of neural networks, there isn't a good mechanism to undo it, except by reverting.
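
A minimal sketch of that scenario, assuming a hypothetical platform that retrains its moderation classifier on user reports (the dataset, phrases, and use of scikit-learn here are all made up for illustration, not any real moderation pipeline): coordinated false reports shift what the model flags, and the straightforward remedy is reverting to a pre-poisoning snapshot of the training data.

    # Sketch: poisoning a report-trained moderation classifier (toy example).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Legitimate training data: (text, is_abusive)
    clean_data = [
        ("you are an idiot", 1),
        ("i will find you", 1),
        ("have a nice day", 0),
        ("great article, thanks", 0),
    ]

    # Poisoned reports: attackers mass-flag a harmless phrase as abusive.
    poisoned_reports = [("good morning everyone", 1)] * 50

    def train(samples):
        texts, labels = zip(*samples)
        vec = CountVectorizer()
        model = MultinomialNB().fit(vec.fit_transform(texts), list(labels))
        return vec, model

    vec, model = train(clean_data + poisoned_reports)

    # The harmless phrase now gets flagged; short of auditing the opaque model,
    # reverting to the pre-poisoning training set is the blunt fix.
    print(model.predict(vec.transform(["good morning everyone"])))  # -> [1]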



They didn’t kill Microsoft’s Tay; they made her an auto mod.


The 4chan syndrome: turning common words into racist dogwhistles.



