- Supporting Wikipedia’s moderators and patrollers with AI-assisted workflows that automate tedious tasks in support of knowledge integrity;
so, a chatbot on top of some tooling probably
- Giving Wikipedia’s editors time back by improving the discoverability of information on Wikipedia to leave more time for human deliberation, judgment, and consensus building;
extremely vague, but probably the "train a specialized ai on our own corpus to answer questions about it" style bot helper? these make stuff up left and right too
- Helping editors share local perspectives or context by automating the translation and adaptation of common topics;
automated translations would be a big step in the wrong direction
- Scaling the onboarding of new Wikipedia volunteers with guided mentorship.
you can't do "mentorship" with ai in any real way. all in all it seems like a box-checking exercise for their ML officer.
must say i wasn't impressed with the interview coder code or its organization. skimmed another of his repos and it was basically a minimal chat app wrapping a dozen-ish prompts along the lines of "you are the user's horny anime wife. chat horny style to them. but not tooo horny."
First rule of understanding: you can never understand that which you don't want to understand.
That's why lying is so destructive to both our own development and that of our societies. Whether it's deliberate or accidental doesn't matter: it poisons the infoscape either way, and poison is poison.
And lies to oneself are the most insidious lies of all.
something tells me that this pathetic messaging approach is not going to be the one that squares the circle between "piracy is illegal" and "information wants to be free"