Are you serious? You expect us to believe that Facebook and YouTube can’t afford to hire 50,000 content moderators each? 30,000 of them would probably be paid less than $5,000 a year.
Facebook already employs tens of thousands of human moderators. What you're really proposing is that every piece of content uploaded to any internet website should be manually reviewed and approved by another human. You don't think that's a ridiculous and unrealistic proposition that could be turned into a weapon by bad actors (just like a livestream feature)?
The fact that you would reduce my proposal to the dumbest, most brain-dead solution possible shows you aren’t really interested in discussion.
Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
Don’t you think, for example, that a fair number of video categories can be auto-filtered with extremely high accuracy?
Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
Do you really think I am proposing that they review every video in FIFO order? Or did you just want to reduce my proposal to that so you could easily dismiss it?
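To be concrete: a first-pass prioritization could be as simple as the sketch below (Python, with made-up signals and weights; nothing here is based on Facebook's or YouTube's actual systems). The point is only that reputation and report signals can reorder the review queue so the riskiest content reaches a human first.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPost:
    priority: float                      # lower = reviewed sooner
    post_id: str = field(compare=False)  # not part of the ordering

def review_priority(reputation: float, report_count: int, is_livestream: bool) -> float:
    """All weights are invented for illustration only."""
    score = reputation            # established, trusted accounts sink toward the back
    score -= 5.0 * report_count   # user reports pull a post forward
    if is_livestream:
        score -= 50.0             # live content from untrusted accounts gets near-immediate attention
    return score

queue: list[QueuedPost] = []

def enqueue(post_id: str, reputation: float, report_count: int, is_livestream: bool) -> None:
    heapq.heappush(queue, QueuedPost(review_priority(reputation, report_count, is_livestream), post_id))

# A reported livestream from a brand-new account jumps ahead of a ten-year-old account's text post.
enqueue("text-post-from-old-account", reputation=80.0, report_count=0, is_livestream=False)
enqueue("livestream-from-new-account", reputation=1.0, report_count=3, is_livestream=True)
print(heapq.heappop(queue).post_id)  # -> livestream-from-new-account
```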
> Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
What you're proposing is discrimination and isn't a solution.
> Don’t you think, for example, that a fair number of video categories can be auto-filtered with extremely high accuracy?
No, not really, but please provide some examples. I'm assuming you mean content category, not literal tagged category, in which case you can't really distinguish between someone livestreaming in their car vs. the Christchurch shooter driving up to the mosque.
> Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
Sure, but they already have many thousands of moderators in addition to the automated systems in place today. It's inevitable that, unless every post is hand-reviewed by a human, some objectionable content will slip through.
I get that your point is that FB and the other big tech companies can certainly afford to be doing more than they currently are, but to the extent that we would almost certainly be in this exact same position even if they had hired your proposed 50,000 moderators, there's not much substance to your proposal that I can even reduce.
> Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
Like rank/derank the feed based on contextual clues? No way this wasn't the first thing done.
> Don’t you think, for example, that a fair number of video categories can be auto-filtered with extremely high accuracy?
They've touted high accuracy in "using AI" for content moderation, something like 95% or 99%, i.e. using machine learning to automatically tag content (rough sketch of that routing below).
> Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
So your proposal boils down to "hire more people so more content can be filtered faster". GP perhaps didn't steelman your argument for you, but if your proposal were applied, the only real delta would be workforce size. It follows that your proposal in effect reduces to "hire more people".
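On the auto-tagging point: even a model touted as "95% or 99% accurate" still leaves a band of uncertain cases that only humans can settle, which is exactly why moderator headcount stays the bottleneck. A toy sketch of the usual routing (the classifier and the thresholds here are hypothetical, not Facebook's real pipeline):

```python
def route(violation_probability: float) -> str:
    """violation_probability: hypothetical classifier output in [0.0, 1.0]."""
    if violation_probability >= 0.98:
        return "auto_remove"         # near-certain violations are taken down automatically
    if violation_probability <= 0.02:
        return "auto_approve"        # near-certain benign content is published as-is
    return "human_review_queue"      # the uncertain middle still needs a moderator

assert route(0.995) == "auto_remove"
assert route(0.40) == "human_review_queue"
```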
This is flat out not true. FB doesn't employ tens of thousands of human moderators.
According to a three-month-old article, $FB supposedly employs 7,500 moderators [0], though it's unclear what constitutes "employment" in this case. Probably a salary of less than $5k/yr. Are there really thousands of content moderators pulling up to FBHQ in Menlo Park 5 days a week? I don't think so, no.
"By the end of 2018, in response to criticism of the prevalence of violent and exploitative content on the social network, Facebook had more than 30,000 employees working on safety and security — about half of whom were content moderators." [0]
That number has only increased in recent months and will likely continue to increase in the near future. Not all of them are designated content moderators, it's true, so technically they may only employ one or two "tens of thousands", but I'm mainly addressing the misconception in this thread that Facebook is "doing nothing." That's a significant number of moderators. Is it enough? No, but to claim or pretend that they aren't actively doing anything is disingenuous, and something I've been seeing more and more of on HN lately.
If it is not viable to actively moderate all the content, there are two conflicting concerns here:
1. Online services should maintain an acceptable legal and moral standard of the content they serve.
2. Online services should be able to profitably facilitate the communication between millions and billions of users.
Which of these concerns seems more important to you, regardless of what you believe is an acceptable legal and moral standard?
I actually think that’s perfectly realistic, and in fact it’s how forums have worked for ages. There are also automated measures such as user trust levels, where an account has to earn trust before it can do certain things like post links, embed pictures, etc. Being on the platform isn’t a right either, and it would be perfectly fair for them to rate-limit the number of public posts someone can make to an amount that’s manageable by their current moderation capacity.
Forums have managed to keep undesirable content at bay with often no budget at all, so any claim that Facebook can’t do the same is false. They don’t want to do it, because abusive content still brings them clicks and ad impressions (remember that for every nasty piece of content that turns into a PR disaster, there are thousands that go unnoticed but still generate money for them).
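To sketch what that could look like (the tiers, thresholds, and capability names below are invented, loosely inspired by how forum software such as Discourse gates privileges behind earned trust):

```python
# Illustrative only: trust tiers, thresholds, and the capacity-based rate limit are made up.
TRUST_TIERS = [
    # (min_days_active, min_approved_posts, capabilities)
    (0,  0,   {"post_text"}),
    (2,  10,  {"post_text", "post_links"}),
    (7,  50,  {"post_text", "post_links", "embed_images"}),
    (30, 200, {"post_text", "post_links", "embed_images", "go_live"}),
]

def capabilities(days_active: int, approved_posts: int) -> set[str]:
    allowed: set[str] = set()
    for min_days, min_posts, caps in TRUST_TIERS:
        if days_active >= min_days and approved_posts >= min_posts:
            allowed = caps  # tiers are ordered, so keep the highest tier reached
    return allowed

def daily_public_post_limit(reviews_per_day: int, active_posters: int) -> int:
    """Cap public posts at what the current moderation staff can actually review."""
    return max(1, reviews_per_day // max(1, active_posters))

print(capabilities(days_active=1, approved_posts=0))     # {'post_text'}
print(capabilities(days_active=45, approved_posts=300))  # full capability set, including 'go_live'
```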
What forums are those? Because this isn't the case with the forums I frequent. The boards I read / participate in tend to operate on the principle of "don't make me come over there", and everything is reactive, not proactive. I'm not sure how you'd get things like "user was banned for this post" etc otherwise...
The boards I used to frequent (about computer hardware & video-games, though both had a very active "off-topic/chat" section) had pretty good moderation coverage. It was indeed reactive, however the moderators were active members of the community and as a side-effect they were bound to see every post within 24 hours, but usually much sooner. New users were also restricted in what they could do so the potential for damage was limited even if moderators weren't active.
I don't have a problem with moderation being delayed. I have a problem with there being no moderation or clueless moderation (I have reported hundreds of obviously fake accounts or pages promoting outright illegal activity, and the majority of those got ignored).
Of course, the scale was much smaller than Facebook's, but my point is that maybe if you can't solve this problem, then you shouldn't be in a position to broadcast & promote stuff to the entire world? The danger of Facebook (& other social networks) isn't in your friends seeing your malicious post; it's that your malicious post can go "viral" and be put in front of people who never asked for it, since the ranking algorithm is based on stuff like "likes", shares, etc. (and a lot of garbage content unfortunately tends to attract likes). Those signals can also be manipulated by a determined attacker (using fake/compromised accounts to "like" the post, etc.).
At least a couple of newspaper comment sections and forums have had pre-moderation for sensitive topics.
Nothing is visible until it's been validated. Guardian comments still do this for some topics.
With all the assorted profiling going on, I'm sure that just as some users (e.g. via Tor or a VPN) are automatically given harder Google captchas, or some boards make you earn enough karma to perform certain actions, lighter, reactive moderation could be something you have to earn, and sustain. Perhaps certain interests would trigger proactive moderation too.
Not perfect, but certainly capable of taking a lot of load off human moderators.
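Roughly, something along these lines (every signal and cutoff here is made up for illustration):

```python
def moderation_mode(karma: int, uses_tor_or_vpn: bool, sensitive_topic: bool) -> str:
    """Pick how a post is handled before publication; thresholds are purely illustrative."""
    if sensitive_topic:
        return "pre_moderation"            # held until approved, Guardian-style
    if uses_tor_or_vpn or karma < 100:
        return "pre_moderation"            # low-trust or anonymized accounts haven't earned reactive treatment yet
    if karma < 1000:
        return "reactive_with_rate_limit"  # published immediately, but throttled
    return "reactive"                      # established accounts post freely, moderated after the fact

print(moderation_mode(karma=50, uses_tor_or_vpn=False, sensitive_topic=False))  # pre_moderation
```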