If you don't have the ability to control your premises, or employ enough security, perhaps you should not be permitted to operate those premises. Unsafe clubs get closed down.
They - or any other social media company - don't have a right to exist. If societies around the world conclude they do more harm than good as they currently operate, it's perfectly reasonable to require them to moderate adequately. If they cannot, they can close or withdraw from that region, and someone else may wish to try. Facebook could employ thousands more to moderate and supervise - without destroying their ability to make a profit. Pretending the problems don't exist, as all the platforms have done so far, looks increasingly untenable.
Not only am I fine with that, I think it long overdue.
> If you don't have the ability to control your premises, or employ enough security, perhaps you should not be permitted to operate those premises.
So we should shut down the mosques in New Zealand because they didn't control their premises and keep the shooter out? Should we shut down banks because they didn't have enough security to keep a bank robber out? You are shifting the blame away from the perpetrator. Every crime ever could be blamed on the victim for lack of security.
Those other businesses aren't running 40% profit margins.
It's a fair question to ask, "If a company is that profitable and having toxic effects, then why shouldn't it pay some of the cost of addressing its toxic effects?"
The tendency on HN is to say exploitive governments want to tax technology startups.
But we're not talking about technology startups. We're talking about the new incarnation of the old AT&T.
I would argue that unlike your examples — the bank that got robbed; the mosque that got shot up — Facebook perpetrated and facilitated the crime.
A better question would be: should a bank be held liable after being robbed if it failed its duties by cutting corners on security systems to save money? Yes. Should a mosque or church be held liable if it received a threat and ignored it, causing large loss of life? Yes.
Facebook is culpable. They know the kind of garbage that circulates on their platform. They continue to develop their platform and they continue to improve their ad systems. Why won’t they dedicate as much engineering effort to getting ahead of racism, hate speech and fascism? I’ll tell you why.
Because Zuckerberg simply does not care. He owns the platform that dominates information flows and public discourse. Why would he undermine that?
No need to take it to the absurd. I am NOT shifting any blame from the perpetrator.
I am placing additional blame on a medium that isn't willing to exercise adequate restraint or control.
A bank undoubtedly has security measures in place to prevent robbery; if they don't bother, they won't last long. Nonetheless, there are regulations on what may and may not be done in public spaces, how crowded they may be under fire regulations, and so forth. Neither a mosque nor a bank should be closed for a circumstance that could not reasonably have been predicted.
> Neither a mosque nor bank should be closed for a circumstance that could not reasonably have been predicted.
How is this any different from Facebook? A bank has security measures in place to prevent robbery, but a robbery could still take place in one and is more likely to occur in one because it's a high value target. Facebook has human and AI moderation in place that automatically processes anything you upload. Users are also able to report content. However, just like sometimes banks are robbed, sometimes objectionable content slips through their filters. By your own logic Facebook shouldn't be closed down because mass shootings cannot be reasonably predicted. The basis for your argument is fundamentally flawed and if you extrapolate it to your own examples you'll quickly see how ridiculous and illogical it is.
Primarily because we’ve been telling them they need to do a better job about this for years, so “could not reasonably have been predicted“ absolutely does not apply. They have literally been condemned by a United Nations investigator for what happened in Myanmar [1]. Multiple governments have been unhappy with their attitude problem [2]. In this report, one of the problems is they “continue to host and publish the mosque attack video”, which they absolutely don’t have any excuse for not being aware of, not anymore.
If this is the best that the state of the art can manage, then the state of the art is not good enough for Facebook to continue to exist. If Facebook were a human, it would have been fired for gross negligence.
It wouldn't be - if they weren't providing a broadcast platform. The local bank branch broadcasts nothing; if they do somehow broadcast your financial info, they can be prosecuted for failing their duty of care, and you'll be eligible for compensation even without prosecution.
YouTube, Instagram, Facebook and Reddit have size and reach enough that turning a blind eye is no longer credible. We've seen countless issues crop up with regard to elections, election advertising, hate speech, and in this case broadcasting a slaughter. All of which serve to demonstrate they are unwilling to self-regulate, taking them into territory where it's clear current regulation is not adequate.
Calling for adequate regulation to require moderation and oversight, and to require a duty of care to their users, viewers and listeners, is not calling for them to be closed down. That comes if they prove unwilling to obey said regulation, as do any of the other measures brought to bear against habitual lawbreakers - fines, asset seizure, blocking, etc. Regulation will vary between countries, as they will choose differing limits.
We can apply the same argument to governments for their failure to censor bad words as people walk the streets. We don't, because it's ridiculous on its face, and we don't want the blatant surveillance society that would require.
Which of those examples already have copyright-infringement detection filters built in, filters which should have been capable of preventing, e.g., re-uploads of the original livestream?
Do any of them boast about their natural language processing and engage in psychological experiments on their users to detect, e.g. suicidal tendencies?
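For reference, the core technique behind those re-upload filters is just perceptual hashing. A minimal sketch of the general idea (this is not Facebook's or YouTube's actual system; the library choice, helper names and threshold are illustrative assumptions):

    # Illustrative only: flag frames that perceptually match known-bad footage.
    # Re-encoding or mild cropping changes the file bytes but not the hash much.
    from PIL import Image
    import imagehash

    BLOCKLIST = set()     # perceptual hashes of frames from known-bad video
    MATCH_DISTANCE = 8    # Hamming-distance threshold (assumed, needs tuning)

    def register_known_bad(frame_path):
        BLOCKLIST.add(imagehash.phash(Image.open(frame_path)))

    def looks_like_reupload(frame_path):
        candidate = imagehash.phash(Image.open(frame_path))
        return any(candidate - known <= MATCH_DISTANCE for known in BLOCKLIST)

Point being: this is well-understood technology already deployed for copyright, not an unsolved research problem.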
Well, whatever HN or the news media would like you to believe, societies around the world have definitely not concluded that FB does more harm than good, and I don't see that changing anytime soon. You may think the club is unsafe, but a billion people are still attending all night and they feel perfectly comfortable.
Attendance proves nothing at all. We've no idea if they are comfortable or not. Many appear to have come to think of it as a necessary evil.
Millions of people were eating adulterated food every meal prior to food regulations. The political process and advertising were rife with corruption prior to attempts to instil a semblance of fairness by limiting what was allowed and when. Yet people still turned out to vote before this.
You're claiming simultaneously that we have no idea whether these users are comfortable or not and that most of them think of it as a necessary evil.
Attendance absolutely proves something when it comes to social media. You can't make a comparison between social media and food, because one is necessary for survival and one isn't. (I'm sure I'm going to invite a ton of comments here on how 'Facebook has become so ubiquitous and powerful it's now synonymous with survival!') If the negatives really outweighed the positives, people would be leaving Facebook in droves. It's only on HN and in the news media that this narrative is being spun.
>Attendance absolutely proves something when it comes to social media.
Yes, it proves that network effects can force people into bad equilibria. If you think high university costs are harmful for society in the long run, but you personally will incur significant cost in the short run by not attending, and nobody else is cooperating, how do you behave?
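A toy payoff model of that trap, with invented numbers (only the structure matters):

    # Everyone prefers the world where nobody uses the platform, yet whenever
    # most others are on it, each individual is still better off joining too.
    def payoff(i_use, others_use):
        if others_use:
            return 3 if i_use else 0   # opting out alone cuts you off from everyone
        return 1 if i_use else 5       # the all-abstain world is the best outcome

    print(payoff(True, True) > payoff(False, True))    # True: joining is the best response
    print(payoff(False, False) > payoff(True, True))   # True: yet all-abstain beats all-use

Individually rational, collectively stuck.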
> Attendance absolutely proves something when it comes to social media.
Not really - I would love to be able to delete my account from Facebook but it's the only place some people that I want to keep in touch with will use.
(I did the next best thing - deleted the apps and only check the website infrequently.)
It doesn't actually matter whether people are "comfortable" on Facebook or not; it matters whether they're being incited to murder people. Facebook has an effect outside its users. The Bhopal of racism.
I am claiming that your statement "and they feel perfectly comfortable" is an unknown, and further that it is highly unlikely.
Many is not most - it probably is most of my friends and relatives, but they are representative of nothing at all. Yet there is now a certain discomfort in almost all discussions of Facebook - no matter where they take place - that simply was not there 5 or so years ago.
Exactly. Generally speaking, the onus is on the userbase to react to perceived (or real) immorality by refraining from using Facebook.
However, it remains the office of government to introduce enforceable regulation in this (or any) space to protect their constituents, and to hold those in violation accountable. I suspect it's not an easy task, as the problems are broad and can be a grey area. E.g., some of the problems with Facebook stem from mischievous users -- exactly how accountable the platform is for their behavior isn't universally agreed upon.
I think if rules of engagement are put in place, people will have fewer problems with just letting the market decide.
What's the internet version of the fire marshal? If people feel the club is unsafe, a simple call to the authorities will have the marshal come out, declare it unsafe and shut it down. Can we get Fire Marshall Bill to do it?
>> Facebook could employ thousands more to moderate and supervise - without destroying their ability to make profit
You're underestimating the scale of this problem.
Facebook users create billions of new posts a day, and tens of millions of these posts get reported for moderation per day.
"Thousands more" employees isn't going to solve the problem. Assuming each post takes 10 minutes of labor to review, you would need an army of 200K individuals, and this amount of labor would cost many billions of dollars per year.
>> Pretending the problems don't exist, as all the platforms have done so far
You must be joking. Facebook has already made massive investments into moderation. It's their top priority for 2019. In many ways, they are doing the opposite of pretending this problem doesn't exist.
>> They - or any other social media company - don't have a right to exist.
Nobody is arguing that FB has an inherent right to exist. The only point GP made was that (as made evident by your comment) many people are underestimating the costs associated with manual moderation of content.
> "Thousands of more" employees isn't going to solve the problem. Assuming each posts takes 10 minutes of labor to review, you would need an army of 200K individuals, and this amount of labor would cost many billions of dollars per year.
Should all online platforms be required to do that? I can post here on HN right now and no moderator is pre-approving my content.
I could go find the shooter’s illegal manifesto and post it here. I might be banned after some period of time, but just like on Facebook it would have been visible long enough for other people to see it.
Applied consistently this standard would kill most online forums and that type of thing.
It’s a matter of scale. Even before it became larger than the largest single nation on Earth, Facebook used to make the news for stories like “1000 people show up to birthday party after teen accidentally forgets privacy settings”. Hacker News can #ahem# slashdot servers it links to, so it’s not exactly small, but it’s still peanuts compared to Facebook.
Just as software engineering practices need to change for the big names compared to everyone else, so do social practices.
They do the same thing. They worked all night to prevent the video from propagating. What they are calling for here is preventing it from being published at all - and that would require your ISP to block the content before people complain.
> The question then becomes who decides what's acceptable content, the only answer is regulations and legislation for this otherwise for-profit tech giants will be the ones who decide and they're more concerned with user-engagement than far reaching detached societal implications.
What about this: all content is potential knowledge and should be made available. That way no one gets to decide what is "acceptable content". At the end of the day, the law still forbids anyone from killing people, content or not.
When the RIAA/MPAA/etc. came after them with astronomical fines, those same companies suddenly developed the capability to block things that even smelled like copyrighted material. Whatever your opinion of the MAFIAA is, it would seem this sort of detection can be done when a large enough stick is presented.
I think “social” is the key. If you are using a carrier to facilitate private conversations then I feel you should have a right to privacy. But a forum or online group is by its nature and intent a public platform and the provider should be responsible for policing that space.
> None of these tech companies have the ability to do that
Yet they can attach an ad to it in a millisecond. They’re incentivised to do one and not the other. These bills are about rebalancing those incentives.
Are you serious? You expect us to believe that Facebook and YouTube can’t afford to hire 50,000 content regulators each? 30,000 of them would probably be paid less than $5,000 a year.
Facebook already employs tens of thousands of human moderators. What you're really proposing is that every piece of content uploaded to any internet website should be manually reviewed and approved by another human. Don't you think that's a ridiculous and unrealistic proposition that could be turned into a weapon by bad actors (just like a livestream feature)?
The fact that you would reduce my proposal to the dumbest possible, brain-dead solution shows you aren't really interested in discussion.
Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
Don’t you think, for example, that a fair number of video categories can be auto-filtered with extremely high accuracy?
Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
Do you really think I am proposing that they review every video in FIFO order? Or did you want to reduce my proposal to that so you can easily dismiss it?
> Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
What you're proposing is discrimination and isn't a solution.
> Don’t you think, for example, that a fair number of video categories can be auto-filtered with extremely high accuracy?
No, not really, but please provide some examples. I'm assuming you mean content category, not literal tagged category, in which case you can't really distinguish between someone livestreaming in their car vs. the Christchurch shooter driving up to the mosque.
> Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
Sure, but they already have many thousands of moderators in addition to automated systems in place today. It's inevitable that unless every post is hand-reviewed by a human that some objectionable content will slip through.
I get that your point is that FB and the other big tech companies can certainly afford to be doing more than they currently are, but given that we would almost certainly be in this exact same position even if they had hired your proposed 50,000 moderators, there's not much substance to your proposal for me to reduce in the first place.
> Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
Like rank/derank the feed based on contextual clues? No way this wasn't the first thing done.
> Don’t you think, for example, that a fair number of video categories can be auto-filtered with extremely high accuracy?
They've touted high accuracy in "using AI" for content moderation - 95% or 99% or something - i.e. using machine learning to automatically tag content.
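And even taking those numbers at face value, scale works against them. A rough illustration, with every figure assumed:

    uploads_per_day = 1_000_000_000   # assumed order of magnitude
    violating_fraction = 0.001        # assume 0.1% of uploads are violating
    recall = 0.99                     # the touted "99%", read generously as recall

    missed_per_day = uploads_per_day * violating_fraction * (1 - recall)
    print(int(missed_per_day))        # 10,000 violating uploads slip through per day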
> Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
So your proposal boils down to "hire more people so more content can be filtered faster". GP perhaps didn't steelman your argument for you, but if your proposal were applied, the only real delta would be workforce size. It follows that your proposal in effect reduces to "hire more people".
This is flat out not true. FB doesn't employ 10s of thousands of human moderators.
According to a three-month-old article, $FB supposedly employs 7,500 moderators [0], though it's unclear what constitutes "employment" in this case. Probably a salary of less than $5k/yr. Are there really thousands of content moderators pulling up to FBHQ in Menlo Park 5 days a week? I don't think so, no.
"By the end of 2018, in response to criticism of the prevalence of violent and exploitative content on the social network, Facebook had more than 30,000 employees working on safety and security — about half of whom were content moderators." [0]
That number has only increased in recent months and will likely continue to increase in the near future. Not all of them are designated content moderators, it's true, so technically they likely only employ one or two "10s of thousands", but I'm mainly addressing the misconception in this thread that Facebook is "doing nothing." That's a significant number of moderators. Is it enough? No, but to claim or pretend that they aren't actively doing anything is disingenuous, and something I've been seeing more and more of on HN lately.
If it is not viable to actively moderate all the content, there are two conflicting concerns here:
1. Online services should maintain an acceptable legal and moral standard of the content they serve.
2. Online services should be able to profitably facilitate the communication between millions and billions of users.
Which of these concerns seems more important to you, regardless of what you believe is an acceptable legal and moral standard?
I actually think that’s perfectly realistic, and in fact how forums have worked for ages. There are also automated measures such as user trust levels, where an account has to earn trust before it can do certain things like post links, embed pictures, etc. Being on the platform isn’t a right either, and it would be perfectly fair for them to rate-limit the number of public posts someone can make to an amount that’s manageable by their current moderation capacity.
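A minimal sketch of that trust-level idea (Discourse does something similar); the names and thresholds here are invented, not any platform's real rules:

    from dataclasses import dataclass

    @dataclass
    class Account:
        age_days: int
        approved_posts: int
        upheld_flags: int    # reports against this account that moderators confirmed

    def trust_level(a):
        if a.upheld_flags > 0 or a.age_days < 1:
            return 0         # everything goes to a pre-moderation queue
        if a.age_days < 30 or a.approved_posts < 20:
            return 1         # no links or embedded media, tight rate limit
        return 2             # normal posting, only sampled review

    def daily_post_limit(a):
        return {0: 3, 1: 20, 2: 200}[trust_level(a)]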
Forums have managed to keep undesirable content at bay, often with no budget at all, so the claim that Facebook can’t do the same doesn’t hold up. They don’t want to do it, because abusive content still brings them clicks and ad impressions (remember that for every nasty piece of content that turns into a PR disaster there are thousands that go unnoticed but still generate money for them).
What forums are those? Because this isn't the case with the forums I frequent. The boards I read / participate in tend to operate on the principle of "don't make me come over there", and everything is reactive, not proactive. I'm not sure how you'd get things like "user was banned for this post" etc otherwise...
The boards I used to frequent (about computer hardware & video games, though both had a very active "off-topic/chat" section) had pretty good moderation coverage. It was indeed reactive; however, the moderators were active members of the community, and as a side effect they were bound to see every post within 24 hours, usually much sooner. New users were also restricted in what they could do, so the potential for damage was limited even if moderators weren't active.
I don't have a problem with moderation being delayed. I have a problem with there being no moderation or clueless moderation (I have reported hundreds of obviously fake accounts or pages promoting outright illegal activity, and the majority of those got ignored).
Of course, the scale was much smaller than Facebook's, but my point is that maybe if you can't solve this problem then you shouldn't be in a position to broadcast & promote stuff to the entire world? The danger of Facebook (& other social networks) isn't in your friends seeing your malicious post; it's that your malicious post can go "viral" and be put in front of people who never asked for it, since the ranking algorithm is based on things like "likes", shares, etc. (and a lot of garbage content tends to attract likes, unfortunately), which can also be manipulated by a determined attacker (using fake/compromised accounts to "like" the post, etc).
At least a couple of newspaper comment sections and forums have had pre-moderation for sensitive topics.
Nothing is visible until it's been validated. Guardian comments still do this for some topics.
With all the assorted profiling going on, I'm sure that - just as some users (e.g. via Tor or a VPN) are automatically given harder Google captchas, and some boards make you earn enough karma to perform certain actions - reactive moderation could be something you have to earn, and sustain. Perhaps certain interests would trigger proactive moderation too.
Not perfect but certainly capable of taking much load off human moderators.
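Something like this, as a sketch; the signals and weights are made up, but the routing idea is the point:

    # Risky posters get pre-moderation (held until approved); established ones
    # get the usual reactive moderation.
    def needs_premoderation(account_age_days, karma, uses_anonymizer, sensitive_topic):
        risk = 0
        risk += 2 if account_age_days < 7 else 0
        risk += 2 if karma < 10 else 0
        risk += 1 if uses_anonymizer else 0
        risk += 2 if sensitive_topic else 0
        return risk >= 3     # threshold is arbitrary; tune to moderator capacity

    print(needs_premoderation(3, 0, False, True))        # True: hold for human review
    print(needs_premoderation(400, 5000, False, False))  # False: publish, review reactively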
If a car manufacturer were faced with a problem that left it unable to prevent accidents, would you let that manufacturer continue its business?
We do allow car companies to continue to build cars that cannot prevent accidents. Even beyond preventing accidents we as a society also allow the car companies to build cars with varying levels of safety and have a rating system for them.
We do. However, car manufacturers usually try to downplay reports of issues with their cars up to the point where they no longer can. That's when something like a recall happens, which results in a substantial financial loss for them. Obviously not every affected vehicle owner can afford (or bothers) to have their vehicle serviced, so many keep using them despite the risk - like the people who keep using social media knowing its weaknesses.
I think social media companies are at a point where they are forcefully downplaying the social issues they are actually responsible for. We've already seen state and other actors exploit social media and cause mass epidemics: anti-vaxx, white nationalism, the Rohingya crisis, etc., to name a few. Let's see how far it goes before the damage is too much.
Oh please, this is as ridiculous as blaming the manufacturer of the truck for the psychopath who used it to run people over in France. As far as I know Facebook doesn't release public numbers on this kind of thing, but their automated systems and human moderators have undoubtedly caught millions of pieces of objectionable content; you just don't hear about them. If we're making car analogies, they would probably be the safest vehicle on the road.
That sounds fine. They bought it, so they own it. If they make money from someone's work, then they are responsible for at least as much money as they made.
If social media companies exist only because they are not held responsible for their negative externalities, and actually recognizing those externalities would result in the death of said companies, then maybe they don't deserve to exist. We as a society have a really serious problem contending with companies that have massive negative externalities.
Which I think may be the appropriate answer. It's unclear that the net benefit to society is greater with these social media companies than without them. If anything, the data we have so far suggests it's a net negative.