"[They] allow the live streaming of suicides, rapes, and murders, continue to host and publish the mosque attack video, allow advertisers to target ‘Jew haters’ and other hateful market segments, and refuse to accept any responsibility for any content or harm."
I stopped reading after this point. Facebook has no clue what kind of stuff gets posted on their platform, and their AI isn't powerful enough to detect it. The idea that Facebook is causing/enabling such behaviors is ridiculous. It's a big claim to make with no quantitative evidence. It's similar to when Marilyn Manson got blamed for Columbine, or GTA/violent video games got blamed for making kids violent.
Facebook is morally bankrupt, but not for these sensationalist reasons.
Facebook has "no clue what kind of stuff gets posted" because they conveniently have decided to have no clue.
Facebook enjoys the protection of its IPR and the right to sell advertising overseas in places like New Zealand or Europe. Why should New Zealand or Europe tolerate its business model of close to zero oversight of content? Facebook is not a startup. It's a huge company making billions showing ads to billions.
In the market of digital services, Europe is as good as non-existent. Maybe I can come up with one or two names in the top 20 of the most successful software enterprises.
And those are probably already dinosaurs compared to most recent developments on that market.
Europe is exceptionally bad at creating its own services. Maybe because it is just too convenient to use American ones, or because there isn't really any venture capital available here. But the fact remains.
Subjectively, it seems we often also lack the openness to new ideas compared to America, especially among older generations.
Europe doesn't even know the significance of freedom right now. Making the right decision often matters less than owning the decision yourself. Americans seem to get that. Most of the time, at least.
You forget that Europe was ruined by WW2. Its dependence on the US (militarily and technologically) has always been part of the plan.
Europe is also very fragmented, and there was hope the EU could fix that, but Brexit and other internal and external factors (e.g. Russia, China, Trump's policies) seem to oppose a cohesive, united Europe for obvious reasons.
It has little to nothing to do with "openness" for new ideas. It's more about a chain of unfortunate events.
What do Huawei's dominance in 5G and its ban have to do with "openness" or new ideas?
New Zealand has a senior government position non-ironically named "Chief Censor." They don't even pretend to have free speech, and certainly don't get to lecture anyone on moral bankruptcy.
If there were an existing AI solution anywhere in the world capable of detecting a suicide, I'd agree - but there's nothing even close to that at the moment. You can't blame a company for not using something that's probably not even technologically possible to make reliably today. And what if it's not "morally scalable", you shut it down? You break it up by force like was done with the Bell System? Would having "Face corp." and "Book corp." really solve any of these issues, which are fundamentally a problem of human nature?
> If there were an existing AI solution anywhere in the world capable of detecting a suicide, I'd agree - but there's nothing even close to that at the moment.
Should we let the technical infeasibility of them profitably solving a problem that they themselves have created be our moral compass?
> You can't blame a company for not using something that's probably not even technologically possible to make reliably today.
The point IMO is not to assign blame. It's to create legislation for the betterment of society. You can agree or disagree that it would be better, but don't reduce it to a blame game.
> And what if it's not "morally scalable", you shut it down? You break it up by force like was done with the Bell System? Would having "Face corp." and "Book corp." really solve any of these issues, which are fundamentally a problem of human nature?
As I said, as far as I'm concerned, that's their problem. You implement the legal framework necessary to uphold a desirable moral standard and let that have its effect on the market. In those terms it's irrelevant how Facebook fares.
And no, this is not fundamentally a problem of human nature. Social networks with millions to billions of users have only been a problem for a brief moment of human history. Much like other problems throughout history, it might go away some day. Not by itself, but by systematically working on improving the human condition.
If they don't know what's being posted, maybe they should change how their platform works? If a normal business behaved like this and then just shrugged their shoulders, no one would stand for it. A traditional broadcaster would have their licence revoked. A print publisher would be in serious trouble. But when it happens online, somehow it's someone else's problem. This seems odd to me.
> If a normal business behaved like this and then just shrugged their shoulders, no one would stand for it
That's not quite true. No one blames the US postal service for death threats sent in the mail. Facebook is not the USPS, but it's not a traditional publisher either — and just like the USPS it simply couldn't work if you wanted the same level of editorial oversight as a publisher.
The thing that makes Facebook different from the postal service is that Facebook provides an amplifier.
You can send one message and it'll be received by many.
Facebook may not have to take responsibility for users' content or wrangle with issues such as free speech, but they must take responsibility for what and how they choose to amplify and recommend that content to others.
Traditionally this is a power that has only been held by broadcasters. Broadcasters have things like time delays and "dump switches" when they are conducting live shows. The time delays are mostly used to insert the "bleeps" for bad language. The "dump switch" is a more brutal approach that allows them to avoid broadcasting something that is going terribly wrong (see the sketch at the end of this comment).
Broadcasters also have "watersheds" that place time constraints on when certain types of things can be broadcast and broadly define who the expected audiences are.
In this case, Facebook are acting more like a broadcaster than a publisher or message conveyer. Because the audience can be more tightly controlled than a regular broadcaster, there is a case to be made for making the responsibility bar higher, not lower, so that inappropriate content cannot be deliberately targeted at vulnerable people.
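That delay-plus-dump-switch mechanism is essentially a bounded buffer. A minimal Python sketch of the concept (frame counts and class names are invented for illustration; real broadcast chains are obviously not built like this):

```python
from collections import deque

class DelayLine:
    """Broadcast-style delay: frames go in live and come out N frames later.
    Hitting dump() discards everything buffered so it never airs."""
    def __init__(self, delay_frames: int):
        self.buf = deque()
        self.delay = delay_frames

    def push(self, frame):
        """Feed one live frame; return the frame to air now, or None."""
        self.buf.append(frame)
        if len(self.buf) > self.delay:
            return self.buf.popleft()  # oldest frame finally airs
        return None                    # still filling the delay window

    def dump(self):
        """The 'dump switch': nothing currently buffered ever airs."""
        self.buf.clear()

line = DelayLine(delay_frames=3)
for f in ["a", "b", "c", "d"]:
    aired = line.push(f)   # only "a" airs, on the 4th push
line.dump()                # "b", "c", "d" are dropped before airing
```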
<stupid pushing the analogy too far reply>
If US Post opened and scanned all the letters, and offered to sell targeted firearm advertising to people who send death threats, nobody would say "That's OK, they're just 'The Platform'"...
Then maybe it shouldn't behave like one. FB actively selects articles to show in your feed - via algorithm instead of via a human, but the effect is the same.
That is a choice they could easily undo, and one a large number of people would be happy to see undone. ("I just want my feed ordered by date, no filtering" is a very common request.) The only reason that doesn't happen is that the enragement metrics go up if you select what you present to people.
So FB is deliberately choosing articles to show you, and it is making that selection for monetary gain. How is that not a publisher?
I'm not sure that the USPS parallel works completely, but I do get your point.
I agree that it couldn't work in the way it currently does, but that's what I'm getting at - that maybe it shouldn't be able to work in the way that it does presently. I think that it is much more like a broadcaster than a 'platform', but I know not everyone agrees with that.
On the other hand the USPS doesn't turn your letters into public announcements. They don't decide which letters you see and which you don't. They don't decide what ads go with it. The analogy is fundamentally flawed.
Exactly this. I work in C2C e-commerce which in my jurisdiction has very strict laws around ensuring we do due diligence on the people we allow onto our platform. If we didn't we would be finished. The only way to get Facebook to start taking this more seriously is through more regulation.
It'd also be nice if individuals were held responsible for death threats and rape threats they post on-line.
Facebook are morally corrupt because they aid and abet morally corrupt individuals. Both Facebook and those individuals should be held to account for that.
Not saying you personally do this, but it seems every time Europe prosecutes someone for threats of violence on twitter there are the usual fools here shouting "it's just free speech!"
Yeah. Most people shouting "free speech" are fully aware that shouting "Fire! Fire!" in a cinema might be "free speech" but is certainly not free of consequences.
I'm pretty sure "the usual fools" turn out to be complete hypocrites when it's their girlfriend/sister/mother getting rape threats too...
And there's no "1st Amendment to the US constitution" right to "free speech" in most of the world. If your Nazi hate speech ends up in Germany, both you and Facebook will be held to account for that no matter how many Ayn Rand books you've read or how dedicated to ethics in video game journalism you are.
As much as the mainstream here/on reddit is afraid of Europe 'breaking' the internet, I'm kinda excited. The interesting stuff will remain underground/out of the mainstream, and these 'scale monsters' in US social media might no longer be able to operate the way they have been, pushing all the negative externalities onto society even if we don't use them.
For those of us outside the U.S. that don't subscribe to their view on free speech... it's not a negative.
In Sweden a bunch of publishers are using the same defence as FB right now. They are being investigated for automatically republishing slander originating from a company similar to AP.
If they win their case I do not see why FB should be held to a higher standard.
> Facebook has no clue what kind of stuff gets posted on their platform, and their AI isn't powerful enough to detect it. The idea that Facebook is causing/enabling such behaviors is ridiculous.
I contend that as a society we should hold these massive tech orgs (who have more power than most governments) to the standard of moderating the content that's posted to their platforms. They have the money and profits to hire human moderators; the 'dumb pipe' defense doesn't fly for me anymore.
Twitter/Reddit should face harsh penalties for failing to moderate and remove content that encourages and indoctrinates extremists. I'm not talking about anything that's borderline, I'm talking about pure unfiltered hate speech that directly encourages violence.
The question then becomes who decides what's acceptable content. The only answer is regulation and legislation; otherwise for-profit tech giants will be the ones who decide, and they're more concerned with user engagement than with far-reaching societal implications.
Why are you implying that Facebook isn't actively moderating their platforms or that governments would be able to do a better job? Facebook has hired thousands of human moderators in the past few years, significantly more than Google and Twitter. They've also invested significantly in improving their AI detection systems. That doesn't mean there isn't work to do still, but you're insinuating that they aren't actively moderating their platforms and that simply isn't true. Furthermore, the idea that harsh penalties and regulations are going to prevent psychopaths from broadcasting their killing spree is ridiculous. Facebook doesn't need an incentive to prevent mass shootings from being broadcast on its platform, and punishing them with monetary fines or regulations would only hurt their ability to do so.
Haha, psychopaths don't have to publish on TV or in newspapers because those do it for them. Go back a decade or two before Facebook was a big deal and you'd see CNN, NYTimes, NBC, etc. all feverishly covering Columbine, spending weeks profiling the shooters and every detail of their lives, broadcasting bar graphs and rankings of mass murders like it's a video game scoreboard, and openly publishing their manifestos and names for all to read and be influenced by. This is despite the well-studied phenomenon of media coverage inspiring copycat killers and terrorists, and yet none of these publishing companies are held liable for their part in encouraging and incentivizing these horrible events.
All of this still happens today. Just two weeks ago I read an article in the NYTimes reporting on how NZ's prime minister asked people not to use or spread the shooter's name. In the same article the NYTimes revealed it multiple times. The only frames from the video I've seen are the ones the Washington Post used: the shooter's face blown up as the huge cover photo for the story. It just happens that now the internet is the biggest publication platform instead of television or print, so it naturally attracts psychopaths seeking the greatest impact.
The reality is that none of these publishing platforms are held liable and the media and HN don't want the internet to be held liable, either--they want FB to be held liable because they don't like FB. No one here is screaming for YouTube's heads after they failed to prevent millions of copies of the videos from being uploaded in the immediate aftermath of the shooting.
There's a huge difference between covering terrible events as a news story and providing the platform on which content engineered to divide and radicalize proliferates to millions. Also, yes, YouTube should be liable: not for the video of the NZ shooting being re-uploaded, but for the thousands of hours of hateful, violent propaganda they allow to spread on their platform, even facilitating it by suggesting it to users with their algorithm.
Drawing parallels with mediums like TV/film is problematic because there's never been anything like the internet in human history. That being said, a TV example of the kind of content that shouldn't be legal: 30 minute produced broadcast dedicated to sharing fake crime stats about Muslims and encouraging viewers to organize violent attacks on their local mosques - this is what goes on the internet. Users from 8chan were (and still are) encouraging and validating the NZ shooter. We have specific incidents and shooters we can point to now. Elliot Rodger (the Santa Barbara shooter) was an active redditor and incel and had his extreme beliefs both validated and reinforced by that community before he took action, ending innocent lives. One could make a case that without this wide-ranging communal support these psychotic individuals wouldn't have been emboldened to act on their hateful beliefs. Nothing even close to this is broadcast to a wide audience anywhere but the internet, and it shouldn't be permitted on the internet either.
Relevant comment from HN in 2017:
taurath on July 19, 2017:
If 20 people were to stand up on a soapbox with a megaphone in times square screaming about /r/redpill, /r/fatpeoplehate, etc concepts they would be removed if legal, and if not legal a huge countermovement would appear to try to force them out. On Reddit you get both the megaphone and the safe space, but are still just as easily accessible to the public as anywhere.
It's "real" freedom of information, without many of the mechanisms that larger society uses to fight back against it. Instead it is just ignored and left to fester and grow until it pops into the public forum at the point where huge efforts are required to fight it.
> 20 people were to stand up on a soapbox with a megaphone in times square screaming about /r/redpill, /r/fatpeoplehate, etc concepts they would be removed if legal, and if not legal a huge countermovement would appear to try to force them out
I think you badly underestimate the apathy towards this stuff. Perhaps a better comparison would be the anti-abortion protestors, who can be pretty extreme.
> 30 minute produced broadcast dedicated to sharing fake crime stats about Muslims
The Times is today running a series of highly misleading selectively quoted articles about gender identity services, presumably with the intent of causing violence against and suicide among trans people.
>If 20 people were to stand up on a soapbox with a megaphone in times square screaming about /r/redpill, /r/fatpeoplehate, etc concepts they would be removed if legal, and if not legal a huge countermovement would appear to try to force them out. On Reddit you get both the megaphone and the safe space, but are still just as easily accessible to the public as anywhere.
I wouldn't be so sure. People would probably just disregard the crazy screaming r/redpill or r/fatpeoplehate preachers, just like they disregard the Black Hebrew Israelites.
It's not terribly unusual to see people preaching crazy hateful shit on the street; it is unusual for anyone to actually care, though.
It amazes me that RedPill gets such hate. It's just Cosmo for men: mediocre life and dating advice, with a nugget of truth. Here are the current top posts.
- "Sexual Selection and Existential Fear": about disregarding the notion that men always want sex and women do the choosing. In other words, self respect.
- "How to Structure Your Day to Maximize Productivity." This could be plucked from a magazine.
- "Practice what you learn." Pretty much what it says, basic life advice about persistence.
- "Day game works--get out there and approach" If you want to date women, approach them and make conversation. Duh.
- "Be thoroughly logical about what you want from women and what you're prepared to pay for it" Again, have some self respect and don't roll over.
This is apparently so threatening that it must be quarantined. How fragile must women be that giving men some basic sanity checking, and telling them not to worship every vagina on sight, is a threat to society and must be hysterically misrepresented and demonized at every turn.
The sad part is, red pill is probably the first time in these guys' lives that someone has given them a script that can actually succeed. It's not great, it misses a lot of nuance, but it's a start. What do you want instead, incels who resent the world and think they are literally incapable of being loved, so they shouldn't try?
Actually yes, that's exactly what people seem to want. For the undesirables to go away, and for sociable people to continue to enjoy the privilege of being sufficiently attractive and sufficiently high class that useless feel good advice like "just be yourself!" is all anyone will ever need. And then calling that empathy.
To clarify, you're responding to a comment I quoted from another user.
The issue is that the internet makes it possible to silence all dissent and create self-reinforcing echo chambers. The forces at play are not stupid, and they target disenfranchised, lonely young males full of angst.
Imagine a packed hall with hundreds of people loudly cheering for someone calling for genocide of X group, this is what these hate clusters look like and if we saw it in the real world no one would say "This is an acceptable cost of free speech"
There exists speech that has no redeeming qualities and serves only to incite violence and hate. To argue, as many commenters here have, that outlawing this would lead down a slippery slope of authoritarian government censorship is simply ridiculous. We've outlawed child pornography and that hasn't led us down any slippery slopes.
>Imagine a packed hall with hundreds of people loudly cheering for someone calling for genocide of X group, this is what these hate clusters look like and if we saw it in the real world no one would say "This is an acceptable cost of free speech"
Do nazis not have conferences? I find it hard to believe that this doesn’t happen.
>We've outlawed child pornography and that hasn't led us down any slippery slopes.
Has it not? Many jurisdictions outlaw even computer generated porn depicting minors. We’ve essentially banned people of certain sexual orientation from consuming any porn at all, how is that not a slippery slope?
But hey, I guess lots of people want to live in a world where perverts go to prison for jerking off to anime girls.
The internet isn't exempted. You're just looking at the wrong party for liability. The people who actually put up those live streams are liable, and abusing facebook's service.
Same as TV and newspapers; you don't sue Comcast or the guy who installed your satellite dish, you sue the TV station; you don't sue the paper boy, you sue the newspaper.
>The people who actually put up those live streams are liable
Yet we insist on allowing anonymity on the net.
Either you know exactly who is doing what and can then make them liable for any consequences that follow from their actions OR you allow anonymity and live with the consequences of that decision as dark as these may be.
One of the primary reasons anonymity is important, is to enable critics of oppressive regimes to voice their opinion.
However, enabling complete anonymity everywhere might not be necessary for that; if you can get someone else, in a different country, to take responsibility for it, that might work too. That's similar to how journalists keep their sources secret, or how wikileaks works.
Stuff that's important will still get published as long as you can get someone else to recognise its importance. But who's going to take responsibility to publish your child porn or snuff movie in their own name?
Facebook, Youtube or whichever platform is the broadcaster in this instance.
If they're not, why do we sue the TV station rather than the production company of the programme? CBS can yell "don't look at us, we didn't make Game of Thrones".
It's their exception that is looking increasingly absurd.
We don't sue the company who owns the radio tower if it's terrestrial TV, or the hosting company if it's on-demand, or the bandwidth provider if it's cable. The important bit is who makes the decision to broadcast something; in the case of a TV show it's CBS, and in the case of a live stream on Facebook that's the user.
I think a potential solution would be to delay livestreams by 20 minutes for anyone who doesn't have a direct and established connection (e.g. following for more than 24 hours) to the broadcaster, to give platforms time to censor the worst stuff. Obviously this would make platforms responsible for effective and fast moderation though...
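A minimal sketch of that gating rule (the names, thresholds, and data model here are all invented for illustration, not any real platform's API):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict

DELAY = timedelta(minutes=20)      # moderation buffer for unknown viewers (assumed)
ESTABLISHED = timedelta(hours=24)  # a follow must be at least this old to count

@dataclass
class Viewer:
    # broadcaster id -> when this viewer started following them
    follows: Dict[str, datetime] = field(default_factory=dict)

def stream_delay(viewer: Viewer, broadcaster_id: str, now: datetime) -> timedelta:
    """How far behind live this viewer's copy of the stream should run."""
    since = viewer.follows.get(broadcaster_id)
    if since is not None and now - since >= ESTABLISHED:
        return timedelta(0)  # established follower: watch live
    return DELAY             # everyone else sees a delayed feed

# A day-old follower watches live; a stranger waits out the delay.
now = datetime(2019, 4, 8, 12, 0)
fan = Viewer(follows={"b1": now - timedelta(days=2)})
stranger = Viewer()
assert stream_delay(fan, "b1", now) == timedelta(0)
assert stream_delay(stranger, "b1", now) == DELAY
```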
"in the case of a live stream on Facebook that's the user"
You need to argue that rather than just state it. I can see arguments for and against the proposition, but the facts that Facebook attaches advertising, sometimes removes content, carefully tunes their platform to maximise "engagement", and the user has no direct relationship with the receivers of the live stream, do rather suggest that Facebook is the publisher and the role of the user is more like that of someone who writes a letter to the editor that gets published in a newspaper - except, of course, that Facebook is publishing nearly everything, but I don't see that that's a fundamental difference.
Isn't this letting Facebook and Comcast have their cake and eat it, too? Giving them full authority to censor whatever they want, but still not holding them liable for what they host?
47 U.S.C. § 230(c)(1): "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
And when it is, the TV programs (where I grew up) need a delay in the loop so they can beep-out rude words if it’s before 21:00. Even mere expletives, let alone directed personal threats of violence, get censored on TV and radio.
If you don't have the ability to control your premises, or employ enough security, perhaps you should not be permitted to operate those premises. Unsafe clubs get closed down.
They - or any other social media company - don't have a right to exist. If societies around the world conclude they do more harm than good as they currently operate, it's perfectly reasonable to require them to moderate adequately. If they cannot they can close, or withdraw from that region, and someone else may wish to try. Facebook could employ thousands more to moderate and supervise - without destroying their ability to make profit. Pretending the problems don't exist, as all the platforms have done so far, looks like it's not going to be tenable for much longer.
Not only am I fine with that, I think it long overdue.
If you don't have the ability to control your premises, or employ enough security, perhaps you should not be permitted to operate those premises.
So we should shut down the mosques in New Zealand because they didn't control their premises and keep the shooter out? Should we shut down banks because they didn't have enough security to keep a bank robber out? You are shifting the blame away from the perpetrator. Every crime ever could be blamed on the victim for lack of security.
Those other businesses aren't running 40% profit margins.
It's a fair question to ask, "If a company is that profitable and having toxic effects, then why shouldn't it pay some of the cost of addressing its toxic effects?"
The tendency on HN is to say exploitive governments want to tax technology startups.
But we're not talking about technology startups. We're talking about the new old-old AT&T.
I would argue that unlike your examples — the bank that got robbed; the mosque that got shot up — Facebook perpetrated and facilitated the crime.
A better question would be: should a bank be held liable after being robbed if it failed its duties by designing cheap systems to save money? Yes. Should a mosque or church be held liable if they received a threat and ignored it, causing large loss of life? Yes.
Facebook is culpable. They know the kind of garbage that floats around their platform. They continue to develop their platform and they continue to improve their ad systems. Why won't they dedicate as much engineering effort to getting ahead of racism/hate speech/fascism? I'll tell you why.
Because Zuckerberg simply does not care. He owns the platform that dominates information flows and public discourse. Why would he undermine that?
Let's take it to the absurd. I am NOT shifting any blame from the perpetrator.
I am placing additional blame on a medium that isn't willing to exercise adequate restraint or control.
A bank undoubtedly has security measures in place to prevent robbery; if they don't bother, they won't last long. Nonetheless, there are regulations on what may and may not be done in public spaces, how crowded they may be to pass fire regulations, and so forth. Neither a mosque nor bank should be closed for a circumstance that could not reasonably have been predicted.
> Neither a mosque nor bank should be closed for a circumstance that could not reasonably have been predicted.
How is this any different from Facebook? A bank has security measures in place to prevent robbery, but a robbery could still take place in one and is more likely to occur in one because it's a high value target. Facebook has human and AI moderation in place that automatically processes anything you upload. Users are also able to report content. However, just like sometimes banks are robbed, sometimes objectionable content slips through their filters. By your own logic Facebook shouldn't be closed down because mass shootings cannot be reasonably predicted. The basis for your argument is fundamentally flawed and if you extrapolate it to your own examples you'll quickly see how ridiculous and illogical it is.
Primarily because we've been telling them they need to do a better job about this for years, so "could not reasonably have been predicted" absolutely does not apply. They have literally been condemned by a United Nations investigator for what happened in Myanmar [1]. Multiple governments have been unhappy with their attitude problem [2]. In this report, one of the problems is that they "continue to host and publish the mosque attack video", which they have no excuse whatsoever for not being aware of, not any more.
If this is the best that the state of the art can manage, then the state of the art is not good enough for Facebook to continue to exist. If Facebook were a human, it would have been fired for gross negligence.
It wouldn't be - if they were not providing a broadcast platform. The local bank branch broadcasts nada; if they do somehow broadcast your financial info they can be prosecuted for failing their duty of care, and you'll be eligible for compensation even without prosecution.
Youtube, Instagram, Facebook and Reddit have size and reach enough that turning a blind eye is no longer credible. We've seen countless issues crop up with regard to elections, election advertising, hate speech, and in this case broadcasting a slaughter. All of which serve to demonstrate they are unwilling to self-regulate, taking them into territory where it's clear current regulation is not adequate.
Calling for adequate regulation to require moderation and oversight, and to require a duty of care to their users, viewers and listeners is not calling for them to be closed down. That comes if they prove unwilling to obey said regulation, as does any of the other measures brought to bear against habitual law breakers - fines, asset seizure, blocking etc. Regulation that will vary between different countries as they will choose differing limits.
We can apply the same argument to governments and their lack of censorship of bad words as people walk the streets. We don't, because it's ridiculous on its face, and because we don't want the blatant surveillance society that would require.
Which of those examples already have copyright infringement detection filters built in which should have been capable of preventing, e.g. re-uploading of the original livestream?
Do any of them boast about their natural language processing and engage in psychological experiments on their users to detect, e.g. suicidal tendencies?
Well as much as HN or the news media would like you to believe, societies around the world have definitely not concluded that FB does more harm than good and I don't see that changing anytime soon. You may think the club is unsafe, but a billion people are still attending all night and they feel perfectly comfortable.
Attendance proves nothing at all. We've no idea if they are comfortable or not. Many appear to have come to think of it as a necessary evil.
Millions of people were eating adulterated food at every meal prior to food regulations. The political process and advertising were rife with corruption prior to attempts to instil a semblance of fairness by limiting what was allowed and when. Yet people still voted before this.
You're claiming simultaneously that we have no idea whether these users are comfortable or not and that most of them think of it as a necessary evil.
Attendance absolutely proves something when it comes to social media. You can't make a comparison between social media and food because one is necessary for survival and one isn't (I'm sure I'm going to invite a ton of comments here on how 'Facebook has become so ubiquitous and powerful it's now synonymous with survival!') If the negatives really outweighed the positives people would be leaving Facebook by the droves. It's only on HN and in the news media that this narrative is being spun.
>Attendance absolutely proves something when it comes to social media.
Yes, it proves that network effects can force people into bad equilibria. If you think high university costs are harmful for society in the long run, but you personally would incur significant cost in the short run by not attending, and nobody else is cooperating, how do you behave?
> Attendance absolutely proves something when it comes to social media.
Not really - I would love to be able to delete my account from Facebook but it's the only place some people that I want to keep in touch with will use.
(I did the next best thing - deleted the apps and only check the website infrequently.)
It doesn't actually matter whether people are "comfortable" on Facebook or not, it matters whether they're being incited to murder people. Facebook has an effect outside its users. The Bhopal of racism.
I am claiming your statement "and they feel perfectly comfortable" is unknown. Further that it is highly unlikely.
Many is not most - it probably is most of my friends and relatives, but they are representative of nothing at all. Yet there is now a certain discomfort in almost all discussions of Facebook - no matter where that takes place - that simply was not there 5 or so years ago.
Exactly. Generally speaking, the onus is on the userbase to react to perceived (or real) immorality by refraining from using Facebook.
However, it remains the office of government to introduce enforceable regulation in this (or any) space to protect their constituents, and hold those in violation accountable. I don't suspect it's an easy task, as the problems are broad and can be a grey area. I.e, some of the problems with Facebook stem from mischievous users -- exactly how accountable the platform is for their behavior isn't universally agreed upon.
I think if rules of engagement happen, people will have fewer problems with just letting the market decide.
What's the internet version of the fire marshal? If people feel the club is unsafe, a simple call to the authorities will have the marshal come out, declare it unsafe and shut it down. Can we get Fire Marshall Bill to do it?
>> Facebook could employ thousands more to moderate and supervise - without destroying their ability to make profit
You're underestimating the scale of this problem.
Facebook users create billions of new posts a day, and tens of millions of these posts get reported for moderation per day.
"Thousands more" employees isn't going to solve the problem. Assuming each post takes 10 minutes of labor to review, you would need an army of 200K individuals, and this amount of labor would cost many billions of dollars per year.
>> Pretending the problems don't exist, as all the platforms have done so far
You must be joking. Facebook has already made massive investments into moderation. It's their top priority for 2019. In many ways, they are doing the opposite of pretending this problem doesn't exist.
>> They - or any other social media company - don't have a right to exist.
Nobody is arguing that FB has an inherent right to exist. The only point GP made was that (as made evident by your comment) many people are underestimating the associated costs with manual moderation of content.
> "Thousands of more" employees isn't going to solve the problem. Assuming each posts takes 10 minutes of labor to review, you would need an army of 200K individuals, and this amount of labor would cost many billions of dollars per year.
Should all online platforms be required to do that? I can post here on HN right now and no moderator is pre-approving my content.
I could go find the shooter's illegal manifesto and post it here; I might be banned after some period of time, but just like on Facebook it would have had time for other people to see it.
Applied consistently this standard would kill most online forums and that type of thing.
It’s a matter of scale. Even before they became larger than the largest single nation on Earth, Facebook used to get in the news for stories like “1000 people show up to birthday party after teen accidentally forgets privacy settings”. Hacker News can #ahem# slashdot servers it links to, so it’s not exactly small, but it’s still peanuts compared to Facebook.
Just as software engineering practices need to change for the big names compared to everyone else, so do social practices.
They do the same thing. They worked all night to prevent the video from propagating. What they are calling for here is preventing it from being published at all. So this would require your ISP to block the content before people complain.
> The question then becomes who decides what's acceptable content. The only answer is regulation and legislation; otherwise for-profit tech giants will be the ones who decide, and they're more concerned with user engagement than with far-reaching societal implications.
What about this: all content is potential knowledge and should be made available. That way no one gets to decide what is "acceptable content". At the end of the day, the laws still forbid anyone from killing people, content or not.
When the RIAA/MPAA/etc. came after them with astronomical fines, those same companies suddenly created the capability to block things that even smelled like copyrighted material. Whatever your opinion of the MAFIAA is, it would seem this sort of detection can be done when a large enough stick is presented.
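For reference, that kind of detection usually relies on perceptual fingerprints rather than exact file hashes, so a re-encoded or slightly cropped upload still matches. A toy sketch of the idea in Python (an "average hash" over one grayscale frame; real systems like Content ID are far more robust, and the match threshold below is made up):

```python
def average_hash(frame, grid=8):
    """Toy perceptual hash: downsample a grayscale frame (a 2D list of
    pixel brightnesses) to grid x grid, then record which cells are
    brighter than the mean. Re-encoding barely changes these bits,
    whereas any re-encode would break a cryptographic hash."""
    h, w = len(frame), len(frame[0])
    cells = [frame[r * h // grid][c * w // grid]  # crude nearest-pixel downsample
             for r in range(grid) for c in range(grid)]
    mean = sum(cells) / len(cells)
    return sum(1 << i for i, v in enumerate(cells) if v > mean)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def is_reupload(frame, banned_hashes, threshold=6):
    """Match = small Hamming distance to any fingerprint of a banned video.
    Real systems hash many frames and tune the threshold carefully."""
    return any(hamming(average_hash(frame), h) <= threshold for h in banned_hashes)
```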
I think “social” is the key. If you are using a carrier to facilitate private conversations then I feel you should have a right to privacy. But a forum or online group is by its nature and intent a public platform and the provider should be responsible for policing that space.
> None of these tech companies have the ability to do that
Yet they can attach an ad to it in a millisecond. They’re incentivised to do one and not the other. These bills are about rebalancing those incentives.
Are you serious? You expect us to believe that Facebook and Youtube can't afford to hire 50,000 content regulators each? 30,000 of them would probably be paid less than $5,000 a year.
Facebook already employs tens of thousands of human moderators. What you're really proposing is that every piece of content that's uploaded to any internet website should be manually reviewed and approved by another human. You don't think that's a ridiculous and unrealistic proposition that could be turned into a weapon by bad actors (just like a livestream feature?)?
The fact that you would reduce my proposal to its dumbest possible form shows you aren't really interested in discussion.
Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
Don’t you think, for example, that a fair amount of videos categories can be auto filtered with extremely high accuracy?
Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
Do you really think I am proposing that they review every video in FIFO order? Or did you want to reduce my proposal to that so you could easily dismiss it?
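To make the first point concrete, "prioritize by reputation/metrics" just means a risk-scored review queue instead of FIFO. A sketch with invented weights:

```python
import heapq

def risk_score(post: dict) -> int:
    """Invented weights: heavily reported posts from new accounts jump
    the queue; old, unreported accounts sink toward the back."""
    score = post["reports"] * 10
    if post["account_age_days"] < 7:
        score += 50
    if post["is_live"]:
        score += 30  # live streams are the most time-sensitive
    return score

def build_queue(posts):
    # heapq is a min-heap, so negate the score: highest risk pops first
    q = [(-risk_score(p), i, p) for i, p in enumerate(posts)]
    heapq.heapify(q)
    return q

posts = [
    {"id": 1, "reports": 0,  "account_age_days": 900, "is_live": False},
    {"id": 2, "reports": 12, "account_age_days": 2,   "is_live": True},
]
queue = build_queue(posts)
print(heapq.heappop(queue)[2]["id"])  # -> 2: the risky live stream is reviewed first
```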
> Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
What you're proposing is discrimination and isn't a solution.
> Don’t you think, for example, that a fair amount of videos categories can be auto filtered with extremely high accuracy?
No, not really, but please provide some examples. I'm assuming you mean content category, not literal tagged category, in which case you can't really distinguish between someone livestreaming in their car vs. the Christchurch shooter driving up to the mosque.
> Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
Sure, but they already have many thousands of moderators in addition to automated systems in place today. It's inevitable that unless every post is hand-reviewed by a human that some objectionable content will slip through.
I get that your point is that FB and the other big tech companies can certainly afford to be doing more than they currently are, but to the extent that we would almost certainly be in this exact same position even if they had hired your proposed 50,000 moderators, there's not much substance to your proposal that I can even reduce.
> Don’t you think they would, for example, be able to prioritize content based on reputation/hidden metrics?
Like rank/derank the feed based on contextual clues? No way this wasn't the first thing done.
> Don’t you think, for example, that a fair amount of videos categories can be auto filtered with extremely high accuracy?
They've touted high accuracy in "using AI" for content moderation, like 95% or 99% or something. i.e. using machine learning to automatically tag content.
> Don’t you think, for example, that if they had enough content moderators they would have been able to stop the stream and to take down this video faster?
So your proposal boils down to "hire more people so more content can be filtered faster". GP perhaps didn't steelman your argument for you but if your proposal were to be applied the only real delta would be workforce size. It follows that your proposal in effect reduces to "hire more people".
This is flat out not true. FB doesn't employ 10s of thousands of human moderators.
According to a 3-month-old article, $FB supposedly employs 7,500 moderators [0], though it's unclear what constitutes "employment" in this case. Probably a salary of less than $5k/yr. Are there really thousands of content moderators pulling up to FBHQ in Menlo Park 5 days a week? I don't think so, no.
"By the end of 2018, in response to criticism of the prevalence of violent and exploitative content on the social network, Facebook had more than 30,000 employees working on safety and security — about half of whom were content moderators." [0]
That number has only increased in recent months and will likely continue to increase in the near future. Not all of them are designated content moderators, it's true, so technically they likely only employ one or two "10s of thousands", but I'm mainly addressing the misconception in this thread that Facebook is "doing nothing." That's a significant number of moderators. Is it enough? No, but to claim or pretend that they aren't actively doing anything is disingenuous, and something I've been seeing more and more of on HN lately.
If it is not viable to actively moderate all the content, there are two conflicting concerns here:
1. Online services should maintain an acceptable legal and moral standard of the content they serve.
2. Online services should be able to profitably facilitate the communication between millions and billions of users.
Which of these concerns seems more important to you, regardless of what you believe is an acceptable legal and moral standard?
I actually think that’s perfectly realistic, and in fact how forums have worked for ages. There are also automated measures such as user trust levels where an account has to earn trust before they can do certain things like post links, embed pictures, etc. Being on the platform isn’t a right either and it would be perfectly fair for them to rate-limit the amount of public posts someone can make to an amount that’s manageable by their current moderation capacity.
Forums have managed to keep undesirable content at bay with often no budget at all, so anyone claiming Facebook can't do the same is wrong. They don't want to do it, because abusive content still brings them clicks and ad impressions (remember that for every nasty piece of content that turns into a PR disaster there are thousands that go unnoticed but still generate money for them).
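The trust-level mechanic mentioned above, roughly as forum software like Discourse implements it; the thresholds and privilege names below are invented:

```python
# Toy trust ladder; thresholds and privileges are made up for illustration.
TRUST_LEVELS = [
    # (min days active, min posts read) -> privileges unlocked at that level
    (0,  0,    {"post_text"}),
    (1,  30,   {"post_text", "post_links"}),
    (7,  200,  {"post_text", "post_links", "embed_media"}),
    (30, 1000, {"post_text", "post_links", "embed_media", "go_live"}),
]

def privileges(days_active: int, posts_read: int) -> set:
    """Return the privileges of the highest trust level the user has earned."""
    allowed = set()
    for min_days, min_read, privs in TRUST_LEVELS:
        if days_active >= min_days and posts_read >= min_read:
            allowed = privs
    return allowed

# A two-day-old account can post text but cannot broadcast live video.
assert "go_live" not in privileges(days_active=2, posts_read=50)
assert "go_live" in privileges(days_active=45, posts_read=2000)
```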
What forums are those? Because this isn't the case with the forums I frequent. The boards I read / participate in tend to operate on the principle of "don't make me come over there", and everything is reactive, not proactive. I'm not sure how you'd get things like "user was banned for this post" etc otherwise...
The boards I used to frequent (about computer hardware & video-games, though both had a very active "off-topic/chat" section) had pretty good moderation coverage. It was indeed reactive, however the moderators were active members of the community and as a side-effect they were bound to see every post within 24 hours, but usually much sooner. New users were also restricted in what they could do so the potential for damage was limited even if moderators weren't active.
I don't have a problem with moderation being delayed. I have a problem with there being no moderation or clueless moderation (I have reported hundreds of obviously fake accounts or pages promoting outright illegal activity, and the majority of those got ignored).
Of course, the scale was much smaller than Facebook's, but my point is that maybe if you can't solve this problem then you shouldn't be in a position to broadcast and promote stuff to the entire world? The danger of Facebook (and other social networks) isn't in your friends seeing your malicious post, it's that your malicious post can go "viral" and be put in front of people who haven't asked for anything, since the ranking algorithm is based on stuff like "likes", shares, etc. (and a lot of garbage content unfortunately tends to attract likes), all of which can be manipulated by a determined attacker (using fake/compromised accounts to "like" the post, etc).
At least a couple of newspaper comment sections and forums have had pre-moderation for sensitive topics.
Nothing is visible until it's been validated. Guardian comments still do this for some topics.
With all the assorted profiling going on, I'm sure that just as some users (e.g. via Tor or VPN) are automatically given harder Google captchas, or some boards make you earn enough karma to perform certain actions, merely reactive moderation could be something you have to earn, and sustain. Perhaps certain interests would trigger proactive moderation too.
Not perfect but certainly capable of taking much load off human moderators.
If a car manufacturer is faced with a problem so that it loses the ability to prevent accidents, would you let that manufacturer continue its business?
We do allow car companies to continue to build cars that cannot prevent accidents. Even beyond preventing accidents we as a society also allow the car companies to build cars with varying levels of safety and have a rating system for them.
We do. However, car manufacturers usually try to downplay reports of issues with their cars until they can't anymore. That's when something like a recall happens, which results in a substantial financial loss for them. Obviously not every affected vehicle owner can afford to have their vehicle serviced, so many keep using them despite the risk - like the people who are using social media knowing its weaknesses.
I think social media are at the point where they are forcefully downplaying the social issues they are actually responsible for. We've already seen state and other actors exploit social media and cause mass epidemics: anti-vaxx, white nationalism, the Rohingya crisis, etc., to name a few. Let's see how far it goes before the damage is too much.
Oh please, this is as ridiculous as blaming the manufacturer of the truck for the psychopath who used it to run people over in France. As far as I know Facebook hasn't released public numbers on this kind of information, but their automated systems and human moderators have undoubtedly caught millions of objectionable content streams; you just don't hear about them. If we're making car analogies, they would probably be the safest vehicle on the road.
That sounds fine. They bought it, so they own it. If they make money from someone's work, then they are responsible for at least as much money as they made.
If social media companies exist because they are not held responsible for negative externalities, and actually recognizing the externalities would result in the death of said companies, then maybe they don't deserve to exist. We as a society have a real, serious problem contending with companies that have massive negative externalities.
Which I think may be the appropriate answer. It's unclear that the net benefit to society is greater with these social media companies than without them. If anything, the data we have so far is that it is a net negative.
> They have the money and profits to hire human moderators
Do they? Most content posted to Facebook has very low reach. If I live stream something for my friends it's not going to bring in enough revenue to pay a minimum wage employee to watch it in real time.
> They have the money and profits to hire human moderators
How many people do you think it would take to proactively moderate all the video uploaded to YouTube? Back in 2017 it was apparently 300 hours a minute. Do the Fermi estimate on that. That's not a productive use of humanity.
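Here's that Fermi estimate worked out (300 hours per minute is the cited figure; the 8-hour shift is an assumption):

```python
upload_hours_per_minute = 300                 # cited 2017 YouTube figure
daily_upload_hours = upload_hours_per_minute * 60 * 24   # 432,000 hours/day
shift_hours = 8                               # one moderator watching in real time

moderator_shifts_per_day = daily_upload_hours / shift_hours
print(f"{moderator_shifts_per_day:,.0f}")     # 54,000 shifts, every single day,
                                              # just to watch everything once
```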
Why draw the line at social media companies? Why not include AWS and Gmail? Why not force data centres to moderate as well? Hell, why not force anyone providing egress of data to moderate it all?
How about, instead of that, companies that want to moderate content compete in the free market, and those that prefer to be the dumb pipe do that as well.
That way users can choose which platform they prefer, the more censored version, or the less censored version.
You're not wrong, but holding social media companies responsible to that level is a relatively new idea. That Facebook isn't doing that yet doesn't make them unequivocally evil.
What you're asking for also leads to total, Chinese-style censorship, as the government finds more and more content on these networks it doesn't particularly like and therefore considers "extremist". Fun fact: Russia has recently passed a set of laws under which you can go to jail simply for "disrespecting" a government official. This "disrespect" can take the form, for example, of pointing out that their spending is several orders of magnitude more than their official salary would allow.
It boggles my mind that people can be simultaneously against the Chinese censorship and in favor of NZ censorship. It's the same exact thing, just cranked to a different degree. How do you live with both points of view in your head at the same time?
And lest I be deliberately "misinterpreted", I don't think that this shooting video should be on FB or that removing it from there represents "censorship". I also don't think we should go much beyond removing mass shooting livestreams however. Nor should anyone go to prison for 14 years for redistributing it.
I’ve largely ducked out of these conversations on HN anymore, trying to explain why basic free speech is important, and hitting resistance, shows the obvious failings (success?) of public modern schooling.
We have a new young generation that seems to unquestionably look to government for all answers, and demand to be silenced by biased censors and it’s shocking.
>> We have a new young generation that seems to unquestioningly look to government for all answers
This is especially weird given that the very same people consider the current US government to be incompetent (for the record, I disagree, based on the results in a number of areas).
The very same people seem to suggest that we abolish the 1st and 2nd amendment, and in the same breath suggest that "our democracy is dying" and we're on our way to "totalitarianism". How these things can be suggested at the same time, I don't know. Seems like utter and complete lack of critical thinking skills to me.
You don't see any difference here? One government is trying to keep its people from seeing violent and disturbing content. The other is trying to keep its people from learning about violent and disturbing things that that very government is itself committing against its own people.
I don't think the NZ government is scooping up and murdering political dissidents.
I'm an ex-USSR resident and I see a good bit of difference between a liberal society where free speech is seen as a basic human right and a budding Communist nation with a "Chief Censor" who gets to dictate the bounds of acceptable discourse.
>> I don't think the NZ government is scooping up and murdering political dissidents.
That's because the Overton window isn't quite there yet. Continue down this path and eventually it will shift. The basic truth of the matter is: it's much, much easier to rule if there's no freedom of speech. That's why it's typically the first thing to go, before the rest of the personal rights. Makes things super comfy for the ruling class: someone says something you don't like? Just throw them in jail, it's for their own good. The UK is much further down this path already: there you can end up in jail for posting wrongthink on your Facebook page or filming _outside_ the court during grooming-gang proceedings.
Private companies already moderate civic discourse, by shoving the extreme views into our faces, every day.
If they were "dumb pipes" I would have more sympathy, but they are spending tens of millions on AI to tune exactly what to shove in front of us, to keep us engaged.
You make it sound as though private companies are the only means through which communication occurs. T'was a time when companies had their own means marketing, people ran their own blogs, and there were no huge private companies controlling the majority of online discourse.
Anybody else remember email?
We'll never go back to it, but don't act like the social media-driven scenario we have today is all there will ever be. If things like Twitter and Facebook became moderated, and people became dissatisfied, they'll do the same as they ever did with social media: go elsewhere. (By the way, let's take a moment to remember Digg)
And maybe, just maybe, we can go back to a saner time where we don't have to defend the status quo, hideous as it is.
None of what you wrote is a response to what I said, beside “well, if it gets bad enough, society will adapt”.
You admit that in the current market, those companies are vastly important in a way that can’t change quickly and their acting as censors could turn so bad, it would force a change to the very way we communicate.
I think “hellish” is an apt description of “would reshape hundreds of billions to trillions of dollars of economic activity in response to the social disfunction it caused”.
> Your vision for society is one in which private companies moderate civic discourse?
> None of what you wrote is a response to what I said
My post was that there are alternatives to Facebook and Twitter having a monopoly on public discourse, and I proved it by demonstrating alternative methods by which no one (or two) entities have any such monopoly.
You spoke about a "vision for society". Typically when people talk about a vision, they're talking about the future. I gave a vision of the future.
> You admit that in the current market, those companies are vastly important in a way that can't change quickly and their acting as censors could turn so bad, it would force a change to the very way we communicate
I didn't say any of that. I mean, I agree with you. But you were talking about a vision for society. I gave mine.
I think you've decided I said something quite different from what I actually said.
You have a second vision that had nothing to do with what we were talking about before, which was changing the current system to one where entrenched private interests would be censors — the envisioned change I was responding to.
Your non-sequitur other wishes have nothing to do with that being a hellish vision.
> You have a second vision that had nothing to do with what we were talking about before
I'm sorry, I didn't realise you were the discussion warden. If we're not keeping strictly in step with the topic you've put up for discussion, we're to be derided, are we?
> Your non-sequitur
Goodness, your tone is nothing but rude. Your previous comment's lone "Okay." was much the same. Just because we're not all agog at your comment and have our own points of view to contribute…
Try adding a touch of flexibility to your interlocution.
I've reported actual child pornography with hundreds of likes and comments and got that reply from Facebook.
On the other hand, a Facebook group I created for my CS course with a few hundred members and just a bunch of helpful course material and questions vanished for a few months without any sort of warning or notification, and showed up like it had never been gone in the first place. I tried to contact Facebook, even finding a form specifically for groups that vanish, and got no reply whatsoever. I still don't know what the hell happened there.
If they consider child porn "not against their community standards", isn't that enough reason to report them to the police and make a huge public stink about this?
In theory, yes. In practice I haven't found a way to do so, especially with Facebook being a big American company and me living on the other side of the ocean... The best case I can imagine would probably be an e-mail from the police department arriving at Facebook a few months later and the content being removed; there isn't any real way to damage them without the effort and luck of a targeted PR attack.
Aren't your national newspapers interested in this? It might not become an international scandal, but some newspaper attention will get noticed, and will likely get mentioned again and again in every political discussion about the impact Facebook has on our society.
Also, report it to the police with all the evidence and screenshots you've got and just let them handle it.
It might not bring down Facebook on its own, but it will add to all the other demands that Facebook needs to get a better handle on this sort of thing.
Depending on the jurisdiction, taking screenshots might be a crime in itself. And I doubt that this would make a good news item; it's not like the newspapers can reprint the material in question or the URL, so it's unlikely that anyone who likes Facebook would find the article convincing.
A criminal conviction of a Facebook employee would make a good story, but that is much harder.
A screenshot of Facebook's message that it's not against their standards, I mean. That should be easily reprintable, unless it contains the original image.
What would that screenshot prove, without the original image next to it? All it proves was that something was reported, but Facebook disagreed. Could be something horrific, or it could be a picture of a tree. There is no shocking news story here.
I guess here's my bigger question: if some individual were to have knowledge of a murder or rape, and it came to light that they were intentionally withholding evidence of the crime, would that individual be committing a crime themselves?
I just imagine that if someone found a snuff video in my basement, I would be in violation of some law.
Consider the person who is reviewing these things. Imagine all the horrible fucked up shit they see on a daily basis.
I wouldn't be surprised if, after seeing videos of babies being killed and people being beheaded and hung, some perv yelling at women might not even register to them as a bad thing.
And "horrible fucked up shit" amounts to clicks. People don't stream Lawrence Welk for their modern entertainment needs. I say the company knows damn well what content it leaves untouched. It caters to enablers.
Well, to play devil's advocate to your point, it's hardly Facebook's fault that the public has an appetite for such heinous material. People want to watch these things; why do we blame Facebook for that and not ourselves?
People seem to have an appetite for all kinds of heinous acts, and if you don't actively maintain a social standard that condemns and punishes such behavior, you're left with the law of the jungle.
Social networks already make it harder to maintain these standards because they dissociate their subjects from one another. It's easier for me to send you a death threat if, to me, you're just a bunch of words I disagree with under a profile picture.
They also facilitate the mobilization of like-minded people at an awesome scale, however widely unaccepted their behavior is. If you can get your sense of social belonging satisfied by 100000 other pedophiles from all over the world on a social network, why should you conform to any widely recognized social standard? Why should you work on your problems when thousands of rape apologists are patting your back and reinforcing your delusions?
Finally, social networks exploit these weaknesses, exploit our sense of pride and our ideas of what is right and wrong. More polarized and controversial information leads to more discussion. More discussion leads to more social data. More social data leads to better ads. There is an incentive for social networks to push objectionable and worthless information. The relationship is perhaps indirect in that it just happens to be the best way to turn a profit, but we shouldn't let our social standards budge for a system that is designed to maximize a profit margin.
This has also been my experience.
For most of the posts/comments I report (which I think shouldn't be on Facebook), I get the default reply "not against community standards".
Yes, but by then the damage has been done. Parent is right: Facebook doesn't know what is being posted; only afterwards, when people tell them, do they know.
This is just a form of the child pornography scare that resonates with contemporary politics.
There isn't a significant portion of users who condone these threats. Yet this will still be held against anyone not wanting to give Facebook more control over content.
> I stopped reading after this point. Facebook has no clue what kind of stuff gets posted on their platform, and their AI isn't powerful enough to detect it.
The question isn't whether it's currently technically viable for Facebook to monitor the content their users post, but that since they're hosting it, maybe they should have a clue and design the service accordingly. If they are held liable for it and can't solve the problem technically, just let them (and services like it) die off. It wouldn't be a great loss to society as far as I'm concerned.
> It's similar to when Marilyn Manson got blamed for Columbine
What was Marilyn Manson's connection to the Columbine shooting? What is Facebook's connection to hosting morally repulsive content on their service? In my opinion these answers are different enough to merit the question on which grounds you make this comparison.
They have the power to reach everyone in the world. They don't have the capability to decide what content deserves this wide reach. To compensate, and out of greed, they encouraged sharing and spreading all types of content without merit. I am not asking for Facebook to police content... Users don't even have the capability to flag content.
They promoted a culture of greed over morality. It's a curse on humanity.
Actually, they should not allow live content posts at all, just as TV broadcasting corporations are required to have a delay in case something harmful or disturbing is broadcast.
They're 100% responsible for building a platform which allows for this kind of content to be posted and broadcast en-masse with zero responsibility or accountability.
It amazes me people think that making a profit from this kind of "platform" is acceptable.
As you stopped reading at that point, I assume the "big claim to make without quantitative evidence" is the part of the article you quoted.
I'd entertain some debate over the definition of publish, as well as whether or not Facebook refuses to accept any responsibility... but outside that nuance it's a fairly objective claim of Facebook's capability and behavior.
I see it as a quite reasonable position to assume that playing violent video games can ease people's inhibition to engage in IRL violence. I personally believe the liberty infringement, and ultimately, violence, that is necessary to ban them is worse than the violent behavior they can influence.
Further down in this thread you claim that if Facebook were to delay live streams made to "Public" it would mean "the death of every social media company".
That seems a bit "sensationalist".
I think that really what you mean to say is that regulation of "broadcasting" user-generated content (https://en.wikipedia.org/wiki/Broadcast_delay), especially sensationalist content that appeals to people's emotions^1, could mean the end of social media's lucrative profit margins. Social media could still exist even if it were not very profitable.
Facebook does have a clue what is posted. For example, a recent documentary on public television showed teams of people in the Philippines reviewing individual posts.
1. Sensationalism, appealing to people's emotions. The same tactic that caused you to stop reading is also what caused you to start reading. It is the same tactic Facebook must use to attract and keep people's attention. The non-sensationalist (boring, non-controversial) personal content that is posted to Facebook -- what makes the website useful for so many people -- is not helping Zuckerberg's business. In the face of endless competition for people's attention on the internet, it stands to reason that he has an incentive to allow and even promote sensationalist content.
Slightly off topic, but if you make this a habit, i.e. stop reading after the first sentence you disagree with/sounds like bs to you, there is no chance of getting out of the filter bubble as you're building it yourself. Of course this applies equally to me and all other participants of global information exchange.
Of course they can monitor everything on their platform; they are just unwilling to pay enough people to do it... This is really about money, and about Facebook being extremely cheap-ass.
They are doing a great job in the war on the nipple, though. And containing viral content with negative publicity against them.
I don't know about AI, but they have an army of moderators with a strict set of rules. Yet content that many people would consider hate speech, racism, etc. does not get blocked, even after being reported several times. Are the moderators too lax in interpreting the rules? Or are the rules not strict enough?
(To give context: I believe that hate speech is protected by free speech, and that if FB were to block white supremacists, it would not be censorship.)
I'm told, but have not confirmed it myself, that Facebook do a fine job of censoring Nazi hate speech in Germany. But somehow they cannot do it elsewhere or for other sorts of hate speech. Which is 100% nothing to do with maximising advertising revenue by allowing and enabling hate speech elsewhere...
Besides, I honestly prefer a platform that censors nothing and lets through the occasional idiot to starting down the slippery slope of drawing an arbitrary line that gets moved whenever the wind changes.
Plus, Facebook is full of tracking, so it's very nice to have a place that creates a full graph of all the haters' data. If you need an investigation, it's easier to find people IRL. Win-win.
The city isn’t responsible for what gets thrown on its streets; we still sweep them.
I think the time for platforms and major tech companies to dodge their responsibility has passed. At least in large parts of the western world and I personally welcome it. I’m European though, we (at least some of us) tend to like regulating companies.
Ad-driven business models ultimately lead to intelligence-insulting outcomes.
Whether you look at Google or Facebook, you can notice that these platforms have been highly optimized to produce and stimulate user experiences that are far from sensible or pleasant.
The main product for both companies is ads. One uses a massive distribution channel in the form of free internet search, the other in the form of free content. There is a sensible way to balance the delivery of the product (unlike other products like bread or transport, this one is in most cases an unwanted one; close to 1B ad blockers have been deployed on devices worldwide), but it assumes certain profit-negating levers get pulled. These levers are fully under these companies' control.
By choosing not to pull the levers, these companies have created an environment in which common intelligence is insulted every day, and because this has happened gradually over a period of many years, we've grown oblivious to it for the most part.
FB could (theoretically) hire 1 million workers at $50k/year to curate the platform (that's $50B a year, roughly on the order of its annual revenue).
> Facebook has no clue what kind of stuff gets posted on their platform
Well there is your problem. Why is Facebook held to a different standard than other media? ("Because they want to earn more money" is not a good answer)
Because the platform offers user content without any claims of correctness.
Education about handling media is a better approach than banning. It is naive to think politically charged content would be exempt from ambitions to ban it.
If it’s posted on Facebook, Facebook bears responsibility. If you own a restaurant, and fail a sanitation check, you don’t get to argue that it’s not possible for you to catch every single rat.
The Marilyn Manson analogy is confusing. Did Manson own the website where the Columbine killers downloaded bomb plans? The computers where they wrote their murderous fantasies? The gun store from which they sourced their arsenal?
It’s not the same thing at all: Facebook is a social media platform that enables distribution of material to a broad public. Terrorists clearly use social media hoping it will reach as many as possible. Remove the non-moderated access to distribution to a broad public and the incentive is lowered.
This is not correct. Roger McNamee, who was Zuckerberg's mentor and also helped recruit Facebook's COO Sheryl Sandberg, said they simply do not want to do it. He tried to push them in that direction but was met with resistance.
Yes, the claim is sensational but not entirely without merit.
Zuckerberg has written that, “live videos often lead to discussion among viewers on Facebook—in fact, live videos on average get six times as many interactions as regular videos.” [0]. Live streams are far more valuable to Facebook due to the more "meaningful" interactions they generate. When coupled with their ability to identify and target quite specific demographics (see 'detailed targeting' [1],[2]), they absolutely have the power to better target any AI and human moderators. If they were to couple this with a built-in live-stream delay (allowing, say, for some level of video content analysis prior to display) then I contend they could score 'harmful' material reasonably accurately - perhaps accurately enough to pipe into a queue for human vetting before publication (of course, unless they try, we'll never know for sure, so arguments on this are kind of moot). However, each of these elements comes at a cost to FB's business model. MZ has already spoken out against delays in live streaming.
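To make that concrete, here's a minimal sketch of the delay-and-score idea, assuming a hypothetical classifier and invented thresholds (nothing here reflects any actual Facebook system):

    DELAY_SECONDS = 60        # built-in broadcast delay (invented value)
    REVIEW_THRESHOLD = 0.3    # above this, a human vets before publication
    BLOCK_THRESHOLD = 0.9     # above this, the chunk is held automatically

    def score_harm(chunk):
        # Stand-in for a real content classifier; returns a score in [0, 1].
        return 0.0  # a real system would run a trained model here

    def triage_chunk(chunk):
        # Decide what happens to one chunk of a live stream that has been
        # buffered for DELAY_SECONDS before anyone beyond the broadcaster
        # sees it.
        score = score_harm(chunk)
        if score >= BLOCK_THRESHOLD:
            return "hold"          # never published without staff sign-off
        if score >= REVIEW_THRESHOLD:
            return "human_review"  # queued for a moderator before release
        return "publish"           # released once the delay elapses

The point isn't the specific numbers; it's that every element (the delay, the model, the review queue) costs either engagement or money, which is presumably why MZ resists it.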
Also, your rebuttal is disingenuous. The claim that they "have no clue what kind of stuff gets posted on their platform" doesn't jibe with their self-described abilities (as pitched to their advertiser customers as "detailed targeting"). I agree that they can't identify with 100% accuracy, but that's not required. If they can get the accuracy high enough that a higher-scrutiny vetting process can be imposed (whether AI or human), then that's sufficient. As for the claim their AI "isn't powerful enough to detect it", can you provide anything to back up that statement?
The real question is: are Facebook and the other social-media and tech content publishers responsible for the content they publish (irrespective of where it's sourced)? My belief is 'yes'. When someone posts on Facebook they are not publishing. They are contributing. Facebook are publishing. Hence, Facebook should be held to the same standards as other publishers.
Arguments that "censoring Facebook means the end of free discourse" are pretty weak when there are still many alternatives (e.g. Mastodon and others) that are more user-centric and resistant to both censorship and abuse (through higher rates of group-local self-moderation).
People have a right to free speech. They don't have a right to impose speech on others. They don't have a right to violate others' freedoms (as hate speech and trolling often do).
Similarly, companies don't have a right to operate without any kind of regulation at all. Regulation exists to ensure a balance between the good of the market and the good of the commons.
First of all, nobody ever murdered anybody on Facebook. People murdered and streamed it, or uploaded pictures and videos.
There are plenty of places where murder gets shown, either live or recorded. How many millions do the TV stations owe in fines, by your estimate, for airing live and recorded footage of 9/11?
This is a bad take. FB has taken the most action out of any big tech company in terms of hiring human moderators and investing in their AI auto-detection. It's nonsensical and lazy to claim that there aren't serious technical challenges involved with creating a perfect detection system, even for a company like Facebook.
You've got to feel for the human moderators, though.
Some of the content they have to watch... it's really the worst job possible. It's going to lead to a lot of PTSD.
I think the easier solution would be to just not have these features in the product.
No one has that kind of AI. It doesn't matter if you are a billion-dollar company; the scientific innovation is nowhere close in the present day and age. Facebook is relying on deep learning and machine learning, which is the current paradigm in computer science. This paradigm won't get us anywhere close to the sophisticated object recognition this kind of task needs; it performs abysmally at best.
> Facebook has no clue what kind of stuff gets posted on their platform, and their AI isn't powerful enough to detect it.
Almost certainly Facebook's ML can identify the themes of almost all content on their platform with high accuracy. That's a pretty easy problem at their scale.
What makes you say it's an easy problem? They were able to block 1.2 million copycat videos of the shooting, but 0.3 million still got through. YouTube also struggled to remove them and even went so far as to disable uploads entirely for a period because their AI was insufficient. The difference is they just declined to release specific numbers, so they took less flak.
When you literally have thousands of humans all determined to bypass these detection systems by modifying the video and disguising it in various ways some of them are inevitably going to slip through. Is that really an issue with the detection system, or is there something wrong with the people who repeatedly try and upload and share videos of mass shootings?
Facebook (and Google and others) can easily identify themes in natural language both written and spoken. They're really good at those problems because they're used as inputs for ad sales.
Identifying themes in video is harder than natural language. But fundamentally it requires the same kind of ML tools as natural language, which these companies have already mastered. I think the bigger issue is that Facebook doesn't have a business purpose for understanding video as compelling as it has for understanding natural language. They also don't have a business purpose for building scalable content censoring workflows since there's no serious regulation.
I don't know how you came to these conclusions, but they're fundamentally incorrect. Video has been one of Facebook's fastest growing advertisement categories in recent years, particularly after their acquisition of Instagram. They absolutely have a business purpose for understanding video and if they didn't they wouldn't have invested so heavily in it with features like livestreaming. Furthermore, I don't know where the idea of government regulation as an incentive came from in this thread, but it's illogical and ridiculous. Facebook doesn't need an incentive to try and prevent objectionable content like mass murders from appearing on their platform because they're well-aware of the damage they can cause their brand. That's incentive enough.
Yes they can identify themes, but not nearly with enough accuracy to make policy enforcement decisions. IIRC the preemptive video blocks were based on fingerprinting previous video uploads. They're still a long way away from being able to automate policy enforcement dynamically.
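For what it's worth, "fingerprinting previous uploads" usually means perceptual hashing rather than any semantic understanding of the video. A toy frame-level version (pure Python plus Pillow; the match radius is invented) shows both why it scales cheaply and why edited copies slip through:

    from PIL import Image

    def average_hash(frame_path, size=8):
        # Downscale to 8x8 grayscale, then set one bit per pixel that is
        # brighter than the mean: a 64-bit fingerprint of the frame.
        img = Image.open(frame_path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        # Number of differing bits between two fingerprints.
        return bin(a ^ b).count("1")

    MATCH_RADIUS = 10  # invented threshold

    def is_known_copy(frame_path, known_hashes):
        h = average_hash(frame_path)
        return any(hamming(h, k) <= MATCH_RADIUS for k in known_hashes)

Straight re-encodes and minor crops tend to land within a few bits of the original; mirroring, added borders, filters, or filming a screen can push frames outside the radius, which is one plausible reason 0.3 million copies still got through.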
Would sample size be an issue? An algorithm can recognize that a video is about someone discussing the latest Marvel movie because there are a lot of those types of videos.
But can an algorithm recognize that a video is of someone committing a real-life atrocity? Those are comparatively rare.
Google pretty much undeniably has the best ML people in the world, and I see nonstop hate at YouTube for their automated filters' false positives, so I don't think it is an easy problem.
It seems like AI is the only way to profitably scale this type of service to millions or a billion users, so it ends up being the go-to technical excuse for the people who think our moral standards should budge for the technical feasibility of maintaining these insanely large social media setups, as if it were some sort of inherent right to be able to do that and turn a profit.
In a perfect world, where everyone is liable for the material they host on their own servers, services with this many users wouldn't exist in the first place.
This is really interesting. Did you know that child porn is legally a very special case that makes sites liable for hosting the content?
This is because somehow as a society we decided that child porn is so bad, that there are no excuses to providing any resources for it.
All companies need to make sure that no child porn is posted on their sites, or if it is, then it's taken down immediately and reported to the proper authorities. Failure to do that will get sites blocked by their ISPs, or any of their tech providers if they detect it first (which usually happens within hours or even minutes).
Politics aside, technically, all large enough sites (which allow users to post content) already have the resources and processes in place to monitor and filter all their content. And they are making a conscious decision about what they filter.
I'm a New Zealander. I think our privacy commissioner is so totally wrong here. It's a hard technical problem to solve. It's an even harder political problem to solve.
Automatic censorship of live feeds, how do you do that?
Automatic take downs of videos doctored to avoid being removed, how to do you do that?
Then if you have these automatic systems in place how do you avoid getting accused of censoring everyone?
I don't think Facebook is doing the world a lot of good. But this is silly. It just shows how little the commissioner has thought about the details of his comments.
And why the focus on Facebook? This could have happened on Twitch.tv or a self-hosted platform (and who do you go after then? Vultr? Amazon? DigitalOcean?)
Live streams are a game changer in helping people get out content. If you try to force more providers to vet material, it breaks down the massive availability. Sure it might foster a growth in self-hosted solutions like PeerTube, and help increase more tooling for discovery and reach; but there still aren't any really decent self-hosted streaming solutions. It's a hard problem space. U-Stream was in that market, before it was sold to IBM.
I lived in NZ for several years, and I find the current moral panic very reactionary. It finds fear in everything, not unlike how schools locked down after Columbine with lots of security theater (more locked doors, which made it harder for the smokers to hide, for better and for worse).
Many people I know there are totally willing to defend not having free speech, which made me truly realize how much it's baked into the psyche of the American identity.
Don't value it? They have been begging for free speech to be utterly decimated, and they cry foul if you even suggest that maybe they shouldn't lock up a teenager over sharing the video.
They just want to "teach people lessons". It's quite sad, really.
Why is free speech a good in itself, as opposed to other forms of action which are not good in themselves? Why is a law against hate speech objectionable?
I don’t know if I agree that this could have happened on any platform. Facebook’s core functionality is sharing, unlike twitch.tv and self-hosted services, which by their nature have terrible discovery.
Only on Facebook could this have spread like it did.
Let's be honest, it's fairly simple. You submit all content to the Office of Film and Literature Classification and allow the Chief Censor to make the decisions. That's ultimately what the government wants.
As one Kiwi told me recently: "I have never heard anyone complain about our Censor".
And how does this handle the streaming of real-time content? The traditional submission would be a complete “video” with a start and an end, not an “unpredictable” live feed that is ongoing.
I agree these are hard problems to solve, and don't fault facebook for not having come up with a solution. I still think the "morally bankrupt liars" label isn't too far off the mark though. All of the negative stories that have emerged this past year, their denials and ham-handed cover ups, and perhaps finally the fact that they are actually asking some people to give them passwords for 3rd party services... I can't regard them as a thoughtful, ethically-minded company.
You are correct, but this is one of the few cases where it's unjustified. We have moved to online discourse more and more. Do we really want super powerful AI and armies of moderators combing through all of that?
Indeed, Facebook has probably been in violation of existing New Zealand privacy laws since it started, but there hasn’t been a peep out of the Privacy Commissioner, or anyone else in Government.
If you are a New Zealander, then you should know better. Freedom of speech is NOT an unalienable right in NZ, nor has it ever been.
The ridiculous scare-mongering by some of the sibling comments on this post about this being "the start of the end for NZ" is not representative of:
A) New Zealand
B) Most countries outside of the US.
(Because it's a continuation of existing policy: that not all speech is protected.)
As a society, at some point we have to make the decision: Where do we draw the line with speech designed to incite hate and violence; and where do we draw the line on those that enable that speech to reach the masses?
In the US, the answer has been "we don't". In NZ, the answer has been "some speech is reprehensible and we will not tolerate it." This means there's an expectation that anyone of the scale and influence of Facebook should be able to moderate violent speech as it pertains to NZ. (Keep in mind that something like 80% of the country's population is on Facebook. That's an enormous amount of influence they have on the nation.)
I'm also a New Zealander, and I work in tech; I agree with the Government here. Facebook will effectively benefit from this event in multiple ways:
1) It drives engagement with their platform (outrage -> views -> engagement)
2) It's more data for their technical god, which will in turn use it to better sell the people (political/fear-mongering?) adverts.
The easiest solution, and one that I think is entirely reasonable, is to not offer live videos in New Zealand. If they're unable or unwilling to moderate the content with regard to NZ law, and given the scale of their operations in NZ (even if that's small fry compared to Facebook's scale in other nations), then it's reasonable for the government to impose restrictions on them.
(More generally, I feel government exists to ensure the collective safety and security of society. NZ's government serves the collective safety/security of the people of NZ; That's their role. If they feel the existence of facebook's live streaming threatens NZ's security, then bringing action against it is the reasonable response.)
> More generally, I feel government exists to ensure the collective safety and security of society.
Do you honestly believe that if FB (and no other platform except self-hosting) offered no live videos (in NZ or worldwide), the shooting would not have happened?
“If you are a New Zealander, then you should know better. Freedom of speech is NOT an unalienable right in NZ, nor has it ever been”
I do know better. It should be an inalienable right. We probably will pass these hate speech laws. We will suffer for it. The founding fathers of the USA got this one right. Trying to engineer society is not a good idea.
It’s not the government’s job to decide what speech is okay and what speech is not.
What if that speech is child pornography, or assault and threats, or intellectual property?
Child pornography is not speech. Should be banned.
Assault is not speech. Should be banned.
Threats: you are correct, threats and incitement to violence should be banned. This is the special case in freedom of speech, because it results in direct harm or coercion. Harm and coercion should be prevented.
Intellectual property is interesting. Yes, the government should be able to enforce contracts. But this raises a good point.
You are correct; however, the blanket statement "It’s not the government’s job to decide what speech is okay and what speech is not." is too simplistic and is incorrect.
>You are correct; however, the blanket statement "It’s not the government’s job to decide what speech is okay and what speech is not." is too simplistic and is incorrect.
Why isn't it? We already allow the government to decide many other things, such as what we are allowed to sell (e.g. through food safety regulation). Is there any reason why speech is different from other actions, such that it should have special treatment and be untouchable?
Speech is the verbalisation of our beliefs. Once the state starts dictating what we can believe, we lose much of our freedom and liberty. As such, any restrictions on speech should be made with extreme care.
I'm pretty libertarian in many of my views. I think the government should have as little influence in people's lives as possible. That's the only way, as far as I can see, that you can have a society which allows a truly diverse range of cultures and beliefs to live as they desire under one government.
Where the government does need to step in is on interactions between people, such as enforcing contracts and keeping actors in the markets honest. This doesn't infringe on personal liberty nearly as much as restrictions on speech do.
But there are plenty of regulations I disagree with and think people should be left to make their own decisions freely.
Also, I'm not sure you understood my concession above. Your question doesn't seem to follow.
It is a hard problem to solve, yes, but the solution has to be initiated from somewhere because it won't happen by itself.
And Facebook is not targeted because of who they are, but because Facebook+WhatsApp+Instagram covers a ridiculously large part of the online communication spectrum today.
It's a channel where more and more diseased trash is shared unhindered, and if the owner doesn't do enough to stop it, then the government has to force them by any means necessary.
There are even more fundamental questions than "how": should we censor at all? And if we should, what should we censor, and where is the line?
These are hard questions, and they need good answers, not laws referring vaguely to "abhorrent" content and politicians grandstanding without really saying anything.
Just to name a few things that I think are utterly abhorrent, but also think should not be censored, because I think the merits of the society being able to see and talk about those things outweigh the downsides:
- The 9/11 footage and the Falling Man in particular: abhorrent
- The dead Syrian refugee kid on the Turkish beach: abhorrent
- Footage from concentration camps and the Holocaust: abhorrent
- The Christchurch shooter and his murder spree: abhorrent
- The footage from the Paris attacks: abhorrent.
- The footage from the Boston Marathon bombing: abhorrent.
- The screaming naked girl, running away from her village in Vietnam: abhorrent
Yup. It’s a hard problem to solve while keeping the business profitable. But so is safety in mining, aviation, oil drilling, medicine etc. Some tech companies won’t be viable businesses once regulations kick in. Good riddance.
There's something hilariously ironic in saying that tech companies should be treated like mining, aviation, and medical companies. How many natural disasters have the biggest mining companies caused and what repercussions have they faced? Do you really think Boeing is going to dissolve or that the pharmaceutical companies that kill tens of thousands each year by selling opioids and bankrupt millions more are actually held accountable by government regulations? It's all a circus show.
Just to clarify, is it your position that the FAA, FDA, and OSMRE are circus shows and that abolishing the FAA would make aviation safer, abolishing the FDA would make medicine safer, and abolishing the OSMRE would make mining safer?
Or are you saying those agencies don't do enough to hold accountable the companies that caused all those problems? Because that doesn't seem like an argument against regulating tech companies. That seems like an argument for regulating tech companies, and those other companies, more strictly.
No. He is basically saying that as long as we have these much larger problems, we should spend all of our time trying to fix those, instead of wasting time and money on a problem that is a thousand times less serious.
If social media starts killing tens of thousands of people every year, then we should start talking about FAA- or FDA-style regulations. But as long as that number is measured in dozens of people, we shouldn't be wasting billions worrying about it.
Let's take that moral outrage of yours and spend it on something a bit more productive, shall we?
Are you willing to pay someone for x hours a day, every day, to monitor and approve everything you post online? If your job is livestreaming, then you will need to pay someone almost full-time to watch you work.
Government censorship comes with a significant price to a society. Far too high a price to pay for pushing onto it problems that amount to bad parenting and the consumption of objectionable comments.
No, but it's more intuitive that it's unmanageable at smaller scales. Nobody thinks Reddit or HN could start pre-vetting all submissions in the first place.
I challenge the premise that anything needs to be “posted”. What is it with this incessant drive to give everyone a voice? It turns out that the majority of “user-generated content” is either trivial or outright harmful. Perhaps the time has come for this experiment to end.
Don't be so dramatic. The common person really does not have anything interesting to say. I'm sure you've seen Facebook: that's what the common person is like. This isn't new: people have been boring for a long time, but until recently we had no way to know exactly how boring. Now we do.
If you haven't been bored to death with this comment, I may have to try harder. Let me know.
> Automatic censorship of live feeds, how do you do that?
Make it live but with a time delay (say 1 hour). Allow .1% of all users to see it immediately. Add a Flag This Content button. Depending on how many people flag it, triage as safe to show/don't show/ask a human to decide.
I'm sure this could be refined. Maybe an hour is too much or too little. Maybe .1% is too many or too few. Also, over time you should be able to recognize groups of users who are better at spotting different problems, based on how often they agree with the human review done by FB or whoever.
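A sketch of that triage rule, with every number invented, exactly as the parent suggests they'd need tuning:

    SAMPLE_FRACTION = 0.001  # the 0.1% of users who see the stream live
    SAFE_RATE = 0.001        # flags per early view below which we publish
    BLOCK_RATE = 0.05        # flags per early view above which we hold

    def triage_stream(early_views, flags):
        # Decide what the delayed 99.9% of viewers get to see.
        if early_views == 0:
            return "human_review"  # nobody sampled it, so don't guess
        rate = flags / early_views
        if rate >= BLOCK_RATE:
            return "hold"
        if rate <= SAFE_RATE:
            return "publish"
        return "human_review"      # ambiguous band: ask a person

Weighting flags by each user's past agreement with human reviewers, as suggested above, would just replace the raw count with a weighted sum.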
How would that work for the NZ attack? Apparently only 12 people were watching the stream, and nobody reported it until after the attack had been streamed to completion. Effectively nothing could have stopped this stream save for magically accurate and effective filtering mechanisms, or manual intervention.
Where do you draw the line? If I throw up an open internet forum, am I responsible for administering everything that gets posted on it? If the Christchurch shooter had set up his own live-streaming box, who would we be getting mad at? Do you need a licence to broadcast on the internet now? Every day we get further and further from a free internet, and it's incredibly sad to see people on HN supporting this foolish sentiment.
Lots of people are killed by alcohol; why is it not banned already? Lots of people are killed in cars; let's ban them. You are not eating healthily, not enough vitamins and nutrients? Put you in jail!
Tragedies happen, but why can't we live with them and try to solve the causes instead of the results? How many people read the manifesto written by the killer? I don't agree with any brutality, but there are some points to think about.
If they ban one service, they will use another one next time. Or people will watch it from an archive. Or they won't watch it at all and trust the news blindly.
But the censorship will harm everyone. It will force people to post only positive, optimistic posts. It will dictate what is good and what is bad. It will present a fake, ideal reality.
This is a great analogy. Alcohol and cars are highly regulated; Facebook etc. less so. Facebook will be much more heavily regulated in the coming years. It won't be banned, unless it breaks the regulations.
I fundamentally support free speech and acknowledge that protecting free speech includes protecting so-called 'hate speech'. I haven't read the Christchurch shooter's manifesto but as a manifesto I would definitely include it under free speech.
However, I wouldn't include live footage of violent criminal acts as free speech. Free speech is expression in words and ideas, not acts. So yes, I would draw the line here, and I think it is a well-defined line. (As a side note, a re-enactment of the crime using actors, where no one actually dies, would be free speech.)
I wouldn't necessarily single out Facebook as the privacy commissioner has done, but I would agree that everyone who knowingly transmits and shares this video is doing a bad thing. Again, it's not free speech; it's the sharing of a mass snuff film. Which is sick.
I call bullshit on Zuckerberg's claim: birthday party and group-hangout streams are primarily watched by people you have a direct relationship with on Facebook, so when Facebook distributes a stream it can distinguish those viewers, and unrelated users can have a delay added for review. This would at least greatly reduce the spread of said material beyond limited groups, and if individuals within those groups chose to re-share it to increase its reach, they could be sued for actively and knowingly spreading illegal and harmful material.
Either Zuckerberg doesn't want to lift a finger, or he simply wants regulators to regulate Facebook and other social media to create a higher barrier to entry into the social media market.
I concur. My nephew's friend (who is 11) was home sick on the day of the attack. He has no relationship with the attacker, yet for some reason Facebook trended the video on this kid's Facebook wall.
Facebook claims not many people saw the video before it was taken down, which I call bullshit on, because if it wasn't viewed by many it would never have trended to an 11-year-old kid.
(I think he's too young for Facebook, but that's an entirely different issue.)
THIS. No one else seems to have even brought up, as just a potential idea, a time delay only for the public stream, while allowing pre-confirmed friends to view the stream live. Or, instead of pre-confirmed friends, people who were invited to and responded to a Facebook event. Or something.
That would solve the birthday use case while inhibiting the virality.
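A sketch of that per-viewer policy, assuming hypothetical friend and invitee lookups:

    REVIEW_DELAY_SECONDS = 600  # invented delay for the public feed

    def viewer_delay(viewer_id, friend_ids, invitee_ids):
        # friend_ids / invitee_ids are sets of user ids; both lookups are
        # hypothetical here, but Facebook clearly has the underlying data.
        if viewer_id in friend_ids or viewer_id in invitee_ids:
            return 0  # birthday parties and group hangs stay live
        return REVIEW_DELAY_SECONDS  # strangers get the delayed, vetted feed

This keeps Zuckerberg's stated use cases intact while throttling exactly the anonymous virality that made the Christchurch stream spread.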
That's not necessarily good! Maybe the ability to live-stream police brutality or vote stuffing or whatever, in a way that can go viral, is an important public good. But no one seems to have brought that up, either.
Is it necessary to have the ability to live stream police brutality? Would a delay of, let's say, half a day, with distribution after moderation or by regular media, be that bad? As long as you can record a video and store it live in cloud storage, hindering cops from destroying the video, that should be enough of a deterrent for officers to avoid police brutality.
I am all in for dismantling Facebook, but this is just a rant. Zuck has way more things in his favor legally. What confuses the hell out of me is that from childhood I always saw US executives held accountable for their company's actions: any scandal, and I would see the CEO step down. Yet despite the amount of damage Zuck has caused the world, he still won't step down. What the actual hell.
>What confuses the hell out of me is that from childhood I always saw US executives held accountable for their company's actions: any scandal, and I would see the CEO step down.
You are confusing regular random CEOs who get hired to run a company with someone like Zuck who founded a company and is still closely tied to its identity. Remember the mess after Steve Jobs was forced out of Apple?
That actually has little to do with why "Zuck" is still there, it's because he has special voting rights that make him immovable. Ditto for the Google founders.
Take that away and we'd see the actual truth of his utility to Facebook and its shareholders.
On stepping down or being forced out: Regardless of any other factor, it's because he controls roughly 60% of the voting stock. He has about 18% of the stock, but his class-B shares get 10 votes per share, while normal class-A shares get only 1 vote. So it would take something truly black-swan level to get him out.
What exacerbates this even more is the fact that once sold, class-B shares lose their privileged status and become class-A shares. So as employees with B shares gradually leave Facebook for other opportunities, Zuck gains even more power relative to the rest of the voting stock.
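Rough, illustrative arithmetic (the split is simplified; real filings put other class-B holders in the mix, which is why the parent's ~60% figure is lower than this toy number):

    zuck_stake = 0.18   # ~18% of all shares, assumed here to be all class B
    b_votes = 10        # votes per class-B share
    a_votes = 1         # votes per class-A share

    zuck = zuck_stake * b_votes                 # 1.8 vote-units
    everyone_else = (1 - zuck_stake) * a_votes  # 0.82 vote-units
    print(zuck / (zuck + everyone_else))        # ~0.69

And conversion only helps him: when another holder's B share is sold and becomes an A share, total votes drop by 9 while his stay fixed, so his fraction of the voting stock rises without him buying anything.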
I'm pretty embarrassed to be a Kiwi right now, with our PM and multiple ex-PMs showing zero understanding of so many of these issues, yet still being very willing to put forward their opinions. They hijacked the gun debate to pass laws with zero public input, and people cheered because they weren't part of the 3% who own guns. Now the govt is turning its ugly head toward censorship and taking away our privacy.
Facebook deserves to become a tech pariah. People should have turned against them during the predatory racial ad targeting scandal, or the depression scandal, or the Cambridge Analytica scandal. If broadcasting live murder is what it takes, so be it.
When America finally gets some good regulators and trust-busting politicians, Facebook will get its comeuppance, and it's going to feel good.
No one has turned against them except the media narrative. People still continue to use them as much as, and many times more than, before. The Hacker News crowd is not representative of the general populace. These stories being upvoted here... normal people don't even care about them.
If Facebook is morally bankrupt, so is Twitter, Snapchat and all other social apps. Facebook represents what humans are - and humans are a mix of all different types. They are good, vile, sympathetic, apathetic, passionate, depressed etc.
To my knowledge, Twitter and Snapchat haven't targeted predatory home loan ads at black people, nor did they intentionally make millions of users depressed by manipulating their news feeds just to see what would happen, nor did they collaborate with shady political companies to attack several elections and then attempt to hide the evidence. These aren't things Facebook was a platform for, these are things that Facebook chose to do.
The "bad actors" argument sounds similar to the anti-gun control argument. Nobody wants regulation, but how many pointless, needless deaths are too many?
> [They] allow the live streaming of suicides, rapes, and murders, continue to host and publish the mosque attack video, allow advertisers to target ‘Jew haters’ and other hateful market segments
Sounds a bit like 'murder, arson and jaywalking'. Being 'hateful' is not the same as killing people on a stream.
If 'hate' means 'wanting people dead or gone', then it could be a step on the same path. Maybe it's a cultural thing, but from a Christian perspective, they're not so different.
Sure, but if we're going to redefine words to just mean anything, a discussion will get nowhere.
Hate just means a strong dislike of something. Someone can be hateful of a presidential candidate without wanting them dead or gone. Is that hateful speech, or is this acceptable? Furthermore, do we really trust governments to decide what is?*
*: note that I'm not saying that I do trust FAANGs to make this decision either.
But is hate just a strong dislike? The notion that it means you wish them dead or gone seems to be at least 2000 years old. Not to mention that in recent memory, genocide tends to be preceded by hate speech.
And there's rather a big difference between disagreeing with a particular politician's policies and claiming that a particular demographic group is the reason for whatever ails society.
I don't see how a "Dog Bites Man" story like this keeps getting upvotes. It's entirely unsurprising that a company whose core goal is global 24/7 surveillance and monetization will tell every lie before reckoning with the consequences of their actions.
> Zuckerberg said incidents like the live streaming of the Christchurch mosque attacks were the result of “bad actors”; not bad technology and a time delay would disrupt the enjoyment of users who broadcast events like birthday parties or group hangouts.
In this case, he is 100% correct. Facebook, or social media in general, is a tool. A powerful tool, but a tool nevertheless. Knives, for example, aren't evil and are important for cooking and cutting, yet there are many knife attacks in London all the time.
A time delay doesn't solve the problem: the Christchurch terrorist could have pretended to act in some acceptable manner during the checks and then performed the atrocities after passing them (having checked from another account). He could have filmed his drive to the mosque and discussed the weather, for example, to bypass initial checks.
> “It is a technology which is capable of causing great harm,” Edwards told RNZ.
I hope he also advocates for the banning of vehicles, banning of knives - or any other mundane object that causes many deaths a year. Perhaps he also calls for GPS trackers to be put into every knife sold [0]?
> “They are morally bankrupt pathological liars who enable genocide (Myanmar), facilitate foreign undermining of democratic institutions.
Except they are actively banning people (against their own guidelines on acceptable Facebook use) who could possibly upset left-wing establishments [1]. I don't think Facebook could possibly bend any further to the NZ government's will. I don't think Silicon Valley could possibly have done any more to try to remove the shooting content [2].
The murder rate in London just surpassed NYC. There were many articles over the weekend about it if you’re interested.
Knives are prohibited there, and they have public knife amnesty bins (which occasionally get ransacked by thieves). Much like other cases of prohibition, it’s not really restricting the behavior of criminals.
Perhaps it's seasonal. London had more murders than New York in Feb or Mar 2018, but not in other months of 2018, I think. Did the same thing happen this year?
Because this isn’t a Twitter rant from a random nobody; New Zealand’s privacy commissioner probably has regulatory authority (or at least influence) to force Facebook to change its behavior, at least in NZ. If he’s using language this strong that probably indicates a crackdown is coming.
It's in the article, 5th paragraph. It would be nice to know why my original post was downvoted, did people assume I was racist? I referenced a sentence in the article and stated a fact about Facebook that contradicts what was said.
For me, it went like this: I read the article, missed the reference to the Jews, thought it was a racist comment, made my reply, downvoted your comment, and flagged your comment.
Then I saw siphor's reply to me, went back to the article, found the reference to Jews (control-F works better than I do, apparently), un-downvoted your comment, and un-flagged it. And I admitted my mistake in a reply to siphor, and I admit it to you as well.
Okay, but even if that were true, both Mark and Sheryl are Democrats. You can see a picture of Sheryl with Hillary Clinton on her profile page, posted before the election.
Trump moved the US Embassy to Jerusalem and recognized Jerusalem as the capital of Israel. His actions do not show him to be an anti-Semite. After all, his son-in-law is himself a Jew and, from what I hear, wields a lot of power in the White House.
I appreciate how using the word 'are' (though done because of a difference in how other countries refer to organizations) drives home the point that Facebook is not a monolith.
It is a collection of thousands of morally bankrupt liars, each willfully using their particular skills and experience for evil every single day.
He's right, but I'm wondering if he's deliberately baiting them with the strong (but accurate) language. Do governments need any further excuses to seek to impose controls on companies like Facebook?
Facebook is going to have to allow users to opt out of advertisements entirely for anything to change. You are essentially trading the social features of the site in exchange for agreeing to let them profile you, decide what you see based on a private algorithm, and share that information with third parties.
Zuckerberg even suggested a while back allowing paid users to opt out, but then this just preys on the poor. I for one welcome a decentralized privacy-first web with zero advertisements.
How else is it supposed to exist if it doesn't have ads? That's a huge part of its business model. Companies that want to make a huge profit but offer a 'free' product all rely on the back-end manipulation advertising model. Either the product can't be free or they must turn into a non-profit. What other service offers something for free besides a charity? Except Facebook is the opposite of a charity.
I try to do my part to encourage people to use the Fediverse. I dream of a day when Facebook, Twitter and others will have to permit ActivityPub integration just to stay relevant.
Hmm... MySpace should really implement ActivityPub. Might make a comeback.
As Facebook has repeatedly discovered over the past few years, open integrations are fundamentally incompatible with data privacy. A world where arbitrary third parties can easily interoperate with Facebook is a world where arbitrary third parties have everyone's data.
Debunking the Shareholder Value Myth: History
Although many contemporary business experts take shareholder primacy as a given, the rise of shareholder primacy as dominant business philosophy is a relatively recent phenomenon. For most of the twentieth century, large public companies followed a philosophy called managerial capitalism. Boards of directors in managerial companies operated largely as self-selecting and autonomous decision-making bodies, with dispersed shareholders playing a passive role. What’s more, directors viewed themselves not as shareholders’ servants, but as trustees for great institutions that should serve not only shareholders but other corporate stakeholders as well, including customers, creditors, employees, and the community. Equity investors were treated as an important corporate constituency, but not the only constituency that mattered. Nor was share price assumed to be the best proxy for corporate performance. ...
So where did the idea that corporations exist only to maximize shareholder value come from? Originally, it seems, from free-market economists. In 1970, Nobel Prize winner Milton Friedman published a famous essay in the New York Times arguing that the only proper goal of business was to maximize profits for the company’s owners, whom Friedman assumed (incorrectly, we shall see) to be the company’s shareholders. Even more influential was a 1976 article by Michael Jensen and William Meckling titled the “Theory of the Firm.” This article, still the most frequently cited in the business literature, repeated Friedman’s mistake by assuming that shareholders owned corporations and were the corporation’s residual claimants. From this assumption, Jensen and Meckling argued that a key problem in corporations was getting wayward directors and executives to focus on maximizing the wealth of the corporations’ shareholders.
All you've demonstrated is that a person made some very cogent arguments for why shareholder value became a dominant driving force after the '70s and why it shouldn't be that way. However, many companies are now run in that fashion. And shareholder lawsuits against corporations, when shareholders feel the leadership has not acted in their best interests, are exceedingly common. So you have a "shouldn't be like this", but the reality is it nonetheless "is like this."
He's right, however, managers can simply say, "We believe long term wealth maximization is dependent on not being assholes and getting the company regulated out of existence," and that will be the end of any shareholder lawsuit on the subject. US case law gives wide leeway to managers to determine what is best for the business.
99% of the time, that doesn't actually happen, and even when it does, it tends to amount to more of an annoyance than a serious threat. It's not like it takes the CEO's time personally to deal with an activist shareholder lawsuit. They hire a team to handle it, and things move on.
Managers are practical people. They don't optimize for rare activist shareholder cases.
You've only proved their point. Since 1970 (i.e. at least a decade before most users of this site were born), it has been a major guiding principle in finance. Because of that, the majority of board members and large shareholders will also share this value. If it sounds like a circular argument, that's because it is.