Apple removes references to CSAM from its child safety webpage (macrumors.com)
390 points by virgildotcodes on Dec 15, 2021 | 393 comments



It's incredibly disturbing to me how little much of the HN crowd seems to understand how CSAM-detection systems like this actually work. I'm not talking about the tech or programming.

This is a fast-track tool for putting anyone in jail for the most despised crime a person can commit. The tool has no oversight, no transparency, no way to contest it, no way to clear your name, no real usage definitions other than relentless spying under the "protect the children" fallacy.

Most people don't understand how life destroying a false accusation is once the government paperwork is started. At that point, like it or not, you are basically considered guilty by default by all the agents of the government. Their jobs are to prosecute, not to eliminate false positives. Once you have been on their radar your life will never ever be the same. And since it is CP related, you will not even find helpful people willing to touch it with a 10-foot pole, for fear they may be implicated. CSAM scanning is a dastardly and effective way to establish a totalitarian rule by fear of life in jail, nothing more.


> It's incredibly disturbing to me how little much of the HN crowd seems to understand how CSAM-detection systems like this actually work.

Dunno. I remember most people on HN were absolutely pissed this was even being considered. My takeaway is that most people here understand it to a large enough degree that they don't like it.


Stuff like this gives me pause about working in the tech industry. My brother is in med school, and I always joke with him that the worst thing I can do at work is take a website down, while the worst thing he can do is kill someone.

But with tech like this, you can literally kill someone with it. Sure you're not actually "killing" anyone, but the decisions these systems make can drive someone to end their life, whether or not the accusations by the machine are true. I wouldn't want to be the guy building that software.


lest we forget

> Because of concurrent programming errors (also known as race conditions), it sometimes gave its patients radiation doses that were hundreds of times greater than normal, resulting in death or serious injury.

https://en.wikipedia.org/wiki/Therac-25


> can drive someone to end their life

Don't underestimate the possibility of somebody else ending their lives for them in a fit of righteous rage.


What's disturbing is that nothing has materially changed. For those of us running iOS or Android, we're just an OS update away from CSAM scanning landing on our devices again. It's just a matter of time. The question is what will it be renamed to, and under what pretext will it be deployed?


Eugh, yep. Apple pretty much confirms it's coming in the future:

“Update: Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.”

Expect a much quieter rollout.


We will not make it quiet. It will be found out, and I'm sure the privacy organisations will make a big deal of it.

I have much less of a problem with server based scanning. But my device is mine.


Yep, we will complain and we will let the public know what Apple is up to. We are always watching the watchers as well. Don't forget that, Apple. We won't let you reframe it as "think of the children". Please stick to the hardware-and-making-$$$ business; we don't want you playing Keystone Cops.


You act like there is a choice. It will be rolled out, period; the decision has already been made. The only reason it is not out yet is that Apple whined about their image and asked for a delay. The powers that be will keep dipping their toes in the water until the heat dies down, then roll it out no matter what. Like many other such instances, eventually the majority of the public will suffer from fatigue on the issue and stop caring. They have lives to lead, and the people pushing for this are getting paid to do it as their job and will also gain power from it. They will never stop.


My bet is they will settle for a similar solution to other providers and scan the photos on their servers.


That’s still a much better world to live in than “let’s make it OK for people to think they can deploy scanners to users devices”. My device is mine, their servers are theirs.


> My device is mine

Not anymore. Now it's a self-subsidized subjugation tool. In 10 years it will not even be possible to buy a networkable phone in the USA without embedded spyware. Mark my words.


Do you have any reasons to think that they're not scanning photos on their servers? AFAIK every major storage cloud does that, including Apple.

What probably happened is that Apple planned to introduce E2E encryption for iCloud. But they have to scan photos, so they rolled out client scanning to ensure that they're not storing encrypted child porn on their servers.


> Do you have any reasons to think that they're not scanning photos on their servers? AFAIK every major storage cloud does that, including Apple.

Apple already scans iCloud Mail for CSAM, but not iCloud Photos https://9to5mac.com/2021/08/23/apple-scans-icloud-mail-for-c...


"Medical misinformation" seems to be the trendy pearl clutching excuse of the day.


Dude every single comment in the post when this was announced was against it.


There were a few "why are you defending predators?" type comments.


And "I have nothing to hide!" type comments, from few voyeurists.


And the bUt yOU dOn'T uNderStAnd tHe tEchNolOgy! Really? A community of programmers, engineers, and scientists can't understand technology? It's so preposterous.


> from a few voyeurs

In this case, wouldn't that be exhibitionists?


There were several threads about it. As they multiplied there was more and more debate. The fact that there is debate at all is an indication of support.

It also mirrors what they are anticipating: an initial backlash, then less and less as they continually push for it, since they are literally getting paid for it and we have to fight back on our own time.


You should at least be somewhat charitable to the other side of this issue, which is that CSAM is the result of real-life sexual abuse and the market for it causes more. Unless you want a cop stationed every 10 feet you can't stop the abuse; you can only take away the incentive to do it in the first place, which is the theory behind most laws. The reason most people aren't constantly worried about being murdered isn't because we have super advanced defense systems but because the fear of punishment keeps people from doing it.

I'm aware that the majority of HN want the solution to be "do nothing" plus some quote about liberty and security, but you would probably change your tune if you or your kids were trafficked, starved, raped, and had the pictures and videos distributed to fetishists online. CSAM detection is the government's best option to kill the market for it and make it radioactive.

At least listen to the experience of human trafficking survivors before you say it's all a big conspiracy. Those scary signs on buses, trains, in women's bathrooms, and doctor's offices aren't a joke.


> The reason most people aren't constantly worried about being murdered isn't because we have super advanced defense systems but because the fear of punishment keeps people from doing it.

No, the real reason most people aren't constantly worried about being murdered is that most of the people we encounter aren't murderers.


> most of the people we encounter aren't murderers

Gods I wish this fallacy had a name; I guess it's a corollary of the Spiders Georg principle. The number of perpetrators is not proportional to the number of victims.

It's not true that 40% of men will sexually assault someone in their lifetime but it's nonetheless true that 40% of women will be sexually assaulted in their lifetime.


Fine, turn my statement around if you prefer. The reason people aren't afraid of being murdered is because being a victim of murder is rare.

But the statistic you're citing here conflicts with your earlier statement (which is the one I take issue with):

> isn't because we have super advanced defense systems but because the fear of punishment keeps people from doing it.

The "fear of punishment" deterrent is clearly doing absolutely nothing for the 40% of women who have been / will be sexually assaulted.

Fear of punishment does not work when "correct" execution of justice is so rare and arbitrary. That's my point. Police are corrupt. Prosecutors are corrupt. Therefore, there is no justice. Therefore, "fear of justice" doesn't work as a deterrent.

Even if the police were better, if all it takes is a few murdering, raping, psychopaths to do all of the damage, how could "fear of punishment" possibly work?

There is, however, fear that the bureaucracy will decide to come after you in a Kafkaesque horror show...


Jesus in what country? Even taking some very liberal definition of rape 25% is insanely high. Do you have some source on that?

EDIT: In the US we have a 2000 study quoting 5% and a 2011 study saying "nearly 20%" but their data include attempted rape which is a pretty important distinction. This is coming from Wikipedia though.


Changed it to sexual assault because you're right, the study I was going off of included attempted rape which made it higher, but also only counted penetration which is also kinda dumb. Switching to sexual assault eliminates some ambiguity.


Nobody is saying “do nothing” about all the depraved stuff some people do with respect to human trafficking, etc.

What people are asking is to not be treated as if they are human traffickers by Apple. If they deploy CSAM scanning, they are in effect saying, “sorry world, we suspect all of you are trying to traffic humans, so we need to scan your images”.

That’s not how law enforcement should work in a free and open society. You should be suspected of illegal activity first, then law enforcement can move in and do their job.

Once CSAM scanning is deployed, it’s much easier to expand its scope to include other forms of content, and Apple will have a much harder time saying “No”.

I was really hoping Apple would use the backlash against CSAM scanning to exert pressure on whoever is asking them to do this. “Sorry <large gov entity>, we can’t do that without destroying the decade of goodwill we built up with our customers. You’ll have to legislate this for us to do it”. That would have the advantage of at least a public debate.


Nice job making everyone who opposes totalitarian surveillance systems look like they don't care about abuse. This is exactly why children are the perfect political weapon: it makes people accept anything, and the few of us who resist are lumped in with the abusers.


So do detective work, track down the pedos, kick down their doors, shoot/imprison them, and free the children.

Don’t take away the privacy and liberty of hundreds of millions of innocent citizens.

What happened to all the people who visited Epstein’s pedophile island? Any consequences? Nothing so far! But the rest of the population better watch out, Big Brother is going through their photo libraries. It’s all a massive government overreach.


The whole reason these systems are proposed is because "detective work" is ineffective and the harm is ongoing. You gotta at least meet people where they are. Don't you think the level of policing required to actually find people who possess CSAM would also be a massive overreach? Or are you hoping that "just do more" will come to nothing in practice?

Prosecuting members of the ruling class in this country is a whole separate issue, and one that I'm sure we are in total agreement on sans the "well if rich people are above the law why can't everyone be too" take.


> Don't you think the level of policing required to actually find people who possess CSAM would also be a massive overreach?

Obviously. Criminalizing possession of data is dumb. It's essentially a declaration that some numbers are illegal. It's the exact same problem faced by copyright. Any attempt to actually enforce these laws at scale will be a massive overreach to the point of tyranny. They are incompatible with the computing freedom we enjoy today.


I don't buy this argument because you can make any law sound silly by reducing it to something absurd. Saying that the benefit isn't worth the trade in freedom is totally valid, but the quip about illegal numbers isn't super persuasive.

"Criminalizing possession of nuclear weapons is dumb, it's essentially a declaration that some molecules are illegal."

"Criminalizing hate speech is dumb, it's essentially a declaration that certain combinations of sound frequencies are illegal."

"Criminalization of murder is dumb, it's essentially a declaration that I certain locations where I store my bullets are illegal."


It sounds absurd because it is absurd. Think about the absolute control that would be necessary in order to stop our computers from working with arbitrary data. Obviously this logic cannot be applied to the physical world.


> "detective work" is ineffective

Source?

Detective work actually is effective. The people investigating already know where huge amounts of trafficking and abuse happen: in the CPS system. The problem is they have made it such a quagmire of rules, regulations, and gotchas that only the truly angelic and those looking to abuse children to make a quick buck are willing to get involved. I've got news for you: one of those groups is larger.


Somewhat disturbing content following!

> The whole reason these systems are proposed is because "detective work" is ineffective and the harm is ongoing.

In Germany, there recently was a news story about two journalists infiltrating and effectively shutting down an insanely large darknet forum for child abuse material.

That story exploded, because evidently law enforcement could have done a lot better: all the journalists did was crawl for webhoster links and report the content there - the hosters immediately followed through and terabytes of abuse were deleted. The admins of the forum even spoke with the journalists and told them literally no one had cared before, and the whole userbase was caught by surprise.

Not-so-much-fun facts: IIRC that site had millions of accounts registered; apparently at least 1% of men are attracted to children; it's been mostly non-commercial, user-made abuse material - strongly motivated by that community's demand.

The journalists concluded: it's not enough to prosecute individuals and gather evidence. It is absolutely imperative to disrupt those communities too, as those sick people don't just consume abuse material, they very much actively promote the abuse itself in their interactions. Now, it's not a single-hit, lasting solution, but requires constant effort. However, terabytes of abuse material are not moved in an instant, and they are surely not delivered via Tor to literally millions of people. Apparently it takes about two people who know Python to take down a platform like that. (The argument about destroying evidence is moot, as you can easily do both, within law enforcement capabilities.)

German podcast, interview with one of the journalists; articles in shownotes: https://lagedernation.org/podcast/ldn269-bka-laesst-bilder-v...

TL;DR: Before everyone's privacy and the democratic distribution of power get compromised, maybe let's make sure law enforcement is actually doing their job first.


You are making it sound all philosophical when the HN crowd is already philosophically behind finding an appropriate solution to the CP problem. The “problem” is not the problem here; the currently proposed solution is. See it critically, not emotionally.


See this is always how online discussions go.

* "CSAM is -- nothing more -- than a way for governments to establish a totalitarian rule!"

* "Oh come on, that's really uncharitable to the people implementing CSAM detection, at least present the argument for the other side in the way they view the issue."

[loud screeching of goalposts...]

* "Of course we acknowledge the problem of sexual abuse imagery, it's just that we don't like the CSAM detection as a solution... [aaand have no ideas for an effective privacy preserving alternative, and want nothing done in the interim while we search for the unicorn]."

There comes a point where, if you don't want to implement the best idea we've got to address the problem and don't have any other ideas, it gets harder to believe you really care about the problem.


> There comes a point where, if you don't want to implement the best idea we've got to address the problem

The assumption here is "something must be done". The fact is that liberty and safety are intrinsically at odds. If we're going to make progress, the question we have to face is how many rescued children is worth how many units of liberty. It's distasteful to present the issue so bluntly, but it's the implicit calculation we do in other cases, e.g. preventing child deaths in car accidents. We all implicitly agree that some amount of death and disfigurement of children is worth the economic and social benefits of cars. Similarly, how much liberty should we give up here?


> "The fact is that liberty and safety are intrinsically at odds."

If you own an Apple iPhone, and you have photos on it, and you enter into a contract with Apple Computer to use their cloud service, and you upload photos to that service for them to host on their servers, do you have a "liberty" that forbids them from stopping you abusing the service in egregious ways? This isn't a matter of freedom and liberty while you can still refuse to use iCloud and refuse to use iPhones. By comparison, in your example you have much less option to opt out of travelling by road, using products driven to you on roads, and working with people who travelled by road to get to you.

> "it's the implicit calculation we do in other cases, e.g. preventing child deaths in car accidents. We all implicitly agree that some amount of death and disfigurement of children is worth the economic and social benefits of cars"

We don't all agree in the same way; the amount of agreement is continually changing. Where roads are dangerous, authorities lower speed limits and add traffic calming measures and citizens agitate for cycle lanes and pedestrian zones. Where vehicle exhaust makes for poor air quality, people demand cars must have catalytic converters and be subject to emissions control regulations. When pedestrians die in crashes, car manufacturers add external crumple zones. Around schools, people are employed to stop traffic and help children cross roads safely. We pay educators to teach children about road safety and pay for campaigns aimed at educating drivers about pedestrian awareness, drunk driving, distracted driving, the risks of speeding, using fairly graphic imagery of children being hit and killed to drive home the point. We mandate that older cars have regular checks to make sure they're still road worthy and still have functioning brakes and tyre tread and working lights for safety reasons.

I'm often arguing against cars and the harm they do to humans, but it's not the "either you ban cars or you hate children" boolean you're suggesting it is, where everyone who benefits from cars is happy to sacrifice children for those benefits, thinks the current amount of sacrifice is just fine thanks, and wouldn't change it if they could.


You can turn off iCloud. The CSAM scanner cannot be turned off or even inspected. Only evil is done in the dark.


Of course, but at least call a spade a spade: the units of liberty you're willing to trade to solve the problem are basically the metric of how much you care about it.

It's a fine position to take that the harm from that much invasion of privacy and government involvement in our private lives isn't worth it, but it means precisely that you care about the problem less than someone who is willing to make that trade.


I don't think that follows. One can maximally prioritize children while also believing that all children are better served by a society that protects liberty over immediate safety. How you weigh these issues turns on how you weigh the N-th order effects. It's probably not too controversial to say that eliminating all cars would harm children more and thus children as a whole benefit more from cars than their elimination. But it would be disingenuous to characterize this calculation as caring more about economic benefit than children.


You're conflating viewing CSAM with the physical act of exploitation, through some dubious reference to "incentives" and "the market". I don't think people who abuse children are doing so because it's economically lucrative, but rather because they themselves are dangerously sick in the head. But maybe you know more about this scene than I do. (see how that works?)

Once abuse has occurred and has been "documented", completely preventing the spread of that CSAM requires totalitarian systems of policing communications, which are what is being argued against here. Invoking the original abuse as if the topic at hand would have had some bearing on it is an emotional appeal, not a logical argument.


And the defense of totalitarianism begins. Sure strangers 1000 miles away are definitely going to look out for you better than yourself.

> Those scary signs on buses... aren't a joke

They actually are. If you paid attention to Epstein, the people doing this have access to massive resources, and you don't actually see it happening in public for the most part. It's only very small independent traffickers, and no one reports them because they themselves are poor and don't have the time to deal with it on their hour-long bus trip to a minimum wage job. In fact, if you read about it, most of the people doing it are related to the victims, and if you report them there is little to be done about it because of that.


> The reason most people aren't constantly worried about being murdered isn't because we have super advanced defense systems but because the fear of punishment keeps people from doing it.

In this case, CSAM detection is the "advanced" defense system, not the fear of punishment.


Kinda? Detecting CSAM doesn't stop the victim from getting abused in the first place, which is the goal. It's there to increase the risk of getting caught and punished, and to make the photos worthless.


I agree with you completely. We know exactly how it works. What we object to is Apple acting as a police agent without a warrant. It's as simple as that. It sets up a precedent to do all sorts of similar policing (without reason for suspicion or a warrant) for everything from DMCA to thought crime. We are not luddites. We are also not suckers for the police state and authoritarianism creep.


The thing you haven't mentioned is that with no oversight, no transparency, no way to contest it, and no way to clear your name, the tyranny which wants to ruin your life doesn't need to build a CSAM scanner to do it. A tip-off to the police, a falsified claim from an "eye witness", or an "accident" putting your name on the wrong report will get the machinery rolling, and it will, as you say, roll by itself. See what actually did happen to Sir Cliff Richard[1], where "a complaint to the police" got his house raided and his name dragged through the media; despite not being arrested, not being charged, the case being dropped, him suing the media and winning, suing the police and winning, and the media apologising, he says his reputation will never be completely cleared - and it probably never will be, and it didn't require any specific tech.

"The System" didn't need to wait patiently for a big tech company to build a complicated system with dastardly moustache-twirling weaknesses before it could affect Sir Cliff, anymore than Big Brother in 1984 needed to wait for Winston to be seen taking action on the telescreen in his house before the system crushed him. If "The System" wanted to accuse you, it could accuse your car of having been seen on a nearby road at the time, your likeness being seen on CCTV, and if you get rid of licence plate recognition scanners and CCTV then it would be "an anonymous witness" seeing you in the area at the time or "overhearing you discussing" something.

> "Most people don't understand how life destroying a false accusation is"

Citation needed? Because we were forced to read 1984 and To Kill a Mockingbird in school and I'm pretty sure a lot of other people were as well; we've seen cult TV series like The Prisoner and The Fugitive focusing on the lives of the trapped trying to escape "the system". Blockbuster films have been made about the power of "The State" chasing people, such as Enemy of the State with Will Smith. Films in the pattern "wrongly accused must fight to clear his name" and "stalwart good guy fights the system gone bad" are practically genres on their own - The Fugitive as a film with Harrison Ford, Who Framed Roger Rabbit, Captain America: The Winter Soldier, The Shawshank Redemption, and, specifically about someone falsely accused by a child, The Hunt with Mads Mikkelsen, which is in the IMDB Top 250 films.

Are you also, for example, complaining about the system where doctors and social workers are mandatory reporters if they have any evidence or suspicion of children being abused? That's a "their word against yours" system which pervades the nation, has no oversight, has no objective criteria, no explanation of the workings of the brain behind it, and relies on humans when we know for certain that there are doctors who make mistakes and errors of judgement and at least some who would look the other way for money.

At least with the tech company CSAM scanners we can potentially have a society wide discussion about how they work in detail, whether that's reasonable, who could and should oversee them and how, what kinds of data cloud companies are obliged to know about their customers and what is reasonable for them to not know, etc. But even in tech circles we can't actually have those discussions, most of the attempts to discuss the details of how it works are drowned out by people who haven't a clue what Apple announced, misunderstand massive parts of it, conflate two or three different systems, don't seem to understand that other tech companies already do this with much less careful design, and cry tyranny and blame China.

Hypothetically, iMessage or any chat system could be built end to end encrypted, any cloud file sync or file storage could be. One end of the spectrum is that they should be. If that's your position then "the system" will probably work around you, that's the "opt out of modern life" position of the "return to the woods" lifestyle. You can't unmake billions of cheap ubiquitous cameras, you can't unopen Pandora's box, and you probably can't stop people emotionally reacting strongly to topics of child protection. If your reaction to that is "not listening, no no no" then it doesn't seem likely to sway "the system" very much at all.

> "CSAM is a dastardly and effective way to establish a totalitarian rule by fear of life in jail, nothing more.

America has effectively incarcerated a lot of people in for-profit prisons without needing this. China has effectively added a lot of population control by rule of fear without needing this. You don't need a high-tech murder-detector to make people fear being thrown in jail for murder.

[1] https://en.wikipedia.org/wiki/Cliff_Richard#Property_search,...


Australian police are allowed to secretly add data to your phone.

The Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020 gives the Australian Federal Police (AFP) and the Australian Criminal Intelligence Commission (ACIC) three new powers for dealing with online crime:

Data disruption warrant: gives the police the ability to "disrupt data" by modifying, copying, adding, or deleting it.

https://tutanota.com/blog/posts/australia-surveillance-bill/


You are correct. Having children already puts you in the system of totalitarian control. This is worse, but off topic. CSAM scanning is about taking that same incredibly effective method and extending it even to those who do not have children. You cannot even "opt out" by simply staying off cloud services; now your own phone is a spy against you.

"Think of the children" is a devastatingly effective way to slip past people's defenses and have them willingly allow their own subjugation.

Again, you are correct; I don't know anywhere to look for stats other than my own experience of how shocked people seem to be when some sort of legal issue comes their way, along with the "just hire a lawyer" comments you can find in any internet thread about law.

Yep, I'm aware of all the power abuse, but this is another extension of it, aimed at finalizing end-to-end digital control of the world.


[flagged]


Some valid concerns are:

* False positives - the visual hashing algorithm has already been broken and people were able to force collisions (a rough sketch of why appears after this list).

* Dictatorships demanding the addition of "problematic" pictures to the CSAM database, ie uncomfortable events they'd prefer the population to stop talking about.
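
Rough sketch of why forcing collisions is plausible against a perceptual hash but effectively impossible against a cryptographic one like SHA256. The toy hash below is something I made up for illustration, not NeuralHash:

    import hashlib

    # Cryptographic hash: flipping a single byte changes roughly half the
    # output bits, so deliberately forcing a collision is infeasible.
    a = b"original image bytes"
    b = b"original image byteS"
    print(hashlib.sha256(a).hexdigest())
    print(hashlib.sha256(b).hexdigest())  # completely different digest

    # Perceptual hash (toy stand-in, NOT NeuralHash): similar inputs map to
    # the same hash by design, which is the property that lets an attacker
    # nudge an unrelated image until its hash matches a target.
    def toy_phash(pixels):
        avg = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > avg)

    img1 = [10, 200, 30, 180, 50, 170, 60, 160]
    img2 = [12, 198, 33, 181, 49, 172, 58, 161]  # slightly perturbed copy
    print(toy_phash(img1) == toy_phash(img2))    # True: same hash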


Well the first point is a problem if real, because we don't want police to have to keep raiding people for no reason. But, after they fix the code that shouldn't be a problem. I thought it was a hash like SHA256?

The second point is unrealistic and seems like a last ditch attempt to shoot down a good idea.


I hate this thing as much as the rest of HN, but I think you are taking it too far.

CSAM detection is just a tool that matches image hashes. It can't make accusations. Matching hashes is not illegal, and a hash match on its own proves nothing; no prosecutor will take a case if it is the only thing you have. At most it will turn you into a witness. If there is an investigation, it can turn you into a suspect, but suspicion is not an accusation. I mean, if your car is parked next to the place a crime happened, you can be suspected, but it doesn't mean you are going to jail because of where you parked your car.

If, for some reason, you get wrongly accused because the system recorded a match and nothing else, it is not a problem with the tool; it is the much bigger problem of justice not working. Blame your shithole country if that happens, not Apple.

I am not saying that false accusations don't happen, but saying that the police have no other job than to prosecute is a bit much. The police have asked me questions a few times: once for a car that caught fire not far from where I was, once for a missing person, and once they didn't tell me the reason. Was I suspected? Maybe, I was in the "wrong place" after all, and yet I never heard from them again. If prosecution was their only goal, I would be in jail now.


> CSAM detection is just a tool that matches image hashes. It can't make accusations

Who validates the source content of the hashes? What stops the government from adding one tiny bitty hash that they know is uniquely present on, let's say, Assange's phone?


Again a "what if" scenario.

It is not trivial to identify a picture that is uniquely present on Assange's phone - without having access to Assange's phone, that is. And if you have access to Assange's phone, you probably have way more information than such a trick will give you.

But yes, it is a problem, and a reason why I dislike CSAM scanning. But from "allowing the government to track highly sensitive individuals" to "you, the average citizen, will go to jail", there is a huge gap.

Also, I think one thing many people missed is that a hash match is not enough. The only thing a match does is allow a reviewer at Apple to decrypt the matching file. If the picture turns out to be nothing special, nothing will happen.

In fact, that feature is much weaker than people make it out to be; it just puts Apple on par with what others like Google, Microsoft, etc. can do. I think the reason it got so much negative press is that Apple made privacy a big selling point. I think it serves them right - I don't like how Apple treats privacy as just a way to attack Google without really committing to it - but still, on the facts, it is a bit unfair.


This case reminds me a bit of the big stink with the Daraprim price hikes and Martin Shkreli being dumb enough to be the public face of it. That worked out pretty bad for him but also bad for everyone else, because now we had a guy to blame for it, instead of asking the harder questions like "why aren't drug prices regulated?" Utilities must go to the public utility commission to raise rates so why is the same not done for drugs? The federal government stockpiles oil in case of shortages, why is it not responsible for ensuring that drug production is continued?

The maddening thing about the Apple CSAM scanning controversy is that I still have no idea which legislation Apple is complying with, and it's difficult to find any reporting on that aspect. US? UK? EU? Clearly there is/are some government organization(s) with teeth long enough that Apple is not willing to go to court again, as was the case with the San Bernardino shooter. Point is, whether or not Apple has been dishonest about this whole thing, they are still just a distraction.


There's a grand bargain where tech companies are granted immunity from user generated content in exchange for moderating that content. Apple's move is to try and maintain that bargain while moving to E2E encryption.


> Apple's move is to try and maintain that bargain while moving to E2E encryption.

It could be. It makes sense as a precursor to it. However, unless I've missed something, at no point in the entire debacle did Apple say "Yes, we're going to be moving iCloud to E2EE, and this allows us to do it. If you opt into this, the gains are that you can enable full encryption for everything else, and we literally can't see it on the servers."

Apple, to the best of my knowledge, never said that, nor even implied it. It was, "We're adding this to what we can do now, take it."


> Apple, to the best of my knowledge, never said that, nor even implied it. It was, "We're adding this to what we can do now, take it."

Yup. There was a ton of speculation about Apple's motives - whether legislative or related to iCloud E2E encryption - but AFAIK Apple never confirmed anything outside of "we're doing this to protect children". I think if there was some other motive, Apple should have communicated it better.

It's interesting how Apple's normal ability to set the tone and framing of new features was undermined in this case by a leak. I wonder if Apple would have been more successful at marketing this if they were able to frame the debate themselves.


The "slightly paranoid and cynical" hypothesis I have is that the reason the whole set of docs looked like something pushed out at 2AM, and that the "Clarification" documents looked like they were written by people operating on a week of no sleep, is because they were. Apple wasn't planning to really bother documenting the "feature," and someone found something sufficiently objectionable to leak it. Maybe someone wanted to turn it on early or something, or the hash database had some "non-CP data" in it, or... I don't know, and probably will never know.

But they decided that releasing what they had was better than letting the leaks run wild - just, it turns out, the actual system was really that bad. The "clarifications" were a wild run of "Well, OK, how can we answer this objection? And that one? Oh, crap, this one too!" - which is about how the later documents read. "We just came up with this idea that might address your objection!"

I'd love to hear the inside story of it at some point, but accept I likely won't.

However, it's been enough to drive me off Apple products. I can only hope enough other people have done so as well to hurt their profits, though it's doubtful. :(


But why? The way Microsoft, Google, and Facebook do it is to run the CSAM scans server-side on everything you upload. Apple could have done that. They could have done it without announcing anything. They could also have put it on-device as a background service which scanned every photo you took or saved, or every website you visited.

Instead they put a non-trivial amount of effort into designing a much more complex system that doesn't require them to be able to see everything you upload, one that takes input from multiple countries to guard against a single organization adding meme images to it, one that has an adjustable threshold for some consideration against false positives, one that doesn't scan everything on your device even though it could, etc. The actual system was not "really that bad" as you say; considering what it was trying to do, it was a reasonably good attempt to do that. (Not flawless.)
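
To make the shape of that design concrete, here is a deliberately oversimplified sketch. The names, the threshold value, and the plain dictionary lookup are my own illustration; the real design uses private set intersection and threshold secret sharing so the device itself never learns which items matched, which this toy version does not capture:

    # Toy model of threshold-gated matching over the iCloud upload queue only.
    # All names and numbers here are illustrative, not Apple's parameters.
    THRESHOLD = 30  # matches required before anything becomes reviewable

    def scan_upload_queue(upload_queue, known_hashes, perceptual_hash):
        """Return photos eligible for human review, or nothing at all."""
        matches = []
        for photo in upload_queue:                      # photos never queued for iCloud are not scanned
            if perceptual_hash(photo) in known_hashes:  # database supplied by child-safety orgs
                matches.append(photo)
        # Below the threshold nothing is reportable; only at or above it
        # would the matching items be unlocked for human review.
        return matches if len(matches) >= THRESHOLD else []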

Apple already send notification of every program you run on macOS back to Apple HQ and sync your browser bookmarks and passwords through them, Google know everything you search for and which results you click and can track you through sites using Google Ads, Microsoft try to get you to send all your browsing traffic to Microsoft and report on executables picked up by Windows Defender. Compared to all these things, Apple's scanner design was tackling a more difficult topic in a more privacy-protecting way. I wouldn't be much surprised if Apple push their scanner to macOS as well as iOS, still only for images uploaded to iCloud and not more than that. I would be quite surprised if Microsoft pushed a similar thing to Windows 11 and only restricted it to scanning OneDrive uploads; I doubt that the money saved by running the scans on-device instead of in-datacenter would alone be worth the engineering effort of moving such a system from datacenter to device, as Microsoft would have to do, compared to Apple apparently starting from scratch. That is, if it appeared in Windows 11 I'd expect the reason to be that it's a whole-device scanner. I wouldn't be dreadfully surprised if such a system appeared in the Chrome source code and thence in Chromebooks, Chromium, and Edge. The next 10 years are probably going to include various pushes from governments and companies into more invasive tech; getting in with a design that is somewhat privacy-protecting and setting that as an expectation is not a terrible idea - especially if you assume they don't want to do nothing, or are not allowed to continue doing nothing.


They did not outright announce it but they dropped several hints that this is the direction they were going, enumerated here: https://news.ycombinator.com/item?id=28249905


The bargain was likely about appstore monopoly and antitrust law.

Apple published its CSAM-scanning plans exactly when all the news was about big tech monopolies, Congress was holding hearings with the CEOs, etc.


"We support E2E encryption and can't unlock your device"... except we put software on your device that scans for anything our algorithm deems illegal and then we route that data to the appropriate authorities.


There is no grand bargain - the bargain is simply that they can moderate and still be immune from liability for user-generated content. The default alternatives would be either not moderating (to stay immune from liability for user-generated content) or moderating and being liable.


Given that the CSAM system as documented catches way fewer cases than server-side scanning, in all likelihood Apple can't implement E2E - no government would let Apple use such an inefficient system by itself. The bargain probably requires invasive client scanning plus server-side scanning, or at least way more invasive client scanning to catch all the cases the current system can't.


If this were true, the right way to go about it would be to spin out iCloud into a separate company. Being one legal entity is what creates the larger situation where Apple is hosting content that Apple is also able to moderate. Splitting them up, Apple Software would be free to fully represent their users' interests with proper encryption, and iCloud would have zero ability to access the plaintext.


That seems like the kind of "hah! Gotcha!" loophole the authorities wouldn't fall for. For one, if they were allowed to do full encryption in your scenario and they wanted to, they could do it now; store data on Amazon S3, say. If they aren't allowed to now, they wouldn't be allowed to then either.

For two, encrypting on device means you can't log in to iCloud web and see your photos and share them with people. That's not fully representing all users' interests.

For three, some of "their users" are the criminals whom the system is trying to catch; implying all users are the same and want the same thing and Apple wants the same thing as all users is overly simple.


1) That isn't a loophole, but rather the usual state of affairs. Right now I can use rclone or a similar piece of Free software with any cloud storage service and keep the contents of my files private. The cloud service has no access to the plaintext content, and thus can't be construed as having a duty to scan the plaintext. (If they were given notice that a specific blob was CSAM, along with an encryption key that proved such, that's a different story.) A minimal sketch of the idea is below, after point 3.

2) If photos are accessible through iCloud web, then talking about encryption is irrelevant. That's just a standard cloud service which has the ability to scan on their servers. The Apple CSAM issue was remarkable precisely because Apple aimed to do the scanning on users devices before encryption that would make files unavailable through iCloud web.

3) Criminals still have rights. What we're talking about here is the right to computational representation.
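
(To make point 1 concrete, here is a minimal sketch of the idea with made-up filenames; rclone's crypt backend does the equivalent transparently. The key never leaves the client, so the provider only ever stores ciphertext:)

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # secret key stays on my machine
    cipher = Fernet(key)

    with open("photo.jpg", "rb") as f:
        ciphertext = cipher.encrypt(f.read())

    with open("photo.jpg.enc", "wb") as f:
        f.write(ciphertext)                  # only this opaque blob gets synced to the cloud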


The problems with drugs are entirely government-created: through the granting of monopolies on drug formulations via patents, and then through FDA licensing that prevents other manufacturers from producing already-approved drugs if theirs are not exactly the same.

As with COVID, we are seeing that the FDA fails to serve the public and is overly cautious. If we want an FDA, make it voluntary, separate testing for danger from testing for effectiveness, and at the end of the day let each individual do whatever they want to their own body.


>> Utilities must go to the public utility commission to raise rates

Which I have never seen a denial of, so I am not sure that is much of a check.


We know why drug prices aren't regulated. Pharmaceutical companies contribute millions every year to Congress critter PACs and election coffers. Only Democrats have tried lukewarm measures to reverse some of the damage. Essentially, pharmaceutical companies think they are unassailable.


>That worked out pretty bad for him but also bad for everyone else, because now we had a guy to blame for it, instead of asking the harder questions like "why aren't drug prices regulated?"

I would say it went very well for the pharma industry.


> That worked out pretty bad for him

I don't think so. He was incarcerated for unrelated reasons.


“Unrelated reasons”

The average American commits three felonies a day. I don’t think he was targeted by the feds for unrelated reasons even if the actual charges were unrelated.


Apple appeared to be doing it on their own, not complying with any legal requirement.


So far all the comments are predictably negative and imply that Apple will, at some point in the future, attempt to covertly implement the feature.

Another take would be that they took the extensive public criticism on board and changed their minds.


This is the reaction the tech industry has bred, with the only options ever being "Yes, now" or "Later". People are used to the industry not taking no for an answer, and so this pattern-matches to every other time the industry has "backed off for now" (see also: MS pushing Edge as the default browser, Apple boiling the frog with Gatekeeper, WhatsApp/Facebook data sharing, Google+ account migrations for YouTube accounts).


I have literally been spammed in the last month on my iPhone (even though I'm still on iOS 14) with Apple notifications asking me "Do you agree to the new terms of agreement? <Yes> <We will ask you later>" (that's regarding new iCloud terms). I was always careful not to misclick and not to agree. No matter what, a few days later I kept getting the same notification. They stopped for some reason after a few months, and now I don't know if:

1) I agreed by accident

2) They thankfully stopped nagging me

3) Or they implemented it buggily and, even though I kept cancelling, after a few tries they assumed I agreed.


I'm still tapping "later" on WhatsApp's ToS changes. I wonder how much longer I can get away with it.


Likewise. I've managed to keep that little shit show in limbo for several months now. Wonder the same thing.


A few weeks ago I was forced to accept or I couldn't keep using the app at all.


What have you switched to now?


I haven't switched, I accepted them because I don't really care. I just wanted to see for how long I could push it.


The moment Facebook bought them I started researching alternatives, and I am happy to say that a couple of hundred friends and family members of mine have switched; not a single person had tried to reach me on WhatsApp for months by the time I uninstalled it.


I don’t think cloud services can realistically do anything other than force all users to accept the latest license.

If they don’t, they end up having data from different licenses. If you have many of those, figuring out what you can and cannot do with data becomes a nightmare.

You could get questions such as “Can you make a backup in country C for user U? He’s under the 2019 terms, so in principle, yes, but alas, user V has shared content with U, and V still is under the 2017 terms. Also, U has been in the EU for the past weeks, so EU regulations apply, at least as far as we catered for them in the 2019 terms”

Not changing the terms isn’t an option either, as they have to be changed to cater for changes in laws. If you operate in 100+ countries, that happens all the time.


> If they don’t, they end up having data from different licenses. If you have many of those, figuring out what you can and cannot do with data becomes a nightmare.

That's their problem to figure out. If they can't then they should terminate customers who don't agree and ship them their data.

Obviously they'll never do that since it would be a terrible look and they'd lose customers, because they want to have their cake and eat it, too.

I am not sympathetic to how hard it would be for a corporation to do something. "It's too hard" is not an excuse, not least of all if they're a multi-billion dollar corporation.


We don’t use that kind of reasoning with other service providers.

If, for example, your bank changes their terms, your electricity provider increases prices, landlords increase rent, etc., they will inform you and often silently enrol you in the new scheme. Cancelling the agreement is always an option (and maybe not even that), but that is opt-out, not opt-in.

So, why would cloud providers be different?


My bank grandfathered me onto an account TOS for a decade before giving me a "move to our modern accounts or close" ultimatum; my electricity provider has legal limits on what the unit rate can be, and landlords are limited to one rate increase every 12 months, which in my city also has its amount limited. If my phone, internet, or TV company wants to increase prices, that voids any contractual lock-in period I may otherwise have been held to. I work for a B2B company and we do gate features depending on which revision of the TOS the client has agreed to.

In short, other countries outside the US _do_ hold other service providers to similar standards.


I couldn’t have said it better myself. If we don’t hold other service providers to the same standard then that’s our failing.


This isn't a revenue-generating initiative. It is different from your examples. Apple is trying to make two groups happy: governments and customers. Governments don't like encryption (assuming Apple starts encrypting iCloud photos with device-based keys) and consumers don't like governments snooping on them. If you were the CEO of Apple, who would you favor, knowing either group could cost you money? Governments could prioritize antitrust initiatives against Apple, and consumers could stop buying Apple products.


You could ship a mechanism that scans your documents but doesn't report anything.

Or do something equally pointless, but just as good as a signal.

I feel like so many of these unstoppable force meets immovable wall situations warp both functionality and discourse (on all sides) on said functionality.

This comes about because everything is getting so centralized. The government wants to be in everything, and Apple (stand in for all big tech) does too. This pits them against each other, with the consumer only acknowledged when there is vast consumer agreement and motivation.

Similar to the App Store: Apple wants total control, then governments use Apple's control to cut off access to particular apps. Consumers who understand the severe loss of freedom this creates (for them and for developers), both immediate and in lost innovation, don't like it. But there isn't a huge consumer uproar on that one ... so freedom wilts.

Bundling these basic features (cloud backups, app stores) with operating systems is creating an unholy government/platform axis that wins every time consumers are unable to drum up the motivation to speak as coherent allies pushing back.


> knowing either group could cost you money

I'm not a businessman, but I believe Apple's main customers are not governments, so I suppose that it's not good business for Apple to ignore their users' preferences. Governments in most of Apple's marketplaces change every few years, after all.


They pulled the scanning because of customer backlash. The fallout with governments is yet to be seen, especially when Apple switches to not having the keys to decrypt iCloud photos and documents, and moves iCloud backups to being encrypted. One high-profile crime, especially one involving children, and we’ll see governments propose escrowed keys. The UK is already proposing it. I am not saying it’s right or wrong. I do believe consumers' anger at Apple is the wrong target. If they want change, get governments to pass laws that guarantee encryption as a “right”.


> They pulled the scanning

They pulled documents about the scanning, but we don't actually know they pulled the scanning itself.

> when Apple switches to not having the keys to decrypt iCloud photos and documents.

We don't know that they are/were going to do that either. It's just a popular 'fan theory.'


If they weren't going to do device-based keys, why put scanning on the device? Scan on the servers. There would have been zero backlash if they did that.


Another possibility (or way to look at it) is that it worked poorly enough, and it was poorly received enough, that the internal factions within Apple that opposed it had enough leverage to kill it.

Sometimes, the right way to kill a project (politically) is to let it publicly fail. Killing a project before it fails requires much more of a fight. It could also mean that the very public failure of this project gave Apple leverage over whichever spooks were pushing for it in the first place.


> it could also mean that the very public failure of this project gave Apple leverage over whichever spooks were pushing for it in the first place.

This is my guess. I imagine it went something like this.

Congressman: "So tim, FBI talked to me and want some help. Please find a way to check all pictures on iphone"

Tim: "or no?"

C: "We'll make it worth your while"

T: "This goes against our marketing and history, it'll take a really big push. My team says we can manage X but we need to work with CSAM agency to handle optics.

C: "Great!"

T: "I was right, the public hates it and doesn't care about children, we're going back to our privacy stance in marketing because the backlash would out-due the political reward. We'll throw you under the bus if you push again on this. We totally want to help for real and not just because politics is a game and we have to play, but this cost is too high, even you can see this backlash and recognize we can't go threw with it. BTW thanks for giving us excuse to hire a few new expensive ML scientists from google or wherever. They got to write some nice papers."

C:"Gotcha. See you next week for gulf when we try again at something new? There is this israeli company i want you to meet"


We all know who it was -- NCMEC was getting on Apple's case because their CSAM reporting numbers were orders of magnitude lower than the other big tech companies. Someone in NCMEC upper management even wrote Apple nauseatingly perky congratulatory letters celebrating Apple for actually trying to save the children, and calling detractors braying mules or somesuch.


>Sometimes, the right way to kill a project (politically) is to let it publicly fail.

I feel this is definitely true most of the time, but in this case the cost to their public image/perception among the tech crowd was so high it was a mistake.


It may have been a mistake for Apple as a company to make this decision, but I think we can understand why the individuals within Apple might choose to let decisions like this play out and fail in public, even if they know it is going to fail.


Well, Craig Federighi seems to be happy with it. The software engineering leads of the features seem to be happy with it. I.e., I don't believe there was significant objection to the feature within Apple, at least not from those in power. And that is why they went with it. The objection obviously came later, after it was revealed to the public.


> Craig Federighi seems to be happy with it.

There is a big difference between being happy with it and not being unhappy enough to quit publicly, when part of your job is to spread Kool-Aid and look happy.

If I were against something like this, but not powerful enough to squash it (especially if a government had a hand in getting it pushed through), then I'd make sure to do a press tour and get it in every newspaper for all the wrong reasons, while taking an all-or-nothing stance to torpedo the project.

I can think of no other reason beyond utter incompetence why they'd announce it how they did. Apple is known for being secretive, but why did they do a press interview and say "sure, other three-letter gov agencies may need to add to the illegal photos database in the future" unless they wanted to really make sure the project would fail? Why else announce it in the same press release as scanning messages for similar content? It seems like the rollout was designed to be as suspicious as possible, tbh.


Never assume 4D chess when simple incompetence can do the job, because 4D chess is rare, and tends to fail when it confronts the unexpected. If you were trying to sink the project this way, how would it work out if some other, unrelated scandal had popped up and distracted the general public?


> If I were trying to sink the project this way

This should be re-written to say

> If [I were not influential enough to sink it internally, so I] were trying to sink the project [through bad publicity]

Therefore, the answer to your question is "darn, it didn't work. That sucks, but at least I tried".

Also...

> Never assume 4D chess when simple incompetence can do the job

In this context, it's very likely that a government is somewhat behind it. We know the FBI and Apple didn't get along. We know Apple has been doing things quid pro quo for China to stay on their good side. So it seems like we're already sitting at a 4D chess board, but no one is sure who's playing.

If a government says you have to do X on the DL, and you don't want to because it's bad, then a logical solution is to get the general population + news media to say "X is bad! We really hope X doesn't happen." Because then it's easy to show the government why you can't do X.


> The objection obviously came later, after it was revealed to the public.

Why is this obvious? Normally, when you have an objection to a company policy, you voice that objection in private to people within the company. This seems "obvious" to me, but to explain: (1) objecting publicly is against company policy and often considered a fireable offense, (2) employees have an interest in presenting a united front, (3) those who want the project to fail publicly want it to happen without interference from inside the company.


>public image/perception among the tech crowd

I don't feel like Apple cares at all about perception among the tech crowd. So many things they champion draw the ire of us.


I don’t think Apple has much choice in the matter, and I really wish more people understood why they did this in the first place.

https://www.eff.org/deeplinks/2019/12/senate-judiciary-commi...


Apple has stood up to unreasonable government invasions of privacy before. And won. They have the means. They have the skill. They have the incentive (their entire brand is arguably built around privacy), and their users have been pretty clear in voicing their opposition to CSAM scanning.

Some nebulous political references in opposition to encryption aren't the reason Apple did this in the first place. They had and continue to have plenty of choice in the matter.


> Apple has stood up to unreasonable government invasions of privacy before. And won.

Only when they knew they would win, as with the San Bernardino shooter PR spectacle.

They didn’t stand up against iCloud CN being hosted by the Chinese government, and they didn’t stand up against unencrypted iCloud backups.


Apple is in it for Apple.

As a corporation, they have no intrinsic morals at all, and privacy is nothing but a marketing term to them.

They don't want to lose China and $$, and they couldn't use and profit from people's data if iCloud backups were encrypted.

Companies are not your friends.


I would have thought they'd continue the practice but, because of the bad publicity, remove all mentions of it. Is it confirmed they won't scan user files? Because I would assume that they still do.

As for the negativity. That is based on experience with large tech companies.


I don't believe that they'll slip it under the radar at all. I think they'll brand it as something awesome and magical and just add it anyway. People, for the most part, don't scratch the surface EVER, so they'll just think it has a cool scanning feature built-in now.


They already scan your photos but the reason for scanning is important. Type 'cat' into the search and they'll show you the pictures you've taken of cats. However, they can't just go through your photos and use it for any purpose they want. They have to give a specific reason.


I meant that they won't hide what it will do. It will be called something benign and fluffy and it will be "for your protection".

That's what I meant.


You can't be that naïve?

> Update: Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.


What does it help if the creepy intelligence services are in the hardware?


I'd venture to say ease of use. I mean, AT&T had everyone's emails, but Facebook still makes it much easier to infer, understand, and expand surveillance. Getting targeted information might be doable in hardware, but mass surveillance might be much easier if implemented a little higher up, by a dedicated dev team which issues bug fixes and feature extensions, which can turn around a request from you in a fraction of the time, and whose capabilities are far more mutable. Hardware is more capable, but it's a lot more work.


This. There is absolutely no requirement for Apple to tell you they are checking the signatures of all your files.


It's proprietary software. There's no way to know what it's doing. Better to assume the worst.


Somebody above found a quote saying that plans are still to go forward with it eventually


It's closed source software - what makes anyone think it isn't already implemented?


It’s easy to tell which - is E2E still there? Then scanning is still there too.


Well, it’s mostly easy to tell since we still have researchers decompiling and scrubbing through the OS to see what’s in it and what it does[0].

https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX


If Apple would like us to believe they changed their mind, they might consider telling us they changed their mind. They haven't told us that, so why should we assume it?


They’re still a company with a PR and Legal team. Letting it die quietly means it stays out of the news cycle (as in, there are fewer articles about it being removed than there would be if they publicly announced they were wrong).


I would have more respect for a company or person that can admit to a mistake, apologize, and commit to not making the same mistake again. In this case Apple simply keeps people in limbo - without any public statement, you cannot be sure what they are planning to do with it. They might as well tweak it a little bit, rebrand it under a different name, and push it again wrapped in a different package.


> They’re still a company with a PR and Legal team

All the more reason to not give them the benefit of the doubt with shit like this. If they say something flat out, that has some weight since there could be legal repercussions for lying (in theory anyway.) But not saying anything and letting people infer whatever they want? That's how companies with PR and Legal teams like to cover their asses instead of outright lying. The company says nothing and lets their supporters invent whatever headcanon is most flattering. I don't go in for that.

Edit: The same thing happened when they let Apple fans infer that this feature was preparation for enabling proper E2EE. As far as I'm aware Apple never actually said that and their supporters decided to believe it without evidence, just a lot of wishful thinking.


People like us who do not want this kind of intrusion have other things to do and lives to live. People driving such initiatives do such things to make their living. They will keep pushing for it and eventually we will be too busy/forgetful/tired to stop it. Unfortunately these issues are never prominent points in the political discussion and get sidelined either by vested interests or people's ignorance/lack of information or combination of such factors. I wish I had the optimism to see hope in these matters.


Reminder that Internet Archive has a 2-to-1 Matching Gift campaign for donations: https://archive.org/donate

It's quite important to see what a given page wrote.


My prediction is that client-side CSAM scanning is indeed dead, but Apple will make the “easy” decision to perform server-side scanning. The problem is that this decision will lock them out of offering an (increasingly standard) E2EE backup capability. I don’t know that the Apple execs have grappled with the implications of this, long term. But at least it’s an improvement.


The only acceptable solution is full E2EE for all Apple services. Nothing else is satisfactory at this point. They need to make this step to regain their user's trust.



No law is on the books blocking encryption. Apple needs to nut up and ship full E2EE, daring the politicians to do anything about it. Let's see how big they talk when it's their political careers on the line.


I’m hoping they’re just going to announce it alongside e2ee rather than not doing it at all in favor of server side unencrypted scanning.

It was a mistake to announce it in isolation as seen by this blowback because most people don’t understand or care about the actual details and just loudly complain.

Similar to the dismissal of the privacy preserving Bluetooth exposure notifications. Dumb knee-jerk dismissals imo that end up with worse outcomes for everyone.


Maybe they can offer a choice?

You can pick

a) no backups

b) iCloud backups with E2EE and privacy preserving CSAM scanning as suggested by Apple (private set intersection, half client side, half server side)

c) iCloud backups, unencrypted, with CSAM scanning server side (like all other cloud services)

I know what I'd choose!


You're forgetting the last option, which myself and others use:

d) encrypted offline backups that never touch iCloud

I don't use iCloud for anything at all. Any sort of client-side scanner seriously compromises my privacy for no good reason. I am not a criminal. I don't deserve to be treated like one.


If you don't use iCloud, then the proposed Apple solution (b) does not affect you at all.


Thanks for raising the initial alarm on this.


> The problem is that this decision will lock them out of offering an (increasingly standard) E2EE backup capability

Is this a given?


It's not really encrypted if you can tell what the data is. The two capabilities are entirely mutually exclusive.


They've already been doing server-side CSAM scanning on iCloud pics for a while. Their issue was targeting all media that wasn't uploaded.


The system they proposed would only have scanned images in the process of being uploaded—so, the opposite of your 2nd sentence.

There was some reporting that Apple does server-side CSAM scanning, but based on the volume of matches they report, it must be extremely limited.

Apple never said it this way, but many people assumed that Apple designed this client-side scanning system so that they could then encrypt iCloud data with client-side keys, which would provide end to end encryption for customer backups… and prevent any server-side scanning. This is likely what Matthew is referring to.


From a read of the expert opinions on the CSAM topic when this blew up, it seemed like you had to be either all in, or stay out of active CSAM scanning/monitoring entirely, for legal reasons. They noted the match volume and came to the same conclusion.

With the attention, I don't see how Apple can delay taking some action here. It's a new external liability that can be attacked in different ways. Either they need to level up the server-side CSAM scanning to increase compliance, or they need to reimplement the client-side approach. The latter is probably much better from their perspective, for various reasons.

Ultimately, they might have been better off giving this a lower profile overall and effectively seeking the public backing of law enforcement to provide some cover. The authorities are going to get this in some fashion eventually.


> but based on the volume of matches they report, it must be extremely limited.

For reference:

> According to NCMEC, I submitted 608 reports to NCMEC in 2019, and 523 reports in 2020. In those same years, Apple submitted 205 and 265 reports (respectively). It isn't that Apple doesn't receive more picture than my service, or that they don't have more CP than I receive. Rather, it's that they don't seem to notice and therefore, don't report.

https://www.hackerfactor.com/blog/index.php?/archives/929-On...


Thanks, that is the reference I was thinking of.


I don't care. I already switched my business to Linux boxes. The last macOS in use is Catalina. All of my critical software runs under Linux. Figma is in the browser. Affinity Designer is running perfectly under Isolated Windows VM. If I ever use an Apple computer again, it will be with Linux. After 20+ years of using Apple computers, I don't find them interesting enough to risk my data being scanned, categorized, and processed against third-party criteria. Think different. Use Linux.


Hey, I just want to let you know that you're not alone; there are other sane people out there who aren't just grumbling and carrying on like the boiling frog.

Even though the new MBPs look really nice, I've got a Framework laptop sitting at home that I'm in the process of migrating to (that's $4k of MacBook Pro Max that Apple has missed out on).

And early next year, I'm moving off the iPhone platform onto probably a Pixel and switching the OS to something like GrapheneOS. It's going to be fun and experimental.

I'm also looking to hire some developers next year; the laptops provided will be Linux too.


My business operation is small, and I wisely postponed buying new computers for the office. I admit that M1 chips were the primary target. Apple lost north of 80k euros of business from me, now redirected towards Lenovo ThinkPad X1s and custom-built PCs.


What about the mobile OS?


Not OP, but check out LineageOS without Play Services.


Affinity Designer is running perfectly under Isolated Windows VM.

Could you share more details about how you set that up, please? Good graphics software, or the relative lack of it, is one of the few things still holding us back from shifting several daily-use machines to Linux. Affinity would be OK for that, but the only reports we found suggested it wasn't happy running virtually and performance was poor.



Did you switch because of the proposed CSAM detector?


I can't speak to the person you're asking, but I very much have eliminated Apple products from my daily use as a result of CSAM - though it was on top of a range of other things (mostly China related) that told me I can no longer really trust Apple.

I've scrapped out a perfectly good (and exceedingly nice to use) M1 Mini, I no longer use a 2015 MBP, I've shut down my iPhone (not quite for good, I still have to figure out how to move my Signal root of identity off it), and I no longer use an iPad for work reference PDFs.

I've replaced these with far less functional "But I'm making them work" sort of devices. An AT&T Flip IV for phone stuff (KaiOS, handles basic calls and texts, and not much else, though it does have GPS and maps, and can do email if you let it), a Kobo Elipsa + KOReader for PDF support, and I've spent a good bit of time tweaking a PineBook Pro into my daily driver laptop.

The general view I have of Apple at this point is, "Oh, we know what kind of company you are, now we're just haggling over details."[0]

To whine about Apple doing stuff and continue to use them says, "Yes, it's important enough to complain about online, but not important enough to alter how I use their products - therefore, not really important at all."

[0]: https://quoteinvestigator.com/2012/03/07/haggling/


It's funny, the Apple CSAM issue got me to de-google and de-MS as much as possible. I didn't use Apple, but I figured the others won't be any better. That worked very well in private life, professionally it is Windows and will be for the foreseeable future. That leaves Windows machines for the kids to run MS Office and Teams for school.


> I've scrapped out a perfectly good (and exceedingly nice to use) M1 Mini

Given that the CSAM issue turns out to be moot, I assume you feel regret at doing this.


Not really. It's forced me to evaluate what I actually need (not much) vs what I'd prefer (fancy, nice, expensive systems because... well, why, of course I need them!).

I'd love an ODroid N2+ with 8GB RAM, though. Twice the CPU performance of a Pi4 but it's pretty RAM choked with only 4GB.


I see, so you felt regret for buying a computer you didn’t need and claimed it was because of CSAM but that was actually a lie.

Now you are promoting the ODroid N2+ which is completely irrelevant to this thread.


I've had Mac desktops in my office for about 20 years at this point, to include a mirror door G4, a G5, a sunflower iMac, a 27" iMac, a 2018 Mac Mini (what an absolute turd for anything more complex than a static desktop, the GPU was horrid), and the M1 Mini. At no point did I really evaluate that my needs over time have changed, I just always had a Mac desktop, and, through most of that time, a Mac laptop. It was my default computing base, plus a range of Linux boxes.

The CSAM thing was the final straw for me with Apple, and as it turns out, my needs have rather significantly decreased over time, as the lower power hardware has increased in capability, though... sadly not as much RAM as I'd find useful.

It turns out that I don't actually need Apple hardware to do that which I now care to do with computers. And one of those replacements was a N2+, which, while a bit sharp around some edges, is actually a tolerably usable little Linux desktop.

You can call it what you want, I'm just describing what I've done in response to the changes in how Apple has chosen to treat their users.

The nature of the computing ecosystems have changed around me, such that I chose to make changes. I don't particularly regret the decision any more than I would selling a vehicle that no longer suits my needs when I move.


> The CSAM thing was the final straw for me with Apple.

A final straw which never actually happened. They in fact listened to users saying how unpopular this would be, and changed course.

> I'm just describing what I've done in response to the changes in how Apple has chosen to treat their users.

No, you are describing what you’ve done in response to a demonstrably false belief about how Apple has chosen to treat their users.


> “Update: Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.”

It's still coming.


Is it? What makes you think that?

Edit:

Please don’t retroactively edit your comment to make the comment that follows it look misplaced.

It is a dishonest tactic that makes you look like a bad faith commenter. Who can trust that anything you have written hasn’t been altered after the fact?

There is no reason you couldn’t have included that quote as a reply to my question instead of silently editing it in to make it look as if it has always been there.


I'm not sure why you're flying off the handle when I'm quite obviously not the poster above. I have no intentions of re-formatting my comments here, just informing you of the status quo right now.

> Is it? What makes you think that?

The Verge reached out to Apple[0] and Apple confirmed that their CSAM scanning feature is still coming.

> When reached for comment, Apple spokesperson Shane Bauer said that the company’s position hasn’t changed since September, when it first announced it would be delaying the launch of the CSAM detection. “Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” the company’s September statement read.

So yes, they still fully intend to push the update. You should probably take this less personally too, your arguments are only suffering by attacking people who are just trying to chip in their two cents.

[0] https://www.theverge.com/2021/12/15/22837631/apple-csam-dete...


> I'm quite obviously not the poster above.

You are in fact the one who dishonestly edited your comment. That’s what I was pointing out, and it was you and not the commenter before that I was replying to. We can see your name.

Trying to pretend otherwise is either delusion, or more dishonesty.

I agree that the update provided by The Verge suggests that Apple has not abandoned this plan yet.


What are you even talking about? Can you provide me a link to the specific comment that I had modified here? I'm genuinely lost.

In any case, that's not a refutation or response to the link I sent.


Here is the comment you edited after I asked the question in the comment that follows it. When you first posted it you didn’t provide a quote.

> https://news.ycombinator.com/item?id=29569282

I don’t need to refute your response. In fact after seeing the update I agree that CSAM is not yet defeated.

That doesn’t change the fact that your commenting tactics are dishonest. That is the problem here. You cannot be trusted not to edit upthread comments.


At no point did I ever edit that comment. Its contents have, since the time of posting, been a quote and simply "It's still coming."

I legitimately have no idea what you're talking about. This is either an extremely confusing attempt to gaslight me or you're mistaking me for someone else. In any case, I don't really know what else to add here. Do you remember what was supposedly the original contents of this comment? We're really getting nowhere here pointing fingers.


This isn’t true.

You posted “it’s still coming”, and only added the quote later after my reply.



[flagged]



How is the parent comment not a shallow reaction to the post? That was my point. It was the usual empty response that gets to be the top comment and sets the tone for the thread.


I beg your pardon; my responses can sometimes be categorized as provocative, but they are never empty. I always state my point clearly and give factual data.


Be kind. Don't be snarky.

Comments should get more thoughtful and substantive.

When disagreeing, please reply to the argument instead of calling names.

Eschew flamebait.

Please don't complain about tangential annoyances.

Please don't post shallow dismissals.

https://news.ycombinator.com/newsguidelines.html#comments


Beautiful is better than ugly.

Explicit is better than implicit.

Simple is better than complex.

Complex is better than complicated.

Flat is better than nested.

Sparse is better than dense.

Readability counts.

Special cases aren't special enough to break the rules.

Although practicality beats purity.

Errors should never pass silently.

Unless explicitly silenced.

In the face of ambiguity, refuse the temptation to guess.

There should be one-- and preferably only one --obvious way to do it.

Although that way may not be obvious at first unless you're Dutch.

Now is better than never.

Although never is often better than right now.

If the implementation is hard to explain, it's a bad idea.

If the implementation is easy to explain, it may be a good idea.

Namespaces are one honking great idea -- let's do more of those!

https://www.python.org/dev/peps/pep-0020/

Take heart in the bedeepening gloom

That your dog is finally getting enough cheese.

https://www.youtube.com/watch?v=Ey6ugTmCYMk


> I don't care. I [switched to Linux]

That's short-sighted and self-centered.

You don't care about whether or not other people's rights are violated?

You don't see how normalizing rights violations could lead to downstream effects on you?


We do care. However, what can we do about it? Can we somehow make the billion dollar corporation do what we want? Unlikely. Better to use systems that we actually control so that we don't have to suffer their tyranny.


Please, don't insult me and judge me so quickly. Read all my responses. You can check my posts back in time (If you wish).

The whole reason for my HN post activity (reading in the shadows from 2008-10) is CSAM. In the beginning of this issue nobody cared and tons of Apple fanboys down-voted every possible negative reaction.

Apple is the industry leader. Everyone will copy what they create. You can read "I don't care" as the emotional reaction of an individual who had invested tons of money in Apple, used Apple products, and evangelized thousands of friends and customers - only to realize that The Beauty is actually The Beast.


The truth is that nobody has a right to use Apple products. It's good that Apple listens to criticism but if they had decided to steamroller this through, they would have been within their rights to do so.


And your phone is what?

If it’s an Android, it’s probably got something similar in place.

On an iPhone, all you have to do is disable iCloud sync to avoid the CSAM scanning.


This is such a silly strawman. First, it is not related to the parent's point at all. Second, it is perfectly possible to run AOSP based OSes not tied to google.


I disagree.

>> First, it is not related to the parent's point at all

The post was about switching away from Apple computers. My (badly implied) point was that a lot of the effort is wasted if you still use an Apple iPhone...

If you skip using an iPhone, then fine, but the other options out there are likely worse.

>> Second, it is perfectly possible to run AOSP based OSes not tied to google

You can build your own Google-free etc. version, but again, most people won't because of the effort and the need to redo it for updates again and again. The parent post author probably has the means to do it, but it's another endless task they have to stay on top of. All other options (to my limited knowledge - please share if you know more) mean compromises in apps and functionality - unless you're happy to spend hours each week/month troubleshooting, setting up workarounds, and working with specific compatible hardware.

It can be done. But how much of your life will you want to spend on that? Reading other comments maybe it is a solved problem now - it hadn't been last time I tried.

If I assumed too much from the parent post then sorry.


I agree that most of this requires conscious effort. I think I was needlessly harsh too. My point was that if someone is making the conscious effort of changing their devices/OSes etc., then they have shown the interest and ability, and at least for them, there are reasonable options (i.e., not just theoretical ones).


LineageOS has near-nightly binary releases, you don't need to build or "redo" anything to keep up to date, other than a 5-10 minute update over adb about once a week.


A basic phone, and an iPhone SE (the old version) for banking apps. Never used iCloud. All my backups are done locally. It is not only about CSAM.

CSAM was the last straw. Before this, it was this: echo 127.0.0.1 ocsp.apple.com | sudo tee -a /etc/hosts


In your situation then CSAM scanning would never have been an issue.

It only applies when you have iCloud Photo Library enabled.


It is not about the implementation either. It is about a hostile attack on user privacy after years of advertising: "What happens on your iPhone, stays on your iPhone." "Privacy is King." "Privacy. That's iPhone."


>So we lost yet another feature which would have increased privacy.

With all due respect, there is one small detail: the search criteria are provided by a third-party organisation (funded by the DOD), without any option of public oversight due to the nature of the "sensitive" data set. This data set would be defined by similar "organisations" on a per-country basis.

In my humble opinion this "privacy related solution" is Stasi wet-dream on steroids.

Some governments liked it so much, that they wanted it to be extended.. https://9to5mac.com/2021/09/09/csam-scan-encrypted-messages/


The CSAM detection system didn’t allow that; it only included hashes provided by multiple governments.

The backlash was obvious from the outside, but it’s clear someone spent a lot of time building something they felt was reasonable. Even if Apple presumably just implemented the same system in iCloud, at least people became aware governments have access to such systems.


I'm not sure I can agree that it's clear someone spent a lot of time building something they felt was reasonable.

The entire technical underpinning of this solution relies on what are essentially neural image embeddings, which are very well known to be susceptible to all sorts of clever attacks. Notice how, within a day or two of the announcement of this whole system and its details, people were already finding lots of 'hash' collisions.

In the space of people and places that can train and implement/serve such embeddings, these issues are pretty widely known, which makes it very non-obvious how this happened IMO. Someone that understood all of these issues seems to have directly ignored them.
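
To make the collision concern concrete, here is a minimal toy sketch of perceptual hashing - a simple "average hash", not Apple's NeuralHash, and the file names are made-up placeholders. The whole point of such hashes is that visually similar images land on nearly identical bit strings, which is also the property that crafted near-collisions exploit:

    # Toy perceptual "average hash" - an illustration of the idea, not NeuralHash.
    # Requires Pillow: pip install pillow
    from PIL import Image

    def average_hash(path, size=8):
        """Shrink to size x size grayscale, then set one bit per pixel above the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, px in enumerate(pixels) if px > mean)

    def hamming(a, b):
        """Number of differing bits between the two hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical file names: a small distance means "probably the same picture",
    # which is exactly what an attacker tries to force with a crafted image.
    h1 = average_hash("original.jpg")
    h2 = average_hash("slightly_edited.jpg")
    print(hamming(h1, h2))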


The government image hashes are supposed to be secret information; any crypto system is vulnerable if you assume the secrets are suddenly public information. I am sure plenty of cryptocurrency people would object to the claim that the system is insecure because anyone with your private key can post a transaction.

More importantly, hash collisions result in manual review by humans at Apple, hardly a major issue. This is also a safety measure protecting users from political images being tagged in these databases.


Reasonable and logical: it's a similar idea to a locally hosted virus scanner scanning all your documents...


Most people have misunderstood how this system actually worked. I read every paper carefully, and this was a very innovative system to protect your privacy IF we compare it to existing CSAM solutions. Of course the best option is not to scan at all, but that is not really an option.

Only those pictures which were about to be uploaded to iCloud would have been scanned, and Apple would get information about image contents only if they were flagged. The phone is a black box and it scans your files all the time anyway, sending metadata to Apple, e.g. for the Spotlight feature and photo albums, or simply when syncing your data to the cloud. There is a massive AI behind that Spotlight feature. Nothing would have changed; just a different kind of information would have been flagged, but this time encrypted.

The major improvement was an E2EE-like system for Photos. Currently they are not E2E encrypted in the cloud; they are plaintext for Apple. The iOS 15 beta also had encryption for backups, but it never reached a public release after CSAM detection was delayed. So we lost yet another feature which would have increased privacy. "But it happens on my device" is not really a valid argument, since most people don't understand what happens on their devices in the first place. Even AV engines send all your data to the cloud, you can't opt out in most cases, and it applies to every file.


> I read every paper carefully, and this was a very innovative system to protect your privacy IF we compare it to existing CSAM solutions

I did the same and I agree with you with one small caveat. For someone like me, who does not use iCloud or any other cloud service, on device scanning seriously compromises my privacy. I want to be clear that I am not doing anything that is currently illegal in my home country, but that could change under a more oppressive government. I know my opinions would definitely get me in trouble in some countries if I happened to live there.

> Of course the best option is not to scan at all, but that is not really an option.

Why isn't it an option? I know of no legal requirement anywhere that mandates Apple scan for CSAM.


> I did the same and I agree with you with one small caveat. For someone like me, who does not use iCloud or any other cloud service, on device scanning seriously compromises my privacy. I want to be clear that I am not doing anything that is currently illegal in my home country, but that could change under a more oppressive government. I know my opinions would definitely get me in trouble in some countries if I happened to live there.

At least in the current solution, scanning was an integrated part of the iCloud sync pipeline, and therefore this would not have affected you.

> Why isn't it an option? I know of no legal requirement anywhere that mandates Apple scan for CSAM.

Apple is too big, and they try to foresee politicians' behavior. With "good will" solutions they can adjust how they scan or how they implement this. With regulation it would be more prescriptive and might, in some cases, even restrict encryption entirely. This is quite a difficult topic at Apple's scale, and I don't know if anybody really knows what exactly is already required and why.


An innovative solution to the wrong problem.

What exactly does one achieve in nailing the local pedo for old content? If he's stupid enough to be uploading it to iCloud he's stupid enough to get himself busted in all manner of other ways. The real problem is of course criminals who are currently engaged in the production of CSAM. Which Apple's scheme does nothing to address.


It is essential to stop the sharing of those files no matter what. It might not help with getting them caught, but those who buy these pictures or find them elsewhere might try to share them as well. They might not be tech guys, so they might try to use some familiar service. And yes, there are many people like that. iCloud would be flooded if no action were taken against sharing.


So you are saying that Apple has access to all your data ( = no privacy ) but this "app" will improve your privacy by scanning all your files.


It is not scanning all of your files, only those which will be sent to iCloud. Scanning was part of the sync pipeline, so it can't even scan other files.

But yes, currently Apple can access all your images on the cloud, but with this change they would have been E2E encrypted, hence improving privacy.


> But yes, currently Apple can access all your images on the cloud, but with this change they would have been E2E encrypted, hence improving privacy.

Apple never said this would be the case. It's pure speculation from people on the internet.


It was all over the spec; the whole protocol was about encryption and "Apple can't access your images". Otherwise the whole flagging thing was pointless.


All cloud platforms scan for CSAM. Apple was going to move the scanning task off their servers, where it happens now, onto Apple devices, and only as the devices upload to Apple's cloud. If you use cloud services, your info does not stay on your phone, does it?


After years of using Little Snitch to defend myself from Apple telemetry, it is not an option for me to "trust" any form of closed source software.

I can ditch my iPhone at any time, but Apple wanted CSAM to be part of Mac OS and this is unacceptable.

Anyway, I am glad that they tried. This was a wake up call with positive financial and technological outcome for me and my business.


> I can ditch my iPhone at any time, but Apple wanted CSAM to be part of Mac OS and this is unacceptable.

It is already on macOS. It sends hashes of every unknown file for "virus protection purposes"; only the name and the hash database are different. How does that change your privacy?

And to be fair, they never said publicly that it would come to Mac OS.


>These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.*

http://web.archive.org/web/20210807010615/https://www.apple....


Except that you were just wrong and there was no attack. When people complained that this was over the line, Apple didn’t do it.


No, the specific controversy was about on-device hash scanning which used child porn as cover, when in fact it is exactly the feature that China et al. wants in order to make sure people aren’t sharing any political memes etc.


for now? You can't really guarantee anything with companies when they start rolling things out like this.

The fact that they went so hard for the feature without really considering how people felt about the invasion is important.


Without considering how people felt? They cancelled it! The whole process was about getting feedback on it.

Were you consulted on how Google implemented it server-side with Google Photos? You likely didn't even know!

Only one company chose transparency here.


An Apple VP implied anyone who opposed it didn't understand it. And endorsed an NCMEC director calling them screeching voices of the minority.[1]

[1] https://9to5mac.com/2021/08/06/apple-internal-memo-icloud-ph...


> They cancelled it!

No they didn't [0]

> When reached for comment, Apple spokesperson Shane Bauer said that the company’s position hasn’t changed since September

> Crucially, Apple’s statement does not say the feature has been canceled entirely.

[0] https://www.theverge.com/2021/12/15/22837631/apple-csam-dete...


> They cancelled it!

I doubt it. It will be quietly implemented. If a techie digs in far enough with RE to prove it, general society will treat them like they wear a tinfoil hat.


> The whole process was about getting feedback on it.

What a load of BS. Apple told us it was going to happen, and then quietly canned it when there was backlash. There was no intention of "consultation" or "getting feedback" at all.


Why would they bother ? The PR from actually trying to help users has been negative.

So instead they will just continue to scan your photos server-side.


> So instead they will just continue to scan your photos server-side.

Yep, and because of that we are not getting E2E services (Photos, backup), which would have been possible with this new technology.


E2EE services are absolutely possible without this technology. What are you saying?


They are possible, but they will not be implemented at Apple's scale without CSAM features. Politicians are already trying to mandate backdoors.


> It only applies when you have iCloud Photo Library enabled.

For now. There are no technical limitations to Apple's proposed solution that preclude it from being always on. It's a policy decision only, and you can be sure there would be immediate pressure to force it to scan full time.


Android has plenty of open-source distributions without such features. They don't save you from a baseband backdoor, but at least the OS itself isn't your enemy.


How about GNU/Linux phones, Librem 5 and Pinephone? Not well polished yet but running desktop OS with all its possibilities.


Those phones are in no way usable as day to day phones.


I strongly believe that this feature could have broken Apple's brand irreparably. 99.9% of us are not pedophiles, but personal devices that snitch on you to the police (with extra steps) are something that most people will not accept.

The proposed database for snitching was CSAM, but once the system is there, the devices become pocket police that can receive any kind of scan-match-snitch instructions. This would be the point where humanity embraces total control.

That said, an offline version might be workable. Think E2E encrypted devices that can be scanned against a database with physical access to the device, upon approval from the device owner. Imagine a person being investigated through standard police work: it could be used to prove the existence, or the lack of existence, of the materials without exposing unrelated and private information, if the person agrees to cooperate.


> are more

Typo of abhor?


Yep, thanks


It's removed from marketing material, but is it removed from the code?

Since the OS is proprietary, how do we know they didn't go for it anyway?


> Since the OS is proprietary, how do we know they didn't go for it anyway?

Reverse engineering is a thing.


Good luck not being more than half a year behind the latest release, with decent accuracy. That really takes some effort.


An expensive, hard, long and unreliable thing.


Have you reverse engineered iOS?



I haven't reverse engineered iOS itself because I've never had any reasons to, but I have reverse engineered third-party iOS apps (dumped from a jailbroken device) and some parts of macOS.


How do you know it’s not on android?


Who cares? We weren't talking about Android right now. No need for "what about" deflections of valid criticisms.


That may have been simple ignorance instead of whataboutism. The person asking it may genuinely not know how much easier it is to inspect Android (both the device and the open source). They may not know you can build custom images of Android either.


Android is open source? At least you have the chance to use it like that, some tinkering required.


The only difference between Android and iOS in this regard is that Android's frameworks are open source. The rest is exactly the same.


The AOSP code is open source. The Android and related apps running on your latest Pixel 6 is not.


> *Update*: Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.



So it's just damage control.


I never understood the big deal about this. If you use most cloud file synching services you are already being scanned, and if you don't, this (CSAM) thing wouldn't have affected you anyway.

The main rebuttal is that people claimed that Apple could expand the scanning to photos you aren't sending to iCloud. If you believe that though, then not using Apple devices (regardless of whether they implement CSAM or not) is clearly the move.

tldr: if you trust apple, this doesn't matter, and if you don't, you shouldn't use apple devices anyway.


Many people were upset or annoyed at the idea of their own device spying on them. I can understand that.

However, the major issue for me is the database of "forbidden" neural hashes which you are not permitted to have on your phone. Who controls what goes into that database? Right now, it's only CSAM and Apple have said they pinky-swear that if they're told to add other things they will refuse.

And maybe they will. Maybe Tim Cook will fight tooth-and-nail to avoid adding non-CSAM hashes to that list. But then maybe Tim retires in five years' time and his replacement doesn't have the appetite for that fight. Or maybe Apple gets a court order saying "you MUST also report this sort of content too". Maybe that list of forbidden hashes then includes political imagery, the picture from Tiananmen Square, known anti-government memes, etc.

Once the technology is created and deployed, the only thing preventing that sort of over-reach is Apple's resolve against governmental pressure; even if you'd trust Tim Cook with your life, I can completely see why that would give people pause.


> However, the major issue for me is the database of "forbidden" neural hashes which you are not permitted to have on your phone. Who controls what goes into that database? Right now, it's only CSAM and Apple have said they pinky-swear that if they're told to add other things they will refuse.

The problem with this is that we already know "National Security Letters with Gag Orders" exist, and we know that the US government, at least, doesn't mind using those.

"You will do this thing and you will tell nobody that you've done this thing."

The Lavabit rabbit hole is a case study here: https://en.wikipedia.org/wiki/Lavabit#Suspension_and_gag_ord...


> However, the major issue for me is the database of "forbidden" neural hashes which you are not permitted to have on your phone. Who controls what goes into that database? Right now, it's only CSAM and Apple have said they pinky-swear that if they're told to add other things they will refuse.

This isn't what was being proposed. What was being proposed was that if you had a photo that resolved to a "forbidden" hash, then upon it being uploaded to iCloud, data would be sent to Apple. Presumably, after this happened some number of times, information would then be sent to the local government.

If you were not using iCloud to store photos then it wouldn't matter either way (according to Apple).


I don't think anything I've said contradicts that. Although Apple said they would only scan content which was uploaded to iCloud Photos, scanning still takes place locally on-device.


I’m not saying you made a contradiction - I’m saying that the files in question would’ve been scanned either way. The question is simply whether or not you trust and believe Apple. If you don’t it doesn’t make any difference whether it’s being scanned on device or not.


I'm sorry - when you said "this isn't what was being proposed" I must have mis-understood what you meant.


>Maybe Tim Cook will fight tooth-and-nail to avoid adding non-CSAM hashes to that list.

1) Tim Cook will have no knowledge of this when the government inevitably expands the hash list

2) Apple gets a list of hashes, not content, from NCMEC which is an unconstitutional quasi-governmental agency created by congress and funded by the government to bypass the 4th amendment.

3) Apple will simply get an updated list of hashes and has no reasonable means to verify what the hashes from any given government are.



Problem with something like this is the cat's out of the bag.

The code has been written, tested and merged etc. The project has reached a point where it's ready to ship.

Apple have done their homework and they will not release ANYTHING unless they think they'll either make money from it or at the very least, not LOSE money from it. It's all about money, nothing else.

It's coming whether we like it or not now :(

And you can be sure that others will follow. It will be in Android at some point and possibly Windows and MacOS too.

I try not to be negative and cynical these days but the tech oligarchs make that incredibly difficult.


The "scanning" feature that underpins this has been deployed for years, and I dare say wouldn't even be considered unique for photo library managers. If you can search for "cat" and get photos of cats, then your photo library already has the tools necessary for CSAM detection.

I feel the fears about this implementation are largely overblown - equivalent features are not being misused on Google and Meta/Facebook's properties, despite their having a much larger audience and less procedural oversight. Rather, Apple is the outlier here, not just for being late, but also for having minimum thresholds that must be passed and an unskippable step of human review.

I'm yet to hear a coherent argument about the potential dangers of the system that understood the implementation, nor the state of image scanning amongst cloud providers - even this thread seems to believe that Apple is the first here. Apple's approach was the most conservative, Google's goes much farther: https://support.google.com/transparencyreport/answer/1033093...


The difference is that there is only one IF statement or local/remote config flag between this and scanning on your device even if you don't use the iCloud Photos feature.

It is super well documented that Apple ALWAYS follows the laws of the land, so the scenario is that a government adds some new hashes that might not be CSAM as you and I define it, but might be political stuff (and the citizens can't check). Also, now that the code is there, Apple can't refuse to add these hashes or to forcefully enable the feature when a legal warrant for that territory is presented.

A few more config changes and it could start scanning for text in documents too.
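
To illustrate that "one IF statement" worry, here is a purely hypothetical sketch - none of the names reflect Apple's actual code or configuration - of how narrow the gap is between "scan only iCloud uploads" and "scan everything" once the scanner ships:

    from dataclasses import dataclass

    @dataclass
    class Photo:
        queued_for_icloud_upload: bool

    def should_scan(photo: Photo, policy: dict) -> bool:
        # Behaviour as announced: only photos on their way to iCloud get scanned.
        # The concern: one (hypothetical) flag in a pushed config widens the scope.
        if policy.get("scan_all_local_photos", False):
            return True
        return photo.queued_for_icloud_upload

    print(should_scan(Photo(False), {}))                                # False today
    print(should_scan(Photo(False), {"scan_all_local_photos": True}))   # True after a config push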


This argument falls exactly into the point I make about not understanding the implementation or the status quo.

Here's what would need to change about the system to fulfil your requirements.

1. The local government would need to pass a law allowing such a change.

2. The hashes would then need to come from the government, instead of from the intersection of two independent CSAM databases (sketched below).

3. The review process would need to be redesigned and redeployed to provide a lower threshold and permit the reviewers to see high-resolution images. Neither of these changes is trivial.

4. The reports would then need to go to the government.

5. What's stopping these same governments from requesting such things from Google and Meta - both of which have comparable systems with lower oversight?
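
On point 2, the safeguard Apple described was roughly a set intersection: only hashes vouched for by child-safety organisations in more than one jurisdiction would ship on-device. A toy sketch, with made-up placeholder hashes:

    # Made-up placeholder hashes, purely to illustrate the intersection idea.
    org_a = {"h_001", "h_002", "h_003"}   # e.g. a US child-safety organisation
    org_b = {"h_002", "h_003", "h_004"}   # e.g. an organisation in another jurisdiction

    deployable = org_a & org_b            # only hashes present in both lists ship
    print(deployable)                     # {'h_002', 'h_003'} - single-source hashes are dropped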

Apple doesn't "ALWAYS follow the laws of the land"; one can read about this in the recent "secret" agreement between Apple and China, which details the various ways Apple hasn't complied with Chinese requests (e.g. the requests for source code).


>1. The local government would need to pass a law allowing such a change

Sure, it is not hard, and existing laws can already apply. Did China have to pass a law like "any company has to update their maps software to conform with our local maps", or did China just show Timmy a graph with profits from China dropping, after which Apple said "yes sir!!!"?

> 2. The hashes would then need to come from the government, instead of from the intersection of two independent CSAM databases

Yep, the government will demand that a local database be used, and since the crimes are local, you need to send the reports to the local police, and a local Apple employee will have to do the checking, not some cheap random contractor.


Funny that you mention maps, because mapping information in China is heavily censored, and has been since 2002[0]. They even wrote their own coordinate-obfuscation algorithm so that you can't mix Chinese and western map data or GPS coordinates and get accurate positions on both sides.

Apple's response to the "what if governments force you to abandon the hidden set intersection stuff you wrote" thing is, "well we'll just say no". Which, unless Apple plans to pull out of hostile jurisdictions entirely or levy their own private military[1], holds no weight. Countries own their internal markets in the same way Apple owns the App Store market.

[0] https://en.wikipedia.org/wiki/Restrictions_on_geographic_dat...

[1] which should scare the shit out of you even more than the CSAM scanner


Did you miss the reality where Apple says yes to China all the time? Including maps and storing iCloud data in China?

The CSAM scanner does not scare me as it is today, but it is a first step: next year it will scan more files, then Google will copy Apple (willingly or not), and then most PCs will do the same thing.

Apple just normalized the idea that it is fine to have a proprietary file scanner/"antivirus" installed on your device with the following properties:

1. it can't be stopped or removed

2. it gets hashes from government agencies via online updates

3. matches will be reviewed by someone Apple says is trustworthy, and might get sent to your local police

4. the scanner has flimsy excuses to exist, when it would have been extremely simple, and non-controversial, to scan the images after the upload rather than before.


How do you know that there isn't already a secret order in place to look for a particular list of hashes? Perhaps something that has been sealed due to national security?

If it's triggered, send the details to a particular group of individuals with certain clearance to view and verify.

If they have the ability to do this, they will definitely do this!


Why would Apple have released a whole white paper on the feature if the plan was… to do it in secret? Clearly they could have gone the Google route: start scanning and not tell anyone about it. It's so odd to blame them for something they haven't done.

Of course they could decide later to implement it! Just like any company could choose to do that! It's the whole point: your phone is closed source. But clearly Apple didn't pick that route; they announced it BEFORE shipping and asked for feedback! And you're blaming them for that?!


>Why would apple have released a whole white paper on the feature if the plan was

Isn't it obvious? Some guy using an exploit would install a debugger on his iPhone and find this scanning process, and Apple's PR would take a hit. But Apple could do this and gain PR by presenting it as a safety feature; the guy with the debugger will see that files are scanned and hashes are sent to the server, but he can't prove that the config file won't be updated in the future for everyone (or for some), that the hashes won't be pulled from a different database, that the threshold won't change, or that the is-iCloud-enabled flag won't be ignored.

What makes no sense - and what Apple failed to explain, and fanboys also failed to convince me of with any evidence - is why the fuck not scan the images on iCloud only, since we are pretending that images are scanned only if iCloud is enabled. (There is no credible evidence that this is part of any plan to offer some kind of super encryption that Apple has no keys for.)


I mean that the Gov will come along and say:

"See that hashing thing you have, this secret gov. order that you can't tell anyone about says you need to look for this list of hashes too. If you find them, you have to let these 5 people know and if you tell anyone you'll accidentally shoot yourself three times in the back of the head to commit suicide!"


> Why would apple have released a whole white paper on the feature if the plan was… to do it in secret?

Didn't that only happen after a leak?


I see that you don't live in Turkey. Or Lebanon. Or Egypt, or the United States, or Brazil, or any of tens of other nations where this exact scenario is not only plausible, but is of real concern.


Then you should already be gravely concerned because, as I've stated: Apple is last to the game here. Google and Meta each with significantly larger audiences than Apple are already doing this. Since Meta in particular is social media and the main source of casual political dis/information for the masses this would be the low hanging fruit that you're worried about.

The vague "this is a serious concern because my government is shit" requires a person to ignore a lot in favour of a little.


Sorry, my meta hardware is scanning client-side for CSAM? And you claim it's the same for google; can you tell me which Pixel is scanning client-side?


Which apple device is now scanning client side for CSAM?


>Which apple device is now scanning client side for CSAM?

I think you missed the OP's point. The grandparent found a bad excuse for Apple - that Google and Facebook do the same, so nothing is wrong - but the OP responded correctly that this is false: those companies don't install file scanners on your device, so the excuse is bullshit.


> those companies don't install file scanners on your device so the excuse is bullshit.

They don’t install file scanners on iOS because they cannot. They are happy with scanning my contacts and doing who knows what with my photos if I let them, though.


>They don’t install file scanners on iOS because they cannot. They are happy with scanning my contacts and doing who knows what with my photos if I let them, though.

Nobody says that Google or Facebook are good/saints, but there is a difference: my phone did not come with Facebook or WhatsApp installed, so journalists or activists can simply not install this crap.


> Then you should already be gravely concerned

I am concerned. I explicitly wrote "...not only plausible, but is of real concern."


What is the alternative? Companies deciding that the law does not apply, just because?

Reverse this for a moment: would it be acceptable that a Chinese tech behemoth decides US law does not apply to their American operation, because they are convinced that western democratic values are dangerous?

This is also a real concern. If you think that rule of law is a good thing, then companies must comply with local laws. The alternative is not a humanist utopia; it is the rule of unaccountable corporations.


>If you think that rule of law is a good thing, then companies must comply with local laws. The alternative is not a humanist utopia; it is the rule of unaccountable corporations.

In this case there is no law forcing Apple to scan on-device. They could scan iCloud only if the law forced them to do that, or scan only iCloud images that are public or shared if the law forced only that.

Before 1989, listening to a specific radio station was illegal, but the good part was that the radio device could not snitch on you. If someone had invented a snitching radio back then, I am 100% sure only those radio devices would have been legal to sell in my country at that time. So if you create a snitching device, you invite the bad governments to add laws afterwards to profit from it and make their lives easier.


> the intersection of two independent CSAM databases.

I don't think that offers meaningful protection. A malicious government could acquire or produce real CSAM, or at least a convincing fake of it, then modify that image subtly to match the hash of the political content they were targeting (I believe such engineered collisions have already been demonstrated.) They would then be able to get that manipulated CSAM into as many independent CSAM databases as they wanted, simply by uploading it to the web then reporting it to all those independent databases.


This isn't feasible for a variety of reasons:

1. Forced hash collisions are fragile in comparison to the genuine material. CSAM hashing is devised in a way to allow change in the image without significant variation to the hash. Forced collisions however don't possess that property as their true hash is different.

2. It's trivial to detect images with false hashes, and different hashing methods will not produce collisions.

3. The task you're proposing, in itself is not pragmatic nor effective: dissenting political materials are easily created fresh (i.e. no colliding hash) and exists in many forms that go well beyond an image.

4. Human reviewers again would see the pattern and adapt the system accordingly

5. CSAM database holders aren't idiots. They understand the significance of their hash libraries and work against manipulation from CSAM peddlers.


Points 1 and 3 assume a government wouldn't choose to target people who hold a specific document; I think this is a poor assumption. Hash fragility doesn't seem relevant if both the hashing algorithms and the target content are known to the government.

Points 2 and 5 I don't understand; could you expand on them? If the government produced manipulated CSAM that didn't exist in any other pre-manipulation form, how would the manipulation be detected? And supposing it were detected, would CSAM database holders choose not to add that image to their databases just because it was manipulated? If that were the case, it would give pedophiles a way to exempt their images from the database too.

Point 4 is neutralized by the government demanding local human reviewers for phones located in their country. China already does this with iCloud; they could demand the same for the CSAM reviewers, then coerce those reviewers or simply select them for party loyalty in the first place.


Oh, you mean Apple didn't bend over for China when it would have meant giving up their source code, but when it didn't involve losing their IP they did it easily? Great job, Apple; a stand-up move I can trust.


There's plenty of other examples - but people hunting strawmen are somehow blind.


Given that Apple wrote OSX, this scenario has been possible for over 20 years.

And it hasn't eventuated. Yet now it is something to panic over?


Until recently you had a lot of freedom on your OSX device: you could install and uninstall whatever you wanted, and you could have a firewall that blocked applications. On iOS, and slowly on OSX, these freedoms are being lost. Already, when you open your computer and launch an application, Apple is notified (for your safety); only a few more steps are needed before you get laptops with iOS-like lockdowns, and maybe an expensive PRO version for those who want the key to the gates.


You can blame them when that happens but it’s odd to blame OS X for being locked when clearly OS X is not… locked.


>You can blame them when that happens but it’s odd to blame OS X for being locked when clearly OS X is not… locked.

OSX is getting more and more locked down, and iOS is already locked, so why shouldn't people complain when things get worse?

I was just explaining that the excuse "OSX is 20 years old and it was not locked yet, so it will never be locked, q.e.d." is wrong. You can clearly see how things get more locked down in steps, and you don't have to wait until the last step to complain.

Keep in mind that you have better keyboards on Apple laptops today because people complained, the CSAM shit was delayed because people complained, and the Apple tax was lowered in some cases because developers and Epic complained. So complaining sometimes works.


> The "scanning" feature that underpins this has been deployed for years

Yes, I agree that your phone has the ability to find stuff like this based on ML etc. However, if their algorithm finds something that matches a hash of child porn, you'll be reported (after notification or something, can't remember).

Can you imagine what that would do to people? The risk of fucking this up and destroying someone's life is far too high. It's a stigma that will never go away even if it's proven false.

And what if the government decides that they want to ruin a political adversary? The technology is there to report them, rightly or wrongly, for child porn on their phone and ruin them. Again, proving that it was false will not be enough if it leaked out in the first place.

In my opinion, it's my phone (yes, yes, I know they all dial home to the mothership, I only lease it etc.) and I should have complete say over it and this is one more step away from that goal.

It's on the dystopian highway imo.


> matches a hash of child porn, you'll be reported

After 30 hits (so 30 false positives), and then the photos are manually reviewed by a human being. How did people never understand this? How on _Earth_ would people's lives be randomly destroyed by this? The reviewer would take one look (or 30) and immediately realize it's not illicit. Crisis averted. But the chances of those 30 false positives actually all being in someone's iCloud were astronomically unlikely.
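
For a rough sense of scale (the per-image false-match rate and library size below are assumptions for illustration, not Apple's published figures), the chance of an innocent account ever crossing a 30-match threshold is vanishingly small if false matches are independent:

    from math import exp, factorial

    p_false = 1e-6        # assumed chance a single innocent photo falsely matches
    library = 20_000      # assumed number of photos in the account
    threshold = 30

    lam = p_false * library   # expected number of false matches (~0.02)
    # Poisson approximation of P(at least `threshold` false matches):
    tail = sum(exp(-lam) * lam**k / factorial(k)
               for k in range(threshold, threshold + 20))
    print(tail)               # on the order of 1e-84 under these assumptions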

> And what if the government decides that they want to ruin a political adversary?

Then you're already fucked, and the child porn scanning software won't be what makes it possible to ruin your life.


>After 30 hits (so 30 false positives), and then the photos are manually reviewed by a human being. How did people never understand this? How on _Earth_ would people's lives be randomly destroyed by this?

Don't be naive; governments will demand that matches on the "special" hashes be reviewed by local government people.

There is a similar feature in the "child safety" package that scans for stuff in chat messages, and probably (maybe soon) in received documents, so the code is in place for a non-removable "antivirus/antiCSAM/antipiracy" on your device. You must trust both Apple and your government that it does what is promised, but you definitely can't just delete the executable to have peace of mind.


> Don't be naive; governments will demand that matches on the "special" hashes be reviewed by local government people.

Seriously, if a government is after you, they will get you regardless of the content of your phone. In a country with decent laws, just a match is woefully insufficient to demonstrate someone’s guilt, and they will need some proof. In other countries, laws do not matter so they can just get you without issue. They have done it for quite a long time and are doing it today.

All of this also points to the complete absurdity of this mass hysteria: it is insane to ruin someone’s life just because they have a picture on their device. It is just thoughtcrime, at this point.


>All of this also points to the complete absurdity of this mass hysteria: it is insane to ruin someone’s life just because they have a picture on their device. It is just thoughtcrime, at this point.

But it has happened: a guy was falsely accused because someone made a typo in an IP address; he lost his job and was forcibly kept away from his wife and children.

>Seriously, if a government is after you, they will get you regardless of the content of your phone

That is false. For example, before 1989 listening to a certain radio station was illegal here in Romania, but the government could not ask the people who made the radio for a list of everyone who listened to that station. Your claim is that if the government knows my name and what I did, they can fuck me up; that is true, but it implies the government already knows and has the proof. In this case, without the file scanner they won't know I own illegal political/religious material; only after the file scanner runs and reports me am I on the government's list.

I hope that is clear: tech makes the bad government's job easy. In this case you, as a random user, gain nothing from your device scanning for CSAM, but many lose from it. The question is why, if you gain nothing and this could be done with a simple iCloud scan? (Because for sure it will be extended to private messages and documents.)


> in this case you, as a random user, gain nothing from your device scanning for CSAM

This makes it seem like there is no upside at all to the CSAM scanning. There is: we stop more folks peddling child porn, hopefully making a dent in the problem.


>This makes it seem like there is no upside at all to the CSAM scanning. There is: we stop more folks peddling child porn, hopefully making a dent in the problem.

Apple could have scanned iCloud from the start and prevented this problem if it is so big. If they are already scanning, then there is no point in scanning on device only when iCloud is on; and if they are not yet scanning iCloud, then you should ask Tim why he is ignoring the problem. Google and Facebook have already reported a lot of abuse. Does Tim love CP?


And if the government makes it illegal to have a certain document, or a book, what happens then?

You can guarantee Apple will report you. They have to. It will be the law.

It will be used for human rights abuses. Guaranteed.


This falls under the "you're already fucked" part of my argument. If it becomes illegal to have a file, or a book, or an image, then the entire tech ecosystem we all use everyday is completely untrustworthy.

Is that an issue? Sure! Is it a new issue, or specific to Apple's implementation? Nope.


> If it becomes illegal to have a file, or a book, or an image

This is already true in certain circumstances.

> then the entire tech ecosystem we all use everyday is completely untrustworthy.

This depends on what tech you use. The degree to which you cannot trust it, or need to understand that it is actively working against you, varies by platform. Nearly all platforms undermine you in different ways. Some of those platforms do so in ways you're ok with.

> Is that an issue? Sure! Is it a new issue, or specific to Apple's implementation? Nope.

Agree to disagree. I don't mind cloud services (someone else's computer) scanning my data and working against me. I do mind my computer scanning my data and working against me.


I understand this. But I'm also aware of how processes work out over longer timeframes.

Not too many people get caught, review team gets downsized since it is not a revenue generator. Maybe we can replace it with ML!

How many stories do we have on HN about overworked employees reviewing horrible material? Or about how impossible it is to contact support?

Just because there is a good system in place now doesn’t mean it will stay there.


> After 30 hits (so 30 false positives), and then the photos are manually reviewed by a human being.

As much as I think the panic is way overblown, this bit is simply unacceptable, and they need to get nailed just for having thought this was a good idea. Nobody is to review personal photos; this is an insane breach of privacy.


>After 30 hits, the photos are manually reviewed by a human being. How did people never understand this? How on _Earth_ would people's lives be randomly destroyed by this?

Many people here simply assume all governments and corporations are simultaneously that evil and that incompetent. The aggressive cynicism of this forum and its reticence to engage with content before commenting undermines its pretense at informed technical acumen and civil discussion in many ways.


> After 30 hits (so 30 false positives), and then the photos are manually reviewed by a human being.

Sounds like a great gig for a pedophile.


I really just love these comments.

You do know that Apple is scanning your photos server-side, right ?

Just like Google, AWS, Yahoo, etc.: anyone that stores your photos in the cloud.


There's a fundamental difference between you sending files to a company server which are then scanned, and your own hardware constantly monitoring you for illegal activity so that it can report you to the authorities.


> You do know that Apple is scanning your photos server-side, right ?

Yes.

They have every right to. It's their servers.

However, this is MY phone (ok, not mine, I don't use Apple devices but you get my point)


It seems that many of the arguments against this system rely on believing governments (or similarly powerful groups) will strong-arm Apple and fabricate evidence.

I fail to see how this system (which I'm not a big fan of, to be clear) makes it easier if you assume governments can and will fabricate evidence. Doesn't seem you need an iPhone, smartphone, or to have owned or done anything at all in that case.


> equivalent features are not being misused on Google and Meta/Facebook's properties

The objections come when such systems are to be run not on corporate property, but on end user property. I don't see anybody objecting to corporations scanning content uploaded to the corporation's own computers. Scanning content on the user's computer, which is the user's property, is what unnerves people.

Furthermore, while Apple may have implemented this feature in a relatively safe way (I don't fully understand it so I'm not confident of that, but smart people here think it so I'll assume it for the sake of argument), I believe that the general public does not really understand this nuance and Apple normalizing the practice of scanning personal computers will clear the way for other corporations to implement scanning in shittier ways. If Microsoft were to start scanning all documents on a Windows computer by uploading those documents to their own servers, is that a distinction that would necessarily be made clear to the general public? I don't think so. The rhetoric will be "Apple does it and that's fine, so Microsoft can do it too". The technical differences in the two implementations will be understood in forums like this, but not by the general public at large.


I agree that there is a need for such changes to be public and debated; my gripe is with those who make arguments that are neither factual nor pay attention to the status quo (thus leaping to fantasy assumptions without basis).

For example, Google and Meta both scan all material on their services; both providers actually go further by utilising AI to hunt for imagery which may not have been identified yet. So the idea of this compelling others to scan is false: Apple is last here.

Similar to Google and Meta, Apple's CSAM processes only come into play when the material is uploaded to Apple's servers. The minutiae of where the scanning occurs is irrelevant because Apple's implementation is merely a way of performing the function without giving up the user's privacy (by scanning locally rather than sending all of your photos to Apple's servers for review.)

The scanning process is merely the addition of verified hashes to a system which already uses the same technique to categorise one's photos into searchable images ("waterfalls", "dogs", "bicycles", etc.). Because this is an existing system, there is no impact on performance/device usage.


> The minutiae of where the scanning occurs is irrelevant because Apple's implementation is merely a way of performing the function without giving up the user's privacy (by scanning locally rather than sending all of your photos to Apple's servers for review.)

This is where you and I part ways. I don't think that's an irrelevant minutiae, because it normalizes on-device scanning. This implementation or others may change in the future and opposition to those future implementations will be hampered by "Apple was already doing it" arguments, which will sound fairly plausible to the non-technical public.

Maybe I'm making a 'slippery slope' argument here, but I've become comfortable with such lines of thinking. My cynicism has been rewarded too often before.


Yet the on device scanning already exists on every platform. This is how the system makes content rapidly searchable.


The context of this discussion is on-device scanning that reports users to the police. It's tedious to spell this out; you should be able to infer it.


> I don't see anybody objecting to corporations scanning content uploaded to the corporation's own computers.

That depends on how "BYOD" the company is. There's an awful lot of companies that provide work machines that most of their employees reasonably use for some sort of personal work, be it checking their bank statements to see if they got a deposit or logging into a handful of services so their settings sync. In my opinion it is unreasonable to expect people to not do this, especially given that the alternative is quite painful with modern software. Corporate searches have a problem of not really having clear limits like a legal search might, so going through e.g. your synced web history when trying to find evidence of credential abuse (check your own audit logs!) is common.


My understanding is that the EU has stronger privacy protections for employees using their employer's computers. Personally, I keep all my private business off my company laptop; if I want to check my bank statements I use my phone (which is not enrolled in any company system; I refuse those offers). I don't sync a single thing between my personal computers and my work computer. But yes, I see your point. Maybe there is some merit to stronger privacy protection laws in this circumstance, since so many users of corporate computers lack the inclination, awareness, or discipline to keep their personal stuff off computers they use but don't own.


Photo classification takes place on device as well. The models are generated on Apple's side, but the photos on your device are classified by your device while it sleeps and charges. Check the last line on this page: https://support.apple.com/en-us/HT207368


In 15.2, they are scanning text messages for “nudity”. Oddly, no post here on HN about that.

My guess is it’s already quietly pushed, but the flag just needs to be turned on.

https://petapixel.com/2021/12/13/apple-to-release-nudity-det...


It's local only and they already scan your photos on-device anyway. All the images have been processed for content and you can do content aware searches on the images.


Is it local-only? If so, it has far fewer privacy implications than CSAM scanning.


Yes, local only as far as I understand. But how many additional steps does it take to turn the process into CSAM scanning? It looks really close, even without squinting. Caveat that I have no certain info though.


> But how many additional steps does it take to turn the process into CSAM scanning? It looks really close, even without squinting.

It’s a different feature that works in an entirely different way. There’s plenty of descriptions of how both features work out there, there’s no need to guess at what they might do.


There has been talk about it here, but it coincided with the CSAM scanning which got much more coverage.


Thanks, I guess I missed it. Too many things going on in this modern world to have a complete picture.


It hasn't gotten as much coverage so far but that could very well change. A lot of people aren't happy that iMessage now has an easily exploitable backdoor built in.


> Apple have done their homework and they will not release ANYTHING unless they think they'll either make money from it or at the very least, not LOSE money from it. It's all about money, nothing else.

> It's coming whether we like it or not now :(

Um, I think the public outcry showed them pretty clearly that they will lose customers and money precisely if they go ahead with this. I think the people championing this inside Apple (some rumors say Cook himself) were expecting this to be some kind of feel-good, "we're thinking of the children" PR piece - not the shitstorm it turned out to be.


Now they know to be quieter about this. It will be a quiet line item in an upcoming dot release. And not enough people will fuss (eventually).


I have a feeling it already came in 15.2. They are scanning texts for “nudity” in images. Not that they are the same thing, or being used in the same way. But I have a feeling CSAM scanning just needs its flag turned on to work.

https://petapixel.com/2021/12/13/apple-to-release-nudity-det...


Thank you for your “feelings”. Except your feelings have no basis in reality. Why would you blame a company for something that they clearly haven’t done?


I see where you're coming from, but Apple brought this on themselves by announcing this invasive spyware garbage. This is what you get when you undermine your users' trust.


Well, they're going to push it at some point. Do you think they'll tell you when they do, given the backlash?

Tell you what, put up 20k USD and I’ll go through the formal process of verifying binaries and whatnot. Otherwise, I think my “feeling” still contributes to the discussion. I don’t need to put in 200 hours to prove something I’m not misrepresenting as fact.

CSAM scanning will come. Apple probably won’t tell you.


> The code has been written, tested and merged etc. The project has reached a point where it's ready to ship.

Are we talking about anything that scans photos?

If so that’s pretty common… everywhere.

I have some technically capable friends who were up in arms about this situation.

Then they share their “vacation at X with friends “ Google photos album with me.

Cat isn’t just out of the bag….


> It's all about money, nothing else.

Not being funny, but I am not sure why this comes as a surprise. Of course everything Apple (and any other for-profit org) does is to make money.


If Apple kills the program they deserve some positive attention, but realistically the governments of the world will want this capability to scan people's data for things and will get it developed one way or the other.

It has gotten so cheap to run dragnets that the only rational responses are pessimism or demand for privacy by design.


> but realistically the governments of the world will want this capability to scan people's data for things and will get it developed one way or the other

Sorry to burst your bubble.

But all of the photo/cloud providers including Apple have been doing CSAM scanning for many years. And will continue to do so for the foreseeable future.

Apple actually tried to give users more privacy.


The US government, and probably others, scan through everything on the internet. That has been a generally accepted matter of fact since around 2015. I doubt there are any secrets on iCloud.

I don't like it, but I'm still going to complain glumly if things get worse. And at least I don't own the servers involved.


> Apple has quietly nixed all mentions of CSAM from its Child Safety webpage, suggesting its controversial plan to detect child sexual abuse images on iPhones and iPads may hang in the balance following significant criticism of its methods.

OR more likely suggests that Apple prefers to continue its program in secret.


Absent an on-record commitment that this surveillance has been abandoned, I don't put it past Apple to deploy it silently in a minor point-update and then reveal it later, similar to how they secretly converted iOS end-users' filesystems to APFS in versions prior to its official release. They could then point to nobody noticing the deployment as purported evidence that privacy concerns were overblown.


>> Apple Removes All References to...

First rule of scanning feature is don't talk about scanning feature.


Apple Removes All References to [undefined]


@dang

suggested title update as the important bit is cut off

Apple Removes References to CSAM Scanning from Child Safety Webpage


Apple is extremely sensitive to bad PR, this should not surprise anyone.


It worked very well with the iCloud encryption (not): https://news.ycombinator.com/item?id=25777207.


Does anyone know if the law enforcement file signature databases use SHA or MD5?

My guess is that they are all MD5 signatures.

So - my question is - can I get a file onto someone's device that will match a CSA image, but is actually not at all CSA? (i.e. basically a more serious "SWAT" attack because it ends with the person potentially spending years in jail and their life ruined)
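
For what it's worth, if these were plain cryptographic hashes (MD5/SHA), planting a look-alike file wouldn't work: a one-byte change produces a completely unrelated digest, and as far as is publicly known nobody can craft a file matching a *given* digest (even for MD5, where creating collision pairs is feasible, finding a preimage isn't). Image-matching databases typically use perceptual hashes instead, which is where the collision worry in this thread comes from. A quick illustration of how brittle cryptographic hashes are to tiny changes:

    import hashlib

    data1 = b"totally innocent picture bytes"
    data2 = b"totally innocent picture bytez"   # one byte different

    print(hashlib.sha256(data1).hexdigest())    # completely unrelated digests,
    print(hashlib.sha256(data2).hexdigest())    # despite near-identical inputs
    print(hashlib.md5(data1).hexdigest())
    print(hashlib.md5(data2).hexdigest())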


I still don't understand what the goal is here. It certainly isn't catching child abusers: any predator who isn't completely stupid will have switched to Android already. Who is paying Apple to develop this feature that has zero (or negative!) value to the end user?

Is it governments that want to scan end user devices for content declared forbidden by them?


I am glad that, finally, a healthy dose of skepticism is being applied to Apple. Even on a site like Macrumors.com.


> Apple Removes All References to Controversial CSAM Scanning Feature from Its...

I was hoping to read "Source Code" next.


Is Apple's CSAM scanning operationally different than Windows 10 SmartScreen [1]?

[1] - https://www.windowscentral.com/how-disable-smartscreen-trust...


Yes. One is application binaries, the other is photos.


Has anyone verified that it only scans binaries?


Probably. It's not just about 1 in 8 billion people having taken the time to look, it's also about 1 in (what I'm guessing is) at least 100 people at Microsoft not leaking it. Apple had a leak before it was even released. Secrets are hard to keep.


I don't get why they ever green lit this "feature". It's unpopular with at least some users, it flies in the face of their big push for privacy and it increases their liability. Who approves such a request!?


What I do not understand is why there is not more press on this.

They literally did not budge and shipped the product as presented months prior, yet there is no coverage.


If it is already implemented, how would we even know?

I believe that since iOS is doing ML scanning locally anyway, it is doing hash checks too.


The page also no longer references the iMessage forced snitching on children. Was that covered in another news story?


I am impressed and scared by how quickly media coverage and "public outrage" died down after the first few weeks following this feature's announcement. While I agree with the technology and its mission, I'm glad that Apple walked it back while they try to improve its messaging and privacy.


I presume Apple is going to come out and announce tomorrow or so that they have decided to rejoin QAnon in pursuing the sexual exploitation of children?

They were _very_ committed to their story -- that this tech was not primarily or just about spying on everyone for the NSA and other agencies and governments -- so it's not really credible that they just changed their minds.

Maybe they just decided to stop marketing it until they can drum up some fake news to support crushing dissent with this tech.


Let’s give all your data to governments under the guise of protecting children or fighting terrorists. It is like the John Jonik cartoon.

http://www.libertyclick.org/propaganda-for-government-contro...

http://jonikcartoons.blogspot.com/


> Let’s give all your data to governments under the guise of protecting children

But that's precisely what Apple's CSAM implementation doesn't do. It compares on-device image hashes with hashes of known CP images. It affords more privacy than other methods, which Apple is probably using anyway and which other cloud services are definitely using.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


>It compares on-device image hashes with hashes of known CP images.

No, it compares against hashes supplied by a government agency (NCMEC). Apple has no way to verify the hashes are in fact CP, as it is only re-hashing hashes.


But once it reaches some undefined threshold, it's Apple who reviews the content not the government. They don't have a way to verify the hashes, but they would have a way to verify the content which matches the hashes.

Presumably at this stage is where malicious hashes would be detected and removed from the database.


> it's Apple who reviews the content not the government.

An unaccountable "Apple Employee" who is likely (in the US and other countries) to be a LEO themselves will see a "visual derivative" aka a 50x50px greyscale copy of your content.

There is no mechanism to prevent said "employee" from hitting report 100% of the time, and no recourse if they falsely accuse you. The system is RIPE for abuse.

>Presumably at this stage is where malicious hashes would be detected and removed from the database.

Collision attacks have already been demonstrated. I could produce a large number of false positives by modifying legal adult porn to collide with neural hashes. Anyone could spread these images on adult sites. Apple "employees" that "review" the "image derivatives" will then, even when acting honestly, forward you for prosecution.


> and no recourse if they falsely accuse you

Of course there is. The judicial system.

(Although, to be clear. I don't live in America and I might be more worried about this if I did.)


Doesn't that put you in the position of suing apple after you've:

1) Spent who knows how long in jail

2) Lost your job

3) Defaulted on your mortgage

4) Been divorced

5) Had your reputation ruined

Money can't fix everything, and trusting the courts to make you whole years after the fact is a foolish strategy.


Yes but no one wants the finger pointed at themselves. Even if innocence is proven, someone will go through your files and you will have to deal with the law.

The recourse should be before this reaches the law.


> Presumably at this stage is where malicious hashes would be detected and removed from the database.

How, if 1) the original content is never provided to Apple, and 2) the offending content on consumer devices is never uploaded to Apple?


You're misunderstanding the proposed system. The entire system as-described only runs on content that's in the process of being uploaded to Apple -- it's part of the iCloud Photos upload system. Apple stores all this content encrypted on their servers, but it's not E2E so they have a key.

This entire system was a way for Apple to avoid decrypting your photos on their servers and scanning them there.

Hypothetically, if Apple implemented this system and switched to E2E for the photo storage, you'd be more private overall because Apple would be incapable of seeing anything about your photos until you tripped these hash matches, as opposed to the status quo where they can look at any of your photos whenever they feel like it. (And the hash matches only include a "visual derivative" which we assume means a low res thumbnail.) I say hypothetically because Apple never said this was their plan.

You can argue about whether or not Apple should be doing this. But it does seem to be fairly standard in the cloud file storage industry.
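
A toy simulation of the flow described above (hypothetical names, not Apple's code; the real design hides the per-photo match result behind private set intersection and threshold secret sharing rather than a plain flag and counter, but the gate is the same: nothing is reviewable until enough vouchers match, and even then only the low-res derivative):

    KNOWN_HASHES = {"hash_of_known_image_1", "hash_of_known_image_2"}  # stand-in for the vetted DB
    THRESHOLD = 30

    def make_voucher(photo_hash, thumbnail):
        # Runs on device, but only for photos already being uploaded to iCloud Photos.
        # (In the real protocol neither the device nor the server learns `matched`
        # for an individual photo; it only becomes readable past the threshold.)
        return {"matched": photo_hash in KNOWN_HASHES, "derivative": thumbnail}

    class Server:
        def __init__(self):
            self.vouchers = []

        def receive(self, voucher):
            self.vouchers.append(voucher)
            matches = [v for v in self.vouchers if v["matched"]]
            if len(matches) >= THRESHOLD:
                # Only now would a human reviewer see anything, and only the
                # low-resolution derivatives of the matching photos.
                return [v["derivative"] for v in matches]
            return None

    server = Server()
    print(server.receive(make_voucher("hash_of_vacation_photo", "thumb")))  # None: nothing visible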


I never heard that Apple was decrypting the original content at all. That implies that there is a team at Apple looking at child pornography all day. Not sure how that's even legal. It was my understanding that CSAM systems were simply hash based and 'hits' were reported to authorities. Do you have a source for them decrypting information?


Here's a summary of how their proposed system was supposed to work: https://educatedguesswork.org/posts/apple-csam-intro/ (if you want to skim it, search for mentions of "manual review" and "visual derivative")

I suspect the key would be that there'd be a team verifying that something is actually child pornography, because the system is a perceptual hash rather than a strict comparing-bytes so before someone looks at it they're not certain.


No thanks, I'll give up my iphone before it gets to this stage. Not an experiment that I'm willing to partake in.


There was a manual review step if the hashes triggered.


Exactly. Apple doesn't want to be in the unenviable position of making that judgement (is it a hotdog or not).

As a taxpayer and customer, I concur. I'm glad someone is doing that job. But I don't want it to be a corporation.


A third party reviewer would confirm the images weren't just hash conflicts, then file a police report, according to the documentation.


>third party reviewer

A "third party" paid by apple who is totally-not-a-cop who sees a 50x50pz grayscale "image derivative" is in charge of hitting "is CP" or "Is not CP".

I don't understand how anyone can have faith in such a design.


That's incorrect. That's the Apple reviewer. After that, it goes to further review by NCMEC, where it's verified. NCMEC is the only one legally capable of verifying it, fully, and they're the ones that file the police report.

So, to get flagged, you need many hash collisions destined for iCloud. Then, to get reported, some number must come up as false positives in the Apple review, and then some number must somehow fail the full review by NCMEC.


I envy your naïveté. My world would be so much less complex with your child-like acceptance.


You’re assuming quite a bit here. I never claimed to have faith in the system, and there could be problems with the review process, but it’s best to talk about how things are, from an informed perspective.


Also, can we create images that are not CSA but whose hashes match known CSA?

https://www.theverge.com/2017/4/12/15271874/ai-adversarial-i...

If we can, then hypothetically we only need to get such non-CSA images onto important people's iPhones so they get arrested, jailed for years, and have their lives ruined by Apple.

Disclaimer: I buy Apple products.


I have another problem with this. In a lot of jurisdictions virtual CSA images are legal, i.e. cartoons and images created entirely in CG.

These images can be 100% indistinguishable from the real thing. Without knowing the source of the images that they are putting in the database, how do they know the images are actually illegal?


How about we expand the list of known CP images, under force, to include many non-CP images.


Then the myriad other tech companies that scan the images uploaded to their platforms will be alerted to supposed illicit images being uploaded. This isn't new. No one's lives have been ruined by this and it's been the practice for years now.


>No one's lives have been ruined by this

This is a strong claim with zero evidence. The social costs of defending an accused perpetrator of this nature are insanely high.

While we're making unsubstantiated claims, I assert that intelligence agencies routinely exploit this. My only substantiation is that it's obvious and I would if I worked for them.


Ian Freeman is a controversial political activist whose pro-liberty, pro-bitcoin radio station was raided by the FBI in 2016 and had its equipment confiscated. It was widely covered in the news, he was kicked out of at least one community organization, and last I heard, they still didn't have all their equipment back.

No charges were ever filed, and the media outlets didn't bother to update their articles.

I think false accusations happen, especially to controversial figures, and assume most victims just don't want to call much attention to it.


Isn't this a separate issue to the one the grandparent was complaining about? I can't see anything about child porn related to this person, just a lot of talk about money laundering.

Also, are you sure there were no charges filed? Some cursory googling turns up articles including a PDF of the indictment [1] and more-recent articles seem to refer to an ongoing process [2].

[1]: https://manchesterinklink.com/fbi-descends-on-keene-and-surr...

[2]: https://nymag.com/intelligencer/2021/08/bitcoin-ian-freeman-...


We have literally no reason to think it's an issue. The burden of proof is on those who believe it's a threat, not those who believe it isn't.


"Burden of proof"? Seems like it's pretty hard to prove what intelligence agencies are doing.

Also, there's the fun "you can't sue us because you have no standing/can't show harm, and showing harm to prove you have standing would be illegal".



Why would they want to entrap people with fake child porn images? What would even be the point of getting people flagged? They would never be convicted anyway, since they are not actually storing illegal images.


It doesn't matter.

What matters is that the person is arrested, that their name is leaked to the mainstream news media, and that they're dragged through the mud - the same as seems to happen in literally every other child porn case.

The actual guilt or innocence doesn't matter at that point. Their lives have been sufficiently ruined, and, if they are actually innocent, the message has been clearly sent.


Do you have any examples of that happening, innocent people getting arrested for child porn? Never heard of it.

In any case, the police will not arrest anyone because they have only been flagged by the automatic system. As soon as Apple reviews the images that triggered the flagging, they will see that they aren't actually child porn, and hence they won't even contact the police.


I recently described an example: https://news.ycombinator.com/item?id=29567729


Seems they went after him for other things

https://nymag.com/intelligencer/amp/2021/08/bitcoin-ian-free...


As I stated in a reply above, I guarantee that this database contains a number of non-real-CP images already that are just CG.


The same is true of the technology that can find photos of cats on your device. It can be abused, but smartphones and cloud services are packed with technology that could be abused by a motivated bad actor. If you're worried about it, you're better off not using a smartphone at all.


Why do you not consider hashes of my photos to be my data?


That's an interesting question. I have no idea who "owns" hashes. Does the copyright on a photo transfer to hashes of the photo? Is a hash a derivative work? Regardless, I am sure that somewhere in the terms and conditions you give Apple the right to copy, move, and process your photos and probably hashes of them too.


Seems bizarre that this is what people take issue with and yet they're happy otherwise to hand over their data to FAANG companies. If you're concerned about privacy / data misuse you shouldn't be using these services at all and you should have been pointing out the issues years ago.



Data stored in the cloud should be considered public (at this time), but not the data on your device.


Just so everyone is clear here:

a) CSAM scanning only ever applied to photos being uploaded via iCloud Photo Library. It was never applied to photos that you weren't giving to Apple, nor to any other files.

b) The "but they could expand it to anything else" logic is baseless and frankly stupid. Why? Because they wrote the operating system. They could implement this feature in any of the 100+ daemons that are currently running as root and no one would ever know.

c) It was a better solution for user privacy than what exists today, i.e. Apple secretly scanning your photos server-side, which you gave them permission to do when you agreed to the iCloud EULA.


So what happens when China sends Apple some signatures of anti-Party memes? "Sorry, you can't view the actual photos to verify they are of abused children, that's illegal".


>CSAM scanning only ever applied to photos being uploaded via iCloud Photo Library

Which is everything in your camera roll, and last I checked, you can't pick and choose which pics. Additionally, saving a photo from the internet, or in some apps, merely receiving one (WhatsApp does this, at least), automatically puts it there.

So let's amend A to reflect reality:

a) CSAM scanning is applied to all photos an average iOS user will interact with if they have iCloud Photos turned on, which is the default

which boils down to:

a) CSAM scanning is applied to all photos an average iOS user will interact with



