I assume this is the piece you meant by "scanning of conversations":
>As mentioned, detecting ‘grooming’ would have a positive impact on the fundamental rights of potential victims especially by contributing to the prevention of abuse; if swift action is taken, it may even prevent a child from suffering harm. At the same time, the detection process is generally speaking the most intrusive one for users (compared to the detection of the dissemination of known and new child sexual abuse material), since it requires automatically scanning through texts in interpersonal communications. It is important to bear in mind in this regard that such scanning is often the only possible way to detect it and that the technology used does not ‘understand’ the content of the communications but rather looks for known, pre-identified patterns that indicate potential grooming. Detection technologies have also already acquired a high degree of accuracy, although human oversight and review remain necessary, and indicators of ‘grooming’ are becoming ever more reliable with time, as the algorithms learn.
That whole paragraph sounds a lot more terrifying and intrusive. At least they admit the flaws, but I haven't quite had the best experience with the EU's way of handling this stuff.
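For a sense of what "looks for known, pre-identified patterns" plausibly means at its crudest, here is a minimal sketch in Python: weighted pattern matching plus a threshold, with anything over the threshold queued for human review. The patterns, weights, and threshold are all invented for illustration; the real tools are proprietary and presumably more sophisticated.

```python
import re

# Toy stand-ins for the "known, pre-identified patterns"; the real pattern
# sets are not public, so these are invented purely for illustration.
PATTERNS = {
    r"\bhow old are you\b": 2,
    r"\bdon'?t tell your parents\b": 5,
    r"\bour little secret\b": 4,
}
REVIEW_THRESHOLD = 5  # assumed cutoff, also invented

def grooming_score(conversation: str) -> int:
    """Sum the weights of every pattern that matches the text."""
    text = conversation.lower()
    return sum(weight for pattern, weight in PATTERNS.items()
               if re.search(pattern, text))

def flag_for_human_review(conversation: str) -> bool:
    # The system does not "understand" anything; it only counts matches.
    return grooming_score(conversation) >= REVIEW_THRESHOLD
```

Note how literally "does not 'understand' the content" this is: a couple of innocuous matches can push an innocent chat over the threshold.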
> “becoming ever more reliable with time, as the algorithms learn”
I mean, considering Amazon has put kajillions of dollars into its learning algorithms, and yet here in 2022 those algorithms still aren't even smart enough to figure out trivial stuff, like that I'm probably not gonna buy a second air fryer immediately after buying a first one, and stop advertising them to me. What hope do we have for this new algorithm?
It just shows that the ones making the decision are either naïve or have an ulterior motive. They tried to justify it with the following footnote:
>For example, Microsoft reports that the accuracy of its grooming detection tool is 88%, meaning that out of 100 conversations flagged as possible criminal solicitation of children, 12 can be excluded upon review and will not be reported to law enforcement; see annex 8 of the Impact Assessment.
Of course, anyone could poke a dozen holes in this statement. "Possible", so there still needs to be a ton of review. Microsoft reporting on its own detection tool. Etc.
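To make the "ton of review" concrete, some back-of-envelope arithmetic helps. The 88% figure is the precision of flagged conversations, which says nothing about absolute volume; every number below except the 88% is an assumption invented to show the scale.

```python
# Back-of-envelope: even a high-precision filter produces large absolute
# numbers of false flags at EU scale. Only the 88% comes from the footnote;
# the other inputs are assumptions for illustration.

messages_per_day = 10_000_000_000  # assumed EU-wide private messages per day
flag_rate = 1 / 100_000            # assumed fraction of messages flagged
precision = 0.88                   # footnote's claim: 88 of 100 flags survive review

flagged = messages_per_day * flag_rate
false_flags = flagged * (1 - precision)

print(f"conversations flagged per day: {flagged:,.0f}")      # 100,000
print(f"wrongly flagged per day:       {false_flags:,.0f}")  # 12,000
```

Under these assumptions, twelve thousand innocent conversations would land in front of human reviewers every single day.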
The ulterior motive is gaining total control over citizens in EU member states. Which is also why the EU digital ID [0] will soon be introduced, and a Euro CBDC will follow in the coming years. It's quite likely the Euro CBDC wallet will be linked to (or be part of) the EU digital ID.
I expect at some point the EU digital ID will be required when accessing the internet, or perhaps when signing up for email, creating a Twitter account, and such.
EU member states will become a carbon copy of China. People will worry about what they can say, out of fear that they will not be able to spend their hard-earned Euro CBDC.
> I expect at some point the EU digital ID will be required when accessing the internet, or perhaps when signing up for email, creating a Twitter account, and such.
I'm not going into the debate about whether there's an ulterior motive, but current eID solutions in various EU countries are considerably less draconian than what you're describing. The current solutions are going to tie into the new one, after all.
In Finland, as an example, we simply have a chip with a cert/key on ID cards. People without card readers, or who don't want to use them, are free to use their netbanking logins instead. Being able to use the same solution in other EU countries is of rather limited value to most, but it will ease paperwork submission when travelling, moving to another member state, and so on.
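For the curious, the chip-and-cert scheme boils down to an ordinary challenge-response signature. Below is a minimal sketch using Python's `cryptography` package; generating the key in software stands in for the key that in reality never leaves the card, and none of this reflects the actual Finnish protocol details.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for the keypair on the chip; on a real card the private key
# is generated on, and never leaves, the secure element.
card_private_key = ec.generate_private_key(ec.SECP256R1())
card_public_key = card_private_key.public_key()  # taken from the card's cert

challenge = os.urandom(32)  # random nonce sent by the relying service
signature = card_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The service verifies against the certified public key; this raises
# InvalidSignature if the response was not produced by the card's key.
card_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge-response OK: the holder controls the card's private key")
```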
The problem is that if an ID system is ever established and becomes ubiquitous, a quick legislative change can force online services to require the ID. Protection of intellectual property, CSAM: a pretext is easily found. It happens faster than you can blink, and then the free internet in Europe would quickly die. The best protection against that is to have alternative ID systems, or simply anonymous usage when no transactions are involved.
And a lot of services would be happy to use a state ID, because they could tie their advertising to a real and unique person. To guard against that, the smart move is to reject the little convenience now for the long-term benefit.
Pretty much the whole of the EU already has some form of government ID (not sure if there are any exceptions, but if there are, they are not many).
People are used to them, so it's really just a matter of time before it goes digital.
I don't disagree with your observations, I just think it's a foregone conclusion at this point.
That said, I think the value lies in creating new, non-centralized (and less popular) solutions. They will probably never have mass appeal, but at least they will be there for the people who want or need them.
Yes, and sadly even with fingerprints in the newest version, which is completely ludicrous; we have it because some countries needed to implement it for domestic self-gratification. There are regional differences, but here nobody uses their government ID for online services.
Democracy didn't really protect us from intelligence agencies sniffing communication data, so I don't see how it would protect anything here. That's not an intrinsic fault of democracy, since the EU still has deficits in this area.
The simple fact that Microsoft then has the ability to read random conversations flagged by its own black box should really induce doubt here.
Maybe they aren't optimizing for conversions but instead for running down ad budgets? Amazon has many anti-consumer practices that likely evolved that way because they improve the bottom line. You'd think that how hard they make it to search reviews, or to filter reviews for a specific version of a product, would be simple fixes, but testing has likely shown those things decrease sales.
>like that I’m probably not gonna buy a second air fryer immediately after buying a first one,
this comes up all the time, but the probability that {person} will buy a second {item} of type {X} increases once they have already bought one.
However, it does seem to me that I have never done this. Even when I am unsatisfied with an item, I soldier on for at least a year or two before saying, aw, screw it. Perhaps the machines would be more impressive if they could recognize which people will and will not immediately buy another expensive item right after buying one.
Yeah, that is a pretty naive algorithm though, especially if you can explain it in one sentence. After all that, that's the best algorithm they've got? If you buy one thing, you might buy two. Pretty sure most of us could implement that in a line or two of code (see the sketch below) and save billions in research. And even if this whole research exercise was necessary to discover this fact, it's still not a "smart" algorithm if there are a ton of false positives.
And it's still a relatively trivial use case; if it comes to potentially miring someone in legal issues, I hope the "algorithm" can do better than that.
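For what it's worth, here is roughly that "line or two of code", sketched under made-up assumptions: `purchases` is a hypothetical user-to-category history, and the heuristic is simply "bought one of X, advertise more of X".

```python
# The naive heuristic from the comment above, in a couple of lines.
# `purchases` maps a user to the product categories they already bought.

def recommend_ads(user: str, purchases: dict[str, list[str]]) -> list[str]:
    # Bought one item of category X? Advertise more of category X.
    return purchases.get(user, [])

purchases = {"alice": ["air fryer"]}
print(recommend_ads("alice", purchases))  # -> ['air fryer'], i.e. a second one
```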
I've been told (I don't know if it's true) that showing you an ad for something you just bought reinforces your satisfaction, and that you are thus significantly less likely to return the product.
It's about the weakness of men who would gladly present a reduction of the chocolate ration as an increase, because it's their job to lie and alter the past, believing they are doing God's work.
Also, in 1984 the surveillance is secret, not publicly stated in a formal document visible to everybody.
Better examples of 1984 in action: the DDR, the NSA.
"Big Brother" was a pretty big part of 1984, and I definitely have gotten "Big Brother" vibes from the EU these last few years, under the pretense of "we know better than you and we want to protect you".
Having good intentions doesn't make it okay for the EU to become a helicopter parent, nor do I like the prospect of that good will evaporating while they retain access to what is effectively ever more surveillance.
yeah, and what Big Brother does to you if you don't follow the rules is kept secret, by secret police agents, because
1 - Big Brother doesn't physically exist
2 - people would revolt if they knew what really happens to deviants, so it's imperative they are kept in the dark (“Until they become conscious they will never rebel, and until after they have rebelled they cannot become conscious.” )
“If you want to keep a secret, you must also hide it from yourself.”
So 1984 does not apply at all in this case, because the document is public and everyone can read it and eventually revolt against it.
Smash down doors, get embarrassed a few times, pass legislation to ban "roleplaying as a minor in text conversations." Then there's no reason to be embarrassed again!
More practically, I would assume "age of the users of a device" is a well-derived bit of information available to anyone who asks for it with the proper letterhead. Or who just helps themselves to it.
My initial thought was to give adults an adult service, but that also means adults need to provide information (which a lot of adults wouldn't be okay with) so services can filter the kids out. Then you'd still have to run the proposal on every service that both kids and adults have access to. And on top of that, you still need to filter out cases where it's just minors, but at the same time you can't filter all of them, because it might be a case of sexual violence between minors.
Legislation of a similar tone is already the law in many places. Some places ban pornography of adults with flat chests. Some places ban lewd drawings of characters who aren't sufficiently curvaceous. Many platforms hosting content such as literotica and erotic audio recordings ban content that describes or roleplays as minors.
I thought most of these proposals were around doing device-local analysis and reporting. (I suppose specific clients might not, but if they can mandate this at the device level and make it the default on iOS and Android, they're going to get almost all users.)
No reason an XMPP client wouldn't be forced to include this as well. Reminder that iMessage and Signal are encrypted communications but are surely a target of this sort of lawmaking.
That's one of them, yes. I've not had the time to read the whole thing today, and probably won't have time, so I'm trying to encourage people to dig through it a bit themselves and get a feel for just how "But the Children!" it is, as a reason to violate every bit of privacy they think they can get away with.