
UK citizen and 23andMe customer here. How likely is the sale of UK/EU customer data? Or is it worth submitting a GDPR deletion request anyway, to get my data deleted before it's sold?


Whatever you do, do it soon, because it doesn't sound like they're long for this (corporate) world before they sell all that data to the (probably much more nefarious) vultures that are circling.


GDPR might help you with data in a web database or data warehouse, but if they have anything outside of that, you're still screwed. I doubt a failing company has the time, energy, or resources to comprehensively clean up your data everywhere. Definitely submit the request, but don't expect it to prevent your info from being resold.


GDPR covers all personal data, which would include DNA. It also prohibits creating profiles without your consent.

But as 23andMe is a US company, it is not clearly under the jurisdiction of the GDPR. The legal situation isn't settled; the EU would claim some jurisdiction, but I (IANAL) think it's more like you going to the US, walking into a Walgreens, and giving up your data.


According to the GDPR, its jurisdiction is global via “public international law” and mutual government agreements, but you’re right that this is not entirely clear and they are claiming untested jurisdiction. The law says it applies to a non-EU company if the company establishes any marketing or sales presence in the EU, markets or sells to EU residents (which might apply if the company so much as analyzes sales data by country), or is “monitoring” the behavior of EU residents in any way. Monitoring does not seem to be defined in Article 4, so it could mean a lot of things, including doing anything with collected data or corresponding with customers.

https://gdpr.eu/article-3-requirements-of-handling-personal-...

I’m sure there are US companies that happen to sell to EU residents and happen to acquire some PII, but don’t know it and can’t correlate it with the EU, and so aren’t subject to the GDPR. But according to the law’s language, it seems as though something simple on a company’s website, like using Google Analytics, which does identify and “monitor” the behavior of people by ___location, might trigger GDPR. I might expect 23andMe to trigger applicability for multiple reasons, including that they are using DNA to identify regional heritage and relatives, the samples may be delivered with EU addresses on them, and the samples are as personally identifying as it gets. That’s on top of whatever the website, account registration, and sale process collect.


The problem with something like Google Analytics is that a company in the EU (an EU company, a US subsidiary, ...) exports PII to the US, which it can't legally do (interpretation of the law is not settled inside the EU, e.g. whether it is legal if GA doesn't store IPs, or whether using GA without consent is generally illegal).

And exporting data to the US is illegal because US companies can't guarantee that EU citizens' data is protected (which is the goal of the GDPR).

But then again, it is not clear if this applies when an EU citizen goes to a company in the US (a physical one, or a website in a US datacenter) and leaves their data there.


Notably, the GDPR applies depending on customer jurisdiction rather than company jurisdiction. If they’re serving EU (or UK) customers, the GDPR definitely applies.


Happy to be told the UK falls under the actual GDPR... does it? (I thought after Brexit the UK wasn't covered, and they have their own version.)


From the ICO website:

> The GDPR is retained in domestic law as the UK GDPR, but the UK has the independence to keep the framework under review.

The UK GDPR. It’s like the GDPR, only with a Union Jack and a bulldog slapped on the side.

Now, in practice, companies seem significantly less scared of the ‘UK GDPR’ than its full-fat European progenitor (probably for good reason; even before Brexit, the ICO was one of the less aggressive regulators, with its largest GDPR fine ever being only £20m), and of course the EU has a number of _newer_ consumer protections in this general area (DMA, DSA, AI Act, etc.) which the UK has _not_ implemented. But, for the moment at least, the UK still has some degree of data protection.


23andme markets and sells services in the EU and is therefore subject to the GDPR. And they know this very well: https://www.23andme.com/en-eu/gdpr/


Yes, because of this: "The GDPR applies to 23andMe because we market and provide the Personal Genetic Service in EU Member States through our UK, EU and International sites."

The problem is that the EU Parliament thinks this does not work, because US companies can be (secretly) coerced into giving data to the US government without even telling the affected EU citizens (the EU Commission has a different view). And EU citizens have no way of going to court over this. And a US company can't guarantee in any way to protect EU citizens' data.

This is also the reason that all the *Shields failed and were killed by EU courts. [0]

The view of the Parliament is that, as a company, you can't export personal data to the US at all. So 23andMe can put whatever they want on the website: either they don't export data to the US (my Walgreens example), or they do, in which case they do it illegally.

So I (again, IANAL) would say this is marketing speak aimed at users and has no relevance.

[0] https://en.wikipedia.org/wiki/EU%E2%80%93US_Privacy_Shield


I agree that the EU–US data transfer frameworks are unlikely to provide complete privacy safety, and this is an open problem. However, I was addressing whether 23andme is subject to the GDPR or not, and it clearly is. The data transfer frameworks are what supposedly allows them to transfer data to the US and still be GDPR-compliant. But regardless of whether they are actually compliant or not, they are indisputably subject to the GDPR.


Yes, and my point was that, to me, it's open to discussion whether they transfer data to the US at all.


That's not how GDPR works. GDPR doesn't care where your company is registered or does business; if they process the personal data of EU citizens then GDPR applies.


Supposedly.

I was an Estonian resident a while ago, and I wanted to delete the data in my old VK.com account (a Russian company). They didn’t do anything, naturally, so I wrote to the Estonian data protection inspectorate or something like it. They said that (surprise!) they can’t do anything either.

Things might be better now, but my bet is if you register a company in, say, Seychelles, and your business is purely digital, you can ignore GDPR all you want.

The EU can, in theory, tell payment processors to stop working with you, but I haven’t heard of such cases. Even then, it won’t help if you don’t sell anything (apart from user data).

Some EU countries have started blocking websites (by spoofing DNS). This could actually work to put some real pressure on non-compliant companies, but it's also kinda too authoritarian for the EU?

Tl;dr: GDPR has good intentions, it just doesn’t work right now if the data processor is not in the EU.


Correction: replace "EU citizens" with "people in the Union". That's how GDPR describes the people it covers. It's where you are that matters for GDPR rather than citizenship.


Mostly. However, if I am in New York and walk into Sam’s deli, GDPR doesn’t apply.

If Sam were to target an EU citizen then it would.


Correct. If 23andMe sells their services in the EU (and you bought the service while in the EU), then GDPR would apply.

But if you just walk into a pharmacy in the US and send your sample from there, GDPR has nothing to do with it.


No, if this were the case, they couldn't serve EU citizens at all, because US companies can't hold any EU data, since they can't protect EU citizens' data.

The only way to serve EU customers is if we assume that entering data on a US website is not the US company exporting data from the EU to the US, just like when I walk into a Walgreens in NYC as an EU citizen.

For the last decade, US and EU companies have ignored the fact that it is/was mostly illegal to transfer EU citizens' data to the US (it is currently legal but will be illegal again). Also, every EU company that exports data to the US (e.g. by using Mailchimp) needs to guarantee the safety of the data by auditing Mailchimp; no one does, and there have been no fines for now, but I assume there will be in the future.

See the discussions around

https://en.wikipedia.org/wiki/EU%E2%80%93US_Data_Privacy_Fra...

"The EU parliament raised substantial doubts that the new agreement reached by Ursula von der Leyen is actually conform with EU laws, as it still does not sufficiently protect EU citizens from US mass surveillance and severely fails to enforce basic human digital rights in the EU. In May 2023 a resolution on this matter passed the EU parliament with 306 votes in favor and only 27 against, but so far has stayed without consequences."


Someone randomly walking into a Duane Reade in Seattle and purchasing a device would not reasonably be covered under the GDPR.

However, if 23andMe were targeting European citizens, that would be different.

Despite what the adtech industry likes to claim online, Bob's Burger Joint in Baltimore does not have to be specifically concerned about abusing their customers' data, even if a customer happens to be an EU citizen.

Now, if they shipped frozen burgers to France online, then sure, they would. If they sold “merch” in euros, they would. But a local store with a physical premises trading in person? Not covered.

A European citizen living in Austin buying from Amazon, though, could well be covered. Amazon do target EU citizens.


Pretty much. If EU citizens are targeted, then it applies.

“Provided your company doesn't specifically target its services at individuals in the EU, it is not subject to the rules of the GDPR. ”

https://commission.europa.eu/law/law-topic/data-protection/r...


Easy way to submit a GDPR/CCPA/etc. request: https://yourdigitalrights.org/d/23andme.com


It depends on the ToS they had at the time. When they started, they explicitly had protections (privacy, data handling) only for US customers, pointing to some local law, with no details on how the data and samples from outside the US would be handled. That's why I never used their service. I think the GDPR route is well worth a try, good luck.


They had a massive data breach that hit about 50% of their customers last year. There’s a good chance the data’s already being resold by brokers:

https://techcrunch.com/2023/12/04/23andme-confirms-hackers-s...


Not sure what questions Microsoft have to answer. A third-party vendor shipped defective software.

I guess the only question they could answer is why they don't provide a framework like Apple do with Endpoint Security for third-party vendors to use.


Because an essential enterprise security application was /able/ to bring down an entire OS like this. The issue is that Microsoft doesn't provide an interface that would let such an application operate in user space with the functionality it requires.

Linux has eBPF which can provide most of the capability that Crowdstrike needs, by using an "in-kernel verifier which performs static code analysis and rejects programs which crash, hang or otherwise interfere with the kernel negatively". If MS had this functionality, it is likely this incident would not have happened.
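
A minimal sketch of what that looks like from user space, using the third-party github.com/cilium/ebpf library ("probe.o" and the handling here are illustrative, not CrowdStrike's actual code): a program the verifier doesn't like fails at load time with an ordinary error, instead of ever running in the kernel.

    package main

    import (
        "errors"
        "log"

        "github.com/cilium/ebpf"
    )

    func main() {
        // Load a compiled eBPF object file (hypothetical "probe.o").
        spec, err := ebpf.LoadCollectionSpec("probe.o")
        if err != nil {
            log.Fatal(err)
        }

        // NewCollection hands the programs to the kernel, which runs the verifier.
        coll, err := ebpf.NewCollection(spec)
        var verr *ebpf.VerifierError
        if errors.As(err, &verr) {
            // Rejected before it ever ran: the failure mode is a user-space
            // error, not a kernel crash.
            log.Fatalf("verifier rejected program: %v", verr)
        } else if err != nil {
            log.Fatal(err)
        }
        defer coll.Close()
    }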

That said, from personal experience on Linux, it's been an extremely long time since a bad kernel module has left a system entirely FUBAR'd.

(To Microsoft's credit, they have begun bringing the eBPF methodology to Windows, but it is still in its infancy: https://github.com/Microsoft/ebpf-for-windows/ ).


It's possible for a badly-written eBPF policy to prevent any application from starting up, AIUI, so that's more or less the same situation, isn't it?


CrowdStrike brought Linux machines down earlier this year, in April. There are several posts in this thread about it.


> Linux has eBPF which can provide most of the capability that Crowdstrike needs, by using an "in-kernel verifier which performs static code analysis and rejects programs which crash, hang or otherwise interfere with the kernel negatively". If MS had this functionality, it is likely this incident would not have happened.

It didn't stop Linux machines from going down, so it is clearly not as easy as you put it. The reality is that writing software is hard, yet devs often trivialise it to their own detriment.


The issue I am raising is /design/, not /development/. The current model of an unconstrained, unforgiving, highly privileged execution space is a bad design; that is what eBPF tries to address.


It didn't make a difference though. Linux still went down, so clearly the design is not enough.


It is a different issue[0]. The Linux issue from April was a Linux kernel bug[1] that CS Falcon happened to trigger. The design of using eBPF is sound, but the implementation on the kernel side had a bug.

Also, CS Falcon didn't support RHEL 9.4 (only up to 9.3), so for this specific bug you highlighted, CS should not be held accountable for regression testing, because it was a platform they did not support.

With Windows, the design is currently too poor to be able to run this code in a safe manner. Most recently, it appears MS is blaming the EU for forcing them to create an interface for services such as CS to run[2]. Rather than lean into the problem and create a good design, they didn't create security boundaries, risking the entire system.

Bugs happen, and Linux will continue to harden and become more resilient. But unless MS focuses on secure design in this area, things like this will continue to happen (same as they have with AV before).

  [0] https://access.redhat.com/solutions/7068083
  [1] https://access.redhat.com/errata/RHSA-2024:3306
  [2] https://www.forbes.com/sites/davidphelan/2024/07/22/crowdstrike-outage-microsoft-blames-eu-while-macs-remain-immune/


Might be editorialised by OP, or Sky changed the title; it is currently:

"Serious questions to answer after what could be the biggest IT outage in history"


Assuming Sky, since the URL slug shows "Microsoft".


>Not sure what questions Microsoft have to answer.

The only thing I could think of is that if it was a driver update, the driver has to be "WHQL" signed. WHQL stands for "Windows Hardware Quality Labs" -- what quality are they ensuring? (spoiler alert from my time at Microsoft: it's not terribly robust :p )

It's not realistic for Microsoft to test drivers in a manner that represents real-world usage, but perhaps they need to start doing some basic "it works with whatever integrated agent/etc is required" testing as a requirement for signing a driver.

If it was a user-mode update? Yeah no real fault on Microsoft here.


From what I heard, CrowdStrike just updated their DB file, which means the bug was already there, waiting for someone to trigger it with a "low-risk" quick rollout.


So kind of like the xz exploit: carefully placed and lying in wait.

I only hope this was a good guy move by someone to knock a placed chess piece off the board.


You're confusing the CrowdStrike issue with Azure being down. Microsoft is ultimately responsible for anything regarding Azure, even if it was a vendor that did something wrong, because they choose their vendors.


The article is about the CrowdStrike incident, not the Azure configuration issue.


That channel is a gem!


I was recently in hospital with not-so-great WiFi.

With the mobile app, I would often notice Spotify loading album artwork, lyrics, artist information and even video before playing the music. Its network prioritisation is deeply disconnected from what the user wants.

I remember when Spotify heavily optimised to play music in the quickest possible time. Enshittification indeed.


Why is silently patching considered a "no-no" by the infosec community?

If your product's automatic update functionality can reach most users within the responsible disclosure window, that sounds like a net positive? We still learn about the vulnerability, but it limits the potential fallout of the disclosure.

I'm very much in-favour of the private vulnerability research and responsible disclosure, but the "no silently patching vulnerabilities" sounds more like wanting to own the press to me than actually wanting to improve people's security.


The general theory is that there's really no such thing as silently patching. Consider:

- Company silently patches issue. Patches have to be applied, which can take some time if people don't know they need to apply them. Even in the case of automatic updates, patching can be delayed if it requires an app restart, for example.

- Malicious actors examine patches, work out exploit, begin exploiting in the wild.

- Customers left in the dark.

- Company assumes that having issued patches is good enough, substantially delays disclosure.

Co-ordinated disclosure aims to prevent all of that by ensuring everyone knows about it at the same time. That removes some of the ability of threat actors to exploit and allows SOCs, EDRs, etc., to update as well, so anything unpatched gets caught. If there are workarounds or other defenses that can be implemented until patching is possible, those can be employed as well.


For anyone trying to understand the issue at hand, this is an excellent summary. Thanks for your comment!


Not sure how this,

> Co-ordinated disclosure aims to prevent all of that by ensuring everyone knows about it at the same time

helps with this,

> Even in the case of automatic updates, patching can be delayed if it requires an app restart, for example.

Not saying they don't have a use-case. Just not sure how it's fixed.


For starters, it (hopefully) delays when a bad actor knows about it, meaning they start their process of reverse engineering the vulnerability after customers have been notified.

There are always going to be situations where out of date software hangs around. This at least levels the playing field when compared to the idea of trying to silently patch something.


It is common for organizations to delay routine patching for some fixed period of time (allow others to be early adopters and assume the risks of negative impacts from the patch), and/or schedule patches to occur during a predetermined maintenance window.

When you make a public security disclosure coordinated with the release of a patch to fix the issue disclosed, you alert the aforementioned organizations that there is an exploitable security vulnerability present and allow them to make an educated assessment of the comparative risks of patching immediately versus waiting and potentially being exploited.

It's not a perfect system, but transparency is the best compromise possible and allows everyone to make an educated choice. All other options have greater downsides.


>I'm very much in-favour of the private vulnerability research and responsible disclosure, but the "no silently patching vulnerabilities" sounds more like dick swinging to me than actually wanting to improve people's security.

Rather than dick-swinging, it's generally an acknowledgement that large organizations move slowly and need a bit of prodding to apply patches in a timely manner.

If you're a big-and-slow company, there is a significant difference in how quickly you'll worry about applying the patch that says "here's a minor patch" and the patch that says "here's a patch for a severe vulnerability".

I have worked with several companies which simply will not update something unless they are either mandated to or it is a large enough security risk.

Read more about Rapid7's opinion on silent patching at https://www.rapid7.com/blog/post/2022/06/06/the-hidden-harm-...


The ICO has always been a bit of a wet blanket. Never really uses its powers effectively. From pathetic fines to putting out press releases about raids before they happen. £140k is just the cost of doing business with 79 million spam emails and 1 million spam texts.


> putting out press releases about raids before they happen

This sounds like a Chris Morris sketch, got a link to share?

edit: I think it's referring to the Cambridge Analytica scandal


Many companies just factor fines and legal costs into the bottom line.

HelloFresh might well have known they were overdoing it, but just put some money aside to pay the fine for when it comes.


I don't like this. Is there a reason for using a stringified method prefix?

I'd prefer the type safety of verb-specific methods (e.g. mux.Get, mux.Post, etc.) over magic strings validated at run time. Additionally, editors can autocomplete/IntelliSense methods.


Adding this is so trivial. If it really, really bothers you that much:

    func Get(mux *http.ServeMux, uri string, handler http.HandlerFunc) { mux.HandleFunc("GET "+uri, handler) }
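Usage would then look like this (a sketch; getItem is a made-up handler):

    mux := http.NewServeMux()
    Get(mux, "/items/{id}", getItem)           // via the helper
    mux.HandleFunc("GET /items/{id}", getItem) // equivalent, directly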


Not a fan either. I want to be sure that routing is going to work at compile time, not at runtime.


If this turns out to be a real issue (which I seriously doubt), then they'll add a vet check.


How about build time? Just add a test and the "runtime" becomes build time.


If you want type safety, pick a type-safe language.


Go is type safe.


The zero-values idea, nil, reflection, and stringly-typing all over the stdlib and the most popular libs make Go not type safe. Were you thinking of statically typed?


Type safety is a spectrum, not a binary choice. Having used Go for 10 years now, next to other popular languages like JS and Python, I think it squarely falls into the "more type-safe than not" half of the spectrum. But it's definitely a positive development that, as this discussion shows, the Overton window of type safety is shifting towards the safer part of the spectrum.


That's true. From that perspective it is safe. But from the perspective of, for example, Elm and Rust, I would say Go ends up in the other half of the spectrum, but still close to the middle.


I genuinely have no idea what you're referring to by "stringly-typing all over stdlib". I've written Go every day for the better part of a decade and used the standard library the whole time. What standard library functions require passing in the string of a type?


Struct tags are the most notorious example.

They’re convenient but error-prone. I think everyone who has written a decent amount of Go has had a malformed, misspelled, or misnamed (“db” vs “sql”) tag at some point.
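
For example (field and tag invented), a misspelled tag key compiles without complaint and is silently ignored:

    type User struct {
        Name string `josn:"name"` // typo for "json": encoding/json never sees
                                  // the tag and falls back to the field name
    }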


You are right, the stdlib doesn't have much stringly-typing.

However, the core language's way of dealing with enums, for example, is extremely weak. It's common to have a typed enum on a struct. When parsing the struct, random string values (or whatever the alias is) sneak in, and the only thing you can do is validate.
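
A sketch of what I mean (type and values invented):

    type Status string

    const (
        Active   Status = "active"
        Disabled Status = "disabled"
    )

    type User struct {
        Status Status `json:"status"`
    }

    // json.Unmarshal([]byte(`{"status":"bananas"}`), &u) succeeds without error;
    // u.Status is now "bananas", and the only defence is post-hoc validation.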


We're literally in the comment thread of a new stringly typed thing being introduced to the stdlib!


I also don't prefer using strings, but to be fair, HTTP methods are just strings when the request is received. There is some beauty in that it matches the prefix of the first line of an HTTP request.
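
For example, the first line of a request on the wire is:

    GET /users/42 HTTP/1.1

and the new pattern "GET /users/{id}" is, wildcard aside, a literal prefix of it.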


On the wire, everything is a string. Doesn't make it a good type to use!


Strings with a well-defined meaning that is not being taken into consideration here beyond routing.


This is discussed broadly in the proposal.


> I don't like this. Is there a reason for using a stringified method prefix?

Backward compatibility: they didn't want to change the function signature, nor add a different method for it.
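
i.e. the same HandleFunc signature accepts both styles (handler names here are made up):

    mux := http.NewServeMux()
    mux.HandleFunc("/items/", listItems)        // pre-1.22 pattern, still valid
    mux.HandleFunc("GET /items/{id}", getItem)  // method and wildcard parsed from the same string argument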


I'm based in the UK and have a passive interest in amateur radio. If I had to guess, it isn't outright forbidden, just a licensed activity. Maybe the license isn't easy to get or widely granted?


Obtaining an amateur radio licence in the UK is fairly trivial. The courses and exams are administered by volunteers so the hardest part is finding availability to align with your schedule. COVID really helped because it became possible to take the exams online.

There are 3 levels, Foundation through to Full which come with different privileges. You can achieve a lot with the basic Foundation.

In terms of airborne transmitting: yes, the UK is an outlier. It is forbidden to make amateur radio transmissions in the air over the UK, or to use a UK licence to do so anywhere, I think. The key word here is amateur, so specifically on those bands and with that licence. I think the ISM bands would be fine, and there are balloon projects and clubs in the UK.

I have a Finnish amateur licence in parallel, and that doesn't have this restriction, but naturally it would still not be allowed to use it to transmit from an aircraft over the UK. And even if it were elsewhere, there are still some rules surrounding that, and it's hopefully obvious that you need the permission of whoever's in charge of the aircraft.


Transmission outside of unlicensed bands requires a license; amateur radio's license requirement exists to teach responsibility and proper clue, which isn't a bad thing per se.


You have to have a license to watch television, for Chris' sake, so it absolutely falls in line that you'd need licensing to transmit. You probably have to have licenses in your kid's walkie-talkies. This is the same country that has the national power grid introduce "hum" (or whatever it is technically called) in the signal so that a time reference can be decoded from it.


> this is the same country that has the national power grid introduce "hum" (or whatever it is technically called) in the signal so that a time reference can be decoded from it

I don't think that's intentionally introduced. My understanding was that mains hum in any grid is an artifact of a noisy signal that just happens to be useful as a forensics fingerprint.

https://en.wikipedia.org/wiki/Mains_hum


Yes, it seems I crossed a few streams in my head. Here's an article [0] talking about how the forensics is done: someone is creating a database of all of the fluctuations that give the hum its forensic fingerprint, rather than it being deliberately injected.

[0] https://www.bbc.com/news/science-environment-20629671


I believe that the frequency actually is deliberately managed... to keep old clocks synced.

https://www.kccscientific.com/the-dirty-little-secret-about-...
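
Rough numbers, assuming a synchronous clock that simply counts mains cycles:

    50 Hz    x 86,400 s = 4,320,000 cycles/day expected
    49.95 Hz x 86,400 s = 4,315,680 cycles/day actual
    deficit: 4,320 cycles, i.e. 86.4 seconds lost in a day

So even a 0.1% sag, left uncorrected, wrecks clocks within days, which is presumably why operators nudge the frequency so the long-run cycle count averages out.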


Fascinating


It's not really a license so much as it is a fee, even if it is called that. It'd be like calling taxes a life-license or something. Also worth noting it's not the only country that has one; just off the top of my head, I know that Ireland, Switzerland and Japan have them as well.


Risk Ledger | Remote/Hybrid London, UK | Full-Time | Backend, Frontend and Product Design | https://riskledger.com

We're a London-based startup, with a mission to solve the problem of security risk in the supply chain, globally. The world runs on data, with every business relationship involving a great degree of trust. We facilitate that trust. Risk Ledger offers a secure social network model for organisations to connect and share their security and risk data.

Risk Ledger is backed by multiple high-profile VCs, including Lifeline Ventures, firstminute capital, Seedcamp, Village Global and Episode 1. We're already working with a number of great companies across multiple verticals to achieve our vision, including the likes of ASOS, Snyk, BAE Systems and the NHS.

Senior Backend Engineer https://riskledger.com/careers/senior-backend-engineer

Senior Frontend Engineer https://riskledger.com/careers/senior-frontend-engineer

Product Designer https://riskledger.com/careers/product-designer


I bought a Dyson V7, and the battery life, even from new, is atrocious. On a full charge it is unable to cover my very modest London flat, and of course it cannot be used while charging.

I even fitted a larger aftermarket battery. Still awkward to use. Eventually I gave up, and bought a corded Henry from Numatic.

I cannot recommend a cordless Dyson for anything except light dust busting.


A corded vacuum for residential use is such a thing of the past; I can't believe people genuinely fall back to cords over Dysons... I use a V8 and a V11, wonderful machines. I like the V8 more because it's light and nimble.


They're not. I have a V11 and it's atrocious. It constantly needs maintenance for really light loads, plus James Dyson is a shark.

