This is the kind of thing that makes me not want to register anywhere except on sites that use known, actively maintained open source software (so mostly forums). But I guess it's too late. The best I can do is generate a bunch of unlinked email addresses and randomize passwords everywhere with my password manager. I used to use LastPass a lot, but since I lost my vault I haven't gone back to fill a new one. I really do want to switch to Bitwarden as a LastPass alternative, but I've been putting it off for a while. Maybe it's time to finally take the plunge.
It sounds like you are taking the word "breach" in the title as if it's a security thing.
They are referring to breach as in breach of rules. These are not security vulnerabilities; these are GDPR complaints that may or may not be real, but have been reported by people.
> Austria has issued its first fine for GDPR violation, sanctioning the owner of a retail establishment with EUR 4,800, Digital Freshfields report. The reason is that the entrepreneur has placed a surveillance camera which not only captures too much of the sidewalk in front of the establishment, but it was not properly marked as conducting video surveillance.
If you own a ___domain you can use a catch-all email address and then just invent the part left of the @ as you go. Then store that address with a random password generated by KeePass.
As a nice side effect, you can see who lost your data to the spammers based on the email address, and you can block that whole address from getting anything.
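A minimal sketch of how little this takes, assuming a hypothetical ___domain example.com with a catch-all configured at the mail provider (the password part just mimics what KeePass already does for you):

```python
import secrets
import string

# Hypothetical ___domain with a catch-all mailbox configured at the mail provider.
DOMAIN = "example.com"

def signup_credentials(site_name: str, length: int = 24) -> tuple[str, str]:
    """Return a per-site alias and a random password for a new registration."""
    # Anything left of the @ lands in the same mailbox thanks to the catch-all;
    # using the site name makes any future leak easy to attribute.
    alias = f"{site_name.lower()}@{DOMAIN}"
    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(length))
    return alias, password

email, password = signup_credentials("somewebshop")
print(email)     # somewebshop@example.com
print(password)  # store it in KeePass (or any password manager)
```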
Exactly! I've done that for years (and the email alias part for at least the last 15 years), and it's a relief.
Back in the early 2000s, I used to get hundreds of spam messages. So I decided to use an alias of a dedicated mailbox on my own ___domain name for each new site I registered with.
I used aliases so I wouldn't have to configure a new account in my mail client.
And then it's super easy to shame whoever sold your email.
Best anecdote I have? An antivirus/antispam brand. I registered as a reseller with an alias.
Less than a week later, I started receiving unsolicited mail on the alias.
I asked them why it happened, but never received anything more convincing than "Mmmm, don't know, let me check".
Of course, I explained to them that I wouldn't be buying their product for my clients anymore, and destroyed the alias.
A nice variation of this is plus addressing. The advantage it has over a catchall/wildcard is not attracting or being susceptible to random spam (i.e. when a spammer just sends emails to a ___domain using common names like david@ or mike@ etc.).
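The mechanics are trivial, assuming your provider supports subaddressing (Gmail, Fastmail and most modern providers deliver user+anything@___domain to user@___domain); this little helper is just an illustration:

```python
def plus_address(base: str, tag: str) -> str:
    """Build a plus-addressed variant: user+tag@___domain still lands in user@___domain."""
    local, ___domain = base.split("@", 1)
    return f"{local}+{tag}@{___domain}"

print(plus_address("david@example.com", "somewebshop"))
# -> david+somewebshop@example.com
```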
True, I do wonder which phone vendor (or forum) lost "nexus@mydomain" from when I was doing android dev and had to register to get to some of the binary blobs.
I had the bitcoin porn spammers use that address (with what was probably a valid password).
After the linkedin breach I moved to a password manager so I'm mostly immune these days.
Yeah! That's what I'm working on next, a companion iOS app with offline sync. It will be the killer feature!
I don't currently include browser extensions, you still have to open the app and search, then CMD+C for the password, and CMD+B for the email or username.
I may do browser extensions, but I'm always wary of browser extensions these days.
Note that what constitutes a reportable breach under Article 33 GDPR [1] is not entirely clear yet, so companies tend to over-report rather than risk the consequences of under-reporting.
For example, I know of a financial institution that reported a breach because they sent an account statement to the wrong address.
(Edit: well, certain breaches are entirely clear, of course. It's the breaches at the lower boundary that are in question.)
When the definition is unclear like this I would say companies tend to over-report what they're reasonably certain probably isn't a breach and under-report things in the grey area that probably are a breach.
We can be reasonably certain the regulators won't want to be bothered with every account statement misdelivery but they haven't specifically told us. Self-reporting this is low risk and demonstrates an effort at complying with the regulation. It also has the added "benefit" (from the corporation's perspective) of demonstrating to the regulator how onerous and unreasonable the compliance obligations will be for their office if they don't make an effort to set "reasonable" compliance and reporting standards (again, from the corporation's perspective).
But the contractor who "misplaced" the flashdrive of customer data? Well we haven't been told that's a breach because we don't know that it's "lost." It certainly feels like something a regulator would care about more than a misdelivered account statement but they haven't specifically told us they care about this scenario yet. That's a risky thing to self-report because there will probably be consequences and we have no idea what the consequences are because it hasn't come up for anyone else yet either. In that case it's low(er) risk not to report it and hang our hat on the ambiguity of the new regulation in the very unlikely event the regulator even gets wind of the breach. The strategy is to ask for forgiveness for our ignorance of the scope rather than clarification.
It's not right, but that's how it works.
Source: Corporate attorney/Former Chief Info Security Officer at an investment bank. I quit over their handling of a particularly egregious PII breach.
Surprised to find the Dutch in first place. I believe it’s their obsession with accountability that encouraged more people to report breaches.
Did we expect 60K? Does that mean we should expect a reduction in the future because companies are taking the right actions? Should we expect to see a reduction in identity thefts? How does the EU know the law works?
Not on the government side, but on the implementation side it has at a minimum caused organizations to shift from "better keep this data just in case" to "hey, can we maybe not have all this data around? I don't want to be responsible for it."
These are great questions. I suggest two approaches.
First, if you consider the sample size and timeline large enough, you can weigh the stated goals against what has actually happened (be sure to include the societal costs of the legislation in your calculation).
Second, you can look at the success or failure of previous attempts (e.g. data protection directive) and the reasons for success or failure. Are we repeating history, for better or worse, concerning government oversight of the internet and enforcement? Did we improve on the successes by adding more (larger scope, bigger fines, more legislation, etc) or did we exacerbate the failures?
While the questions are subjective and therefore not amenable to an objective large-scale determination, they help me arrive at a conclusion personally.
"How will we know?" is perhaps a better framing. I mean, the law hasn't even been in effect for a year. We know basically nothing yet; any quantifiable effects haven't had time to materialize.
Wow, I have to admit I didn't expect that large an effect.
Even if that were the only effect of GDPR, I think we could already chalk it up as a huge win, but it won't be. The recent news about German anti-trust regulators using very GDPR-like language to forbid Facebook data gathering/correlating, but with the additional teeth of anti-trust, is also a good sign.
I also saw that less than 100 fines were levied, so for now a rate of 600:1 breaches:fines.
Exactly. I think the ratio reflects this very sensible approach. It always seemed sensible to me, but it's good to see the data confirm that, in a rough way.
Even if data breaches are not punished, we can still shame the companies and avoid using their products, which is a major step up from the old status quo, where actively hiding a breach to avoid bad press was a legitimate strategy with very little downside.
What this creates at actual companies: PMs making $250k+ per year who sit on their asses reading about GDPR, telling you whether or not to post comms and whether you should do encryption, without knowing any details about it.
1. GDPR… who cares? Give me a break, sick of that shit.
2. OK, let's do the bare minimum to comply.
3. Hey, you, IT guy! You know where we're storing all the data, right? Good. Put that in a document somewhere. DPIA: done.
4. Oh shit, someone's laptop got stolen / email account got breached. What sensitive data was actually there? Who's affected? What now?
Disclosure: we built software for automated AI discovery and analytics of personal data, PII Tools. It's used by auditors, and I can say the most severe problems come from unexpected places (file shares, archives, email attachments…), not your "front-facing central DB" that's typically at the top of everyone's mind.
> I can say the most severe problems come from unexpected places (file shares, archives, email attachments…),
As to your "…": backups, old hardware that gets discarded, lost USB sticks that had data on them they shouldn't have had in the first place, test systems, and developer laptops with data on them they shouldn't have.
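To make the "unexpected places" point concrete: even a crude scan over one of those forgotten shares turns up plenty. A rough sketch only, nothing like a real discovery tool (no archives, mailboxes, OCR or databases here), and the patterns and path are made up and far too naive for real use:

```python
import re
from pathlib import Path

# Simplistic PII-like patterns; a real tool needs far more robust detection.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone": re.compile(r"\+\d{1,3}[ \d-]{7,14}\d"),
}

def scan_tree(root: str) -> None:
    """Walk a directory tree and report files containing PII-like strings."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = {name for name, rx in PATTERNS.items() if rx.search(text)}
        if hits:
            print(f"{path}: {', '.join(sorted(hits))}")

scan_tree("/mnt/old-fileshare")  # hypothetical mount of a forgotten share
```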