
I've actually had this discussion with Plausible directly back in 2022 [1], and more recently with the lawyer they had write a blog post [2] on the topic. I wrote an article on it that was recently discussed here on HN [3].

The response from Plausible is essentially "we've checked with legal counsel, and stand by the statement". The conversation with the lawyer started out well, but he stopped responding when I asked about the ePD rather than the GDPR.

There generally seems to be a lot of confusion, even in legal circles, about what ePD requires informed consent for. Many think that only PII requires consent, or think that anonymization bypasses it. That amount of confusion makes it very easy for a layman (e.g. Plausible) to find _someone_ willing to back up their viewpoint.

The EDPB released a guideline in 2023 that explicitly states that what Plausible et al. are doing is covered by the ePD's consent requirement, but that came a little too late: the implementations in member countries already differ massively on whether it's covered [4].

1: https://github.com/plausible/analytics/discussions/1963
2: https://plausible.io/blog/legal-assessment-gdpr-eprivacy
3: https://news.ycombinator.com/item?id=42792485
4: https://matomo.org/faq/general/eprivacy-directive-national-i...




> There generally seems to be a lot of confusion, even in legal circles, about what ePD requires informed consent for.

That seems to be true, going by this comment section and the other ones I've seen.

It's hard to get a non-hyperbolic answer to the question: if everyone is so confused, what's the real-world consequence of best-effort implementation?

Some would say it's the ultimate responsibility of the app owner to understand the law, but how much further can you go than hiring a lawyer?

If more diligence than that were needed, none of us would get anything built; we'd all just be running around researching the laws around these dumb popups.

What are the real-world consequences of making a mistake here? What kind of boundary would you have to trip over to actually get the authorities to prosecute you for not having a consent popup or doing it badly?


That is unfortunate, and it seems similar to ADA compliance in terms of what is truly compliant and what is not. It seems like it is up to the courts to decide (speaking as an American; I know the GDPR is a European law). I try to do as much as possible to keep up to date with ADA compliance and best practices, but when it comes to tooling for scanning for non-compliance, there seem to be differences between tools. I believe that showing you made an effort to comply is usually enough to avoid a lawsuit, but it would be nice if things like this were spelled out more clearly for those who need to implement these features.
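
For what it's worth, automated scanners only catch a subset of WCAG issues, so a clean report is necessary but not sufficient. A minimal sketch of such a scan using axe-core (assuming a browser context with esModuleInterop enabled; the tag filter is just one possible configuration):

    import axe from "axe-core";

    // Run an automated WCAG 2.0 A/AA scan against the current document.
    // Automated checks cover only a fraction of the success criteria.
    axe.run(document, { runOnly: ["wcag2a", "wcag2aa"] }).then((results) => {
      for (const v of results.violations) {
        console.log(`${v.impact}: ${v.id} - ${v.help}`);
        for (const node of v.nodes) {
          console.log("  affected:", node.target);
        }
      }
    });

Engines differ in which rules they ship and how they map them to WCAG criteria, which is presumably the tooling discrepancy you're seeing.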

I have recently gone through a conversation with a client in NY state (in the US) that has been told something similar to the GDPR is coming for those that deal with PII. Both the client and the agency I work for have added various scripts to the website for dynamic forms, tracking (Google Analytics), and newsletter functionality. It's at the point where everything third-party has to be discovered first, and then checked for whether it can be anonymized (either by default, or behind a user consent dialog). Even with current laws, the vagueness seems intentional.
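
One pattern that keeps that discovery work tractable is to load nothing third-party by default and gate each script behind a recorded consent choice. A minimal sketch, assuming a hypothetical consent flow; the storage key and the GA measurement ID are placeholders:

    // Inject a script tag only once the user has opted in.
    function loadScript(src: string): Promise<void> {
      return new Promise((resolve, reject) => {
        const s = document.createElement("script");
        s.src = src;
        s.async = true;
        s.onload = () => resolve();
        s.onerror = () => reject(new Error(`failed to load ${src}`));
        document.head.appendChild(s);
      });
    }

    export async function initThirdParty(): Promise<void> {
      // Storing/reading the consent flag itself is generally treated as
      // "strictly necessary" and thus exempt from the ePD consent rule.
      if (localStorage.getItem("analytics-consent") !== "granted") {
        return; // no recorded consent: load no third-party scripts
      }
      // "G-XXXXXXX" is a placeholder measurement ID.
      await loadScript("https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX");
    }

A nice side effect is that the consent check lives in one place, so auditing what runs pre-consent reduces to verifying that nothing else injects script tags.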


Agreed. The company I work for has fought off two "ADA trolls" in the past ~3 years. I'm fully behind accessibility, and we design/develop our website specifically to conform to best practices; I get, and generally accept, that civil remedies are (currently) the only way to enforce any kind of compliance. I nevertheless call the lawyers targeting us trolls, because their technical analysis was beyond incompetent and their understanding of accessibility issues woefully out of date. It cost a few days of my and my developers' time, and I don't know how much lawyer time, to make them go away.

We (I'm in the US) badly need clarifying regulation. Until then, compliance will mainly be about not making yourself low-hanging fruit for opportunistic litigation - which, to be clear, can produce productive results, but is deeply inefficient.



