Interoperability can save the open web (ieee.org)
207 points by pseudolus on Sept 6, 2023 | 97 comments



A smart, insightful interview with Doctorow, by a good interviewer, Michael Nolan. Reasonably optimistic, the framing is on target, dare I say inspiring.

Read it; it is worth your time. That's more useful than dwelling on thoughts such as "the Web is past its prime" and "even if we improve it, Big Tech will exploit it". Repeating the negative generalizations over and over can contribute to a self-fulfilling prophecy.

If so inclined, after you read it, find a way (small or otherwise) to help.


I did just that, years ago. I took a bet on the open internet.

I don't have to convince big tech to treat me right; there's already a whole big open internet where I can go and which I can connect to. Existing techniques from the 90's and before still work. Websites still work. Websites without third-party code work. Analytics from Google? Thumbs from Facebook? One can remove those with ease. Just to name a few.

So I ended up not with the most innovative approach (no blockchain, no NFT, no AI), but with a solid piece of software to build your own website with. One can connect to other independent websites via RSS. One can download it for self-hosting, or take a subscription. You can design the thing yourself without knowing anything about design; it's all built-in functionality. Use it for business with a webshop, as a personal blog or photobook, or as an organization with an online info library and custom forms to connect stakeholders, etc. Oh, and visitor statistics without cookies and tracking.

Websites don't necessarily have to stay websites, they can evolve into personal online multi-tools that work nicely together with the rest of the open web.

My main concern - broadly speaking - is knowing that the future content of every website I deliver might end up 'raped' by some AI company and 'abused' as part of their product. But still, it doesn't stop me from betting on the open internet, because AI products don't exclude me and my customers from publishing our content on the internet.


I followed the link and gotta say, love the approach you took.

About AI, totally. I guess my hope is big publishers waking up and realizing that they, too, are being ripped off by big SV tech that's just blatantly ignoring copyright.


Thanks a lot! It encourages me.

If you happen to start a free website at Hey Homepage, know that the 'Quick Design' module (that I'm pretty proud of) might not work over there (yet). I'm working on it.


What link is that, very curious?


It's in the user profile.


>> Repeating the negative generalisations over and over can contribute to a self-fulfilling prophecy.

Very true, but it's also treacherous to go blindly in the opposite direction. We need to be capable of both modes of thought... even if they contradict occasionally.

Interoperability definitely has the potential to "save" the open web. It's very good to have someone focus on that with optimism and conviction. OTOH, we also need to remember that there are (or at least may be) other factors at play.

At this point, "freeing the web" likely entails bankrupting FB and Google. Their ad businesses just do not work without dominant market share, control over user data and such. See Twitter, Bing, reddit, etc. They don't have pro rata earnings relative to the big boys.

Can Google and Facebook fail, without causing disaster, political mitigation efforts, recession, etc? I'm not saying this to contradict the arguments, just pondering the scale of the task at hand.

I think a usable middle road might be to focus on interoperability's direct, first-order achievables... not the big picture. What, in the most practical and down-to-earth terms, can be achieved by an achievable step in the interoperability direction?


Agreed. I'd put the emphasis on not dwelling, on maximizing detailed system awareness, and on keeping a bias towards doing something.


It's the only thing that can. Interoperability should be written into law and stuff like remote attestation should be banned. Otherwise there will be no such thing as "open" anything. There is no freedom if they refuse to interoperate with you for daring to exercise it.


Indeed, you have to go legal about this (which is exactly what I am doing, but my lawyer just gave birth, so it will take some time).

"Interop" alone does not mean that much anything: what Big Tech is scared of, small software, simple protocols, able to do a good enough job, which it is "easy" and "cheap" to develop an alternative of.

For instance, IRC(TLS) bridges, or noscript/basic (x)html (HTML forms can do wonders). The EU started to issue personal user certificates for authentication, and let me tell you: it is HORRIBLE to install such a certificate in Big Tech browsers... and I am suspicious about the certificate file format (never really got into it).

But don't fool yourself, Big Tech "knows" and will fight it, then expect the worst: they will shadow-hire teams of hackers to destroy your alternative. Keeping "juicy" public internet servers up and properly running is 50% of the job... and it gets worse if you have a payment processor.


> Indeed, you have to go legal about this (which is exactly what I am doing

That's extremely interesting. Can you tell us more?

> "Interop" alone does not mean that much anything

True.

By "interoperability" I mean being able to have any client connect to any server without discrimination. They should not know or care what software I'm running, only that it speaks the same network protocol. Remote attestation violates this by enumerating approved clients and cryptographically vefifying them.

"Adversarial interoperability" is an even more interesting concept.

https://www.eff.org/deeplinks/2019/10/adversarial-interopera...

> But don't fool yourself, Big Tech "knows" and will fight it, then expect the worst: they will shadow-hire teams of hackers to destroy your alternative.

I don't expect it to be easy. We're talking about amoral trillion dollar corporations who only work for their bottom lines. I wouldn't be surprised if they killed people over stuff like this. Coca-Cola did.


Behind Big Tech (msft/apple/intel/google/meta/etc), you have BlackRock and Vanguard; we are talking tens of thousands of billions of dollars.

There is zero "economic competition" here: only moral values and strategic interests.


The last time I got warnings of "intended violence" was from some vmware zealot.

Shadow-hiring teams of hackers to give hell to Big Tech alternatives is one thing; contract killing/violence is another.

But since we are talking about massively rich trash (or severely mistaken) human beings, it is important to stay alert. They can literally buy anything they want.


Not sure when this article was prepared, but it doesn't mention WEI and related technologies (remote attestation), which are IMHO the biggest current threat to interoperability. Sure, the "standard" is open, and theoretically anyone can implement it, but when the only "trusted" keys are those owned by Big Tech, it's hard to argue it's actually open.


Doctorow is a co-author on this EFF article from 2023-08-07 discussing WEI and remote attestation: "Your Computer Should Say What You Tell It To Say" https://www.eff.org/deeplinks/2023/08/your-computer-should-s...


EFF's series on adversarial interoperability is also great.

https://www.eff.org/deeplinks/2019/10/adversarial-interopera...


The biggest threat against the open web is the malicious actors on it, like spammers and fraudsters.


If spammers and fraudsters are the price of freedom, so be it. Have corporations and banks eat the cost. God knows they can afford it.


Eh, I'm barely able to run a search engine due to all the bot activity. Less than 1% of my traffic is human; the other 2.5 million queries per day are from bots. It's only by the grace of Cloudflare that it works.

Your mindset ensures the big corporations will be the only actors able to host anything with any level of interactivity. Doesn't sound very open to me.


Well, your mindset ensures the computing freedom we enjoy today will be destroyed. Google is on the verge of introducing remote attestation in browsers so that servers can cryptographically verify any number of things about your client. It's essentially guaranteed that none of those things being verified will be in our best interests.

I'd rather have a static web where I have the power to choose my own browser, inspect source, block ads or even just use curl or Python to scrape something.


What mindset is that, actually trying to build alternatives to FAANG?

I honestly think remote attestation is a smaller problem. At least then we can still build alternative services. I'd rather have a free web and a locked down browser than a free browser and nothing but an endless digital mall to browse. Remote attestation will only lock you out of that endless mall to begin with. It will find ways of sucking regardless of what Google does.


The mindset of sacrificing computing freedom for user convenience and corporation profitability. We should be absolutely uncompromising on this one value. The alternative is the destruction of everything the word "hacker" ever stood for.

> nothing but an endless digital mall to browse

Funny, that's essentially what the web feels like to me in 2023. Escaping it is the number one reason why I visit HN nearly every day. It's like the only sane website left. I don't even open the links posted here anymore, websites are just unbearable these days even with uBlock Origin turned up to the max. I just assume whatever's important enough will be directly quoted in the comments.

The "static web" you and the commenter below mentioned? I actually kind of want it.


Dude, my struggle is to be able to operate a non-profit service free of charge with no ads, and the aim is to help users find a way out of exactly the aforementioned digital mall and onto the free and independent, mostly static web.

This is harder every day due to all the bot traffic helping itself to a disproportionate share of my service via a botnet that is all but indistinguishable from human traffic.

But yeah, must be the corporate profits I'm hoarding...?


Why is it relevant whether the traffic is human or automated? The whole point of the internet is you can put a server out there and anyone anywhere can connect to it with any HTTP client.

To me it seems like the only people who care about that are those who want to sell our attention to the highest bidder via advertising. Wouldn't you be having the same difficulties if there were just as much traffic coming from humans?


I want to provide as many human beings as possible with value by distributing my processing power fairly between them. If I get DDoSed by a botnet, I won't provide anyone with anything other than, optimistically, an error page.

If I had infinite money and computing resources, this would be fine, but I'm just one guy with a not very powerful computer hosted on domestic broadband, and even though I give away compute freely, it just takes one bag of dicks with a botnet to use it all up for themselves, and without bot mitigation, I'm helpless to prevent it.

Oh and I actually do provide an API for free machine access, so it's not like they have to use headless browsers and go through the front door like this. But they still do.

Serves me right for trying to provide a useful service I guess?


Arguably, the problem here is that you want to do it free of charge. That's the problem in general: adtech aside, people want to discriminate between "humans" and "bots" in order to fairly distribute resources. What should be happening though, is that every user - human and bot alike - cover their resource usage on the margin.

Tangent: there's a reason the browser is/used to be called a user agent. The web was meant to be accessed by automation. When I use a script to browse the web for me with curl, that script/curl is as much my agent as the browser is.

I see how remote attestation and other bot detection/prevention techniques make it cheaper for you to run the service the way you do. But the flip side is, those techniques will get everyone stuck using shitty, anti-ergonomic browsers and apps, whose entire UX is designed to best monetize the average person at every opportunity. In this reality, it wouldn't be possible to even start a service like yours without joining some BigCo that can handle the contractual load of interacting with every other business entity...

(Also need I remind everyone, that while the first customers of remote attestation are the DRM-ed media vendors, the second customer is your bank, and all the other banks.)


> The web was meant to be accessed by automation.

Completely agree.


I see. I respect that.

The bot detection won't come without cost. It will centralize power in the hands of Cloudflare and other giants. I think it's only a matter of time until they start exercising their powers. Is this really an acceptable tradeoff?

If we do accept it, I think the day will come when Cloudflare starts rejecting non-Chrome browsers, to say nothing of non-browser user agents.


I don't see any good options at this point. The situation profoundly sucks for everyone involved. We're stuck between the almost absurdly adversarial open web, or bargaining with the devil at Cloudflare, and now Google's remote attestation which is basically Google taking a stab at the problem.

To be clear, I don't think remote attestation is a good solution, but it's at least a solution. Any credible argument against Cloudflare or remote attestation needs to address the state of the open web and have some sort of plan for how to fix it. Or at least acknowledge that this is what Google and CF are trying to solve. Dismissing the problem as a bunch of mindless corporate greed just doesn't fly. It affects anyone trying to host anything on the Internet, and is only getting worse. The status quo and where it's heading is completely untenable.

It's easy to say well just host static content, but that's ceding all of Internet discovery and navigation and discussion and interactivity to big tech, irreversibly pulling up the ladder on any sort of free and independent competition in these areas. That's, in my opinion, a far greater problem.


Yes, I agree with you. It sucks having to make these choices and compromises. The adversarial nature of the web is difficult for service providers but it's actually ideal for users. We all benefit from being able to connect to servers using any browser, any HTTP client. This is especially true when the service providers don't like it. Software like yt-dlp is an example of software that interoperates adversarially with websites, empowering us.

I apologize if I came off as aggressive during my argument. It was not my intention. I think we reached the same conclusion though.

Maybe the true problem is bandwidth is too expensive to begin with. Would the problem still exist if the costs were negligible?


Network bandwidth cost is negligible, it's hardware and processing power that's expensive. Each query I process is up to a 100 Mb disk read. I only have so much I/O bandwidth.

As far as I see it, there are two bad solutions to this problem.

The first bad solution is to have a central authority inspect most of the web's traffic and try to deduce who is human. This is the approach taken by Cloudflare, but essentially the same as Remote Attestation. It gives the chosen authority a private inspection hatch for most of the web's traffic, as well as unfettered authority to censor and deny service as they see fit.

The other bad option is a sort of 'free as in free enterprise' Ferengi Internet where each connection handshake involves haggling over the rate and then each request costs a fraction of a cent. This would remove the need to de-anonymize users, likely kill the ads business and virtually eliminate DDoS/sybil attacks. It would also be an enormous vector for money laundering, and as a cherry on top make running search and discovery services much more expensive. I do think the crypto grifters pretty solidly killed the credibility of this option.


Xanadu comes close to this "ferengi Internet" mindset with some of the tactics it chooses for monetization of content, albeit from an entirely different angle (enabling remix culture more or less indiscriminately while preserving the sanctity of the existing copyright system and enabling royalties to flow to authors proportional to how their works are used and reused).


> The other bad option is a sort of 'free as in free enterprise' Ferengi Internet where each connection handshake involves haggling over the rate and then each request costs a fraction of a cent.

> This would remove the need to de-anonymize users, likely kill the ads business and virtually eliminate DDoS/sybil attacks.

Sounds like a massive win to me on all fronts. I agree with you.

> It would also be an enormous vector for money laundering

I don't mind. If that's the price, I pay it gladly.


Is this a purposeful DDoS or just bots trying to scrape results? If this is a DDoS on purpose, what's their financial gain? Did they demand payment?

If you're talking about bots scraping content, then the question is also why. Perhaps by letting them do so, you indirectly provide value to even more human beings?

It's entirely possible that these questions are absurd. However, since scraping using headless browsers is not free, there must be some reason for scraping a given service... and it's usually something that in the end benefits more human beings.


Best guess is it's some attempt at blackhat SEO, to manipulate the query logs and typeahead suggestions (I don't have query logs but whatever, maybe they think I secretly forward queries to Google or something).

But really, fuck if I know. I've received no communication, so I can only guess at what they're trying to do. I have a free public API they're more than welcome to use if they want to actually use the search engine, but they still go through a botnet to the public web endpoint.

I've talked to a bunch of people operating other search engines and all of them are subject to this type of 24/7 DDoS. It's been going on for nearly two years now.


Oh, I guess query logs could help reveal a pattern which could be throttled/blocked..


>Why is it relevant whether the traffic is human or automated?

Because all traffic costs the service provider money, but automated traffic can be run at thousands of "users" for less than the cost of one human user (who after all is bounded by time and the cost of computation and bandwidth), whereas automation is not bound by time, giving it the opportunity to DoS you - either on purpose or just accidentally.


But the solution for this is a rate limit, not captcha. The real reason they care about "human traffic" is because bots don't buy stuff.


Rate limits do bupkis against a botnet. It's not possible to assume that each IP or connection is one person. The crux that initiatives like remote attestation are trying to solve is that, as it stands, one person may command tens of thousands of connections, and from a server standpoint there's really not much you can do to allocate resources fairly.
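
A minimal per-IP token bucket sketch (purely illustrative, with made-up numbers) shows why: it caps any single IP just fine, but a botnet spreading its requests across tens of thousands of IPs stays under every per-IP limit while still exhausting the server.

    import time
    from collections import defaultdict

    RATE = 1.0    # tokens refilled per second, per IP (hypothetical)
    BURST = 10.0  # bucket capacity (hypothetical)

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(ip: str) -> bool:
        """Return True if this request from `ip` is within its per-IP budget."""
        b = buckets[ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False

    # 10,000 bots sending 1 request/second each are all allowed individually,
    # yet together they can swamp a single small server.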


You're the first person to say anything about captchas. The guy who started this argument, needing some way to sort out human traffic, operates a free service and is complaining that the bot traffic makes it hard to offer a free service, since bots cost money.


By captcha I meant "telling computers and humans apart", not necessarily a particular implementation.

Why are you focusing on bot traffic? Doesn't human traffic also cost money? Who operates bots and why?


The problem is allocating resources fairly. A single human may operate tens of thousands of bots, and thus use a disproportionate amount of resources, possibly all of them.


Do you want anything other than static websites to be something only a corporation can run?

Everyone suffers from this.


Yes so let's let one giant company hold the keys to the castle to fix it.


Since nobody is proposing alternative solutions that actually work, people are looking to Cloudflare and Google for help with the real problem that impacts them right now.

People are looking for solutions to:

- How do we only allow real humans to access it so we stop wasting money on handling spam requests?

- How do we permanently ban a known malicious individual from accessing it?


Sorry to disappoint you, but web integrity doesn't work either. On Android, where you have the Play Integrity API, bot farms are still very much alive and kicking.

Third-party ROM users, on the other hand, are affected, and third-party browser vendors will similarly be affected if this is pushed forward, reducing competition in the space.

The whole thing is a complete failure on Android so I don't see any argument why we should also suffer on the web.


Spammers and fraudsters are a problem, sure, but the biggest? By what measure?

If we're trying to predict which threats are "big" enough to lead to system failure, the analysis is quite different. In natural systems, parasites tend to fill niches and can persist for lengthy periods, often as long as the host. Or longer.

Think of it as a historically situated evolutionary battle. There are many failure modes; one way to tease apart the likely causal threats is to think through a lot of scenarios.

Under what conditions do you think spam/fraud would (more or less) 'destroy' the open web? And what does that destruction look like to you?


>By what measure?

Time and money lost to them. The idea of creating a web closed to those people starts to look very attractive when you have to spend time, multiple times a day, cleaning spam out of your site.

>Under what conditions do you think spam/fraud would (more or less) 'destroy' the open web?

It's already happening.

>And what does that destruction look like to you?

The destruction looks like more websites able to offer free or cheap services. A great reduction in spam comments. More effective ads. For good actors nothing really changes.


The biggest threat against the open web is Google, which wants to turn the web into yet another AOL with vertical integration.

It is the corporation that wants to put malware on your computer under the pretense of "verifying content integrity", just to force people to see their ads from spammers and fraudsters.

Google ads often promote scams in Google search.


... and other fairytales we tell ourselves

The web is dependent on big tech, so interoperability will always be limited by shareholder interests

We have to consider the alternative hypothesis, that interconnectivity opens up markets and leads to global competition and winner-take-all phenomena, whether it is the internet, the money market, export controls, foreign real estate etc etc. If we really want the small guy to win again, maybe we should wish for the opposite


> The web is dependent on big tech,

If that were the case, it would never have been built until big tech came along and invented it.

As the web existed for years before big tech got involved, and for a few more years after they got involved but were still playing catch-up, I don't see how that holds up.

> interoperability will always be limited by shareholder interests

Again, history shows the opposite.

There were a number of attempts by large corps to create something like the web before the web, like Minitel and Compuserve, and to some extent AOL. (Also see X.400 messaging before SMTP email) They all lost out to the web because they had gatekeepers and no interoperability, whereas the web with its interoperable standards allowed anyone to come along and build anything.

The more you tighten your grip, Tarkin...


In a sense, it is dependent on big tech. Intel, AMD, et al. make the substrate upon which all of tech is dependent.

We don't notice it so much because the era of monopolistic dominance ended 30 years ago when AMD cloned the 386. The ability to write code that didn't utterly depend on Intel came along strong in the years since.

I think we're in that era now. People are beginning to notice that "social media" isn't really. People just want to send messages to friends and family, share photos with friends and family, and maybe catch up on what somebody famous, like Richard Stallman, is doing. (I have my finger on the pulse of public taste doncha know.)


The web's interoperability today is almost a miracle.

Take a look at other industries. Proprietary formats everywhere. You want to make music? Ok buy these VST and AU. Make games? FBX and PSD go burrrrr. And I'm sure it's only getting worse in more traditional, B2B industries.


Are you really sure of this? I can use any detergent in my dishwasher, any pot on my stove, plug any device into the sockets on my walls. I can continue this into eternity; you get the idea.

There are so many industries that really rely on interoperability, and have for so long, that it's so natural you don't even notice it.

So, going back to your argument: it's like Doctorow says - the phase of immense oligopolies right now is not the natural state of the web, and it's something we've seen with other industries in the past. We can do something about this.


> going back to your argument

...?

I don't even know if I made an argument. I was merely expressing my observation. If I had an argument, it would have been "interoperability is not to be taken for granted".

By the way, sockets are not globally unified. So yeah, the web, an interoperable standard across billions of users, is quite a miracle.


> Ok buy these VST and AU.

That is why CLAP, an open spec, is being pushed by some DAWs. Steinberg (Yamaha) literally forbids distributing the older VST SDK today. The VST format is not open. CLAP doesn't have that problem.


Yeah. I think we should protect this miracle at all costs. It's too good to just sit and watch it be destroyed.


Unfortunately, the Open Web just makes it easier for Google to smart snippet all the things. And it's not even about monetization (via ads) for the source content provider, but how about giving some credit to the source.


Having Google crawl everything on an Open Web is immensely preferable to the alternative of a closed web, or no web at all. Part of uploading things to the internet is reconciling that everyone can see, copy and distribute the content you provide. It's part of authoring anything digitally, and a poor boogeyman in a world where the Open Web has few demonstrable harms.

What should really scare people is the prospect of a common interface like the internet disappearing and being monetized by private interests. We take the Open Web today for granted, and while I partially feel like Doctorow is too fatalist, I also agree that interoperability is a core part of what makes the web function.


Even if that is true, that is not what users want. They do not want everything they have ever posted to be out on the internet with no way to delete it.

Scrapers are hostile actors against users which is one reason social media like sites invest resources to defend their users against scrapers.


> Scrapers are hostile actors against users

They are also user agents: the idea that the only viable web browser is one that meets a certain preconception (like Chrome) is a violation of the fundamental design principles of the web. If I want to scrape some pages and render them in a terminal (archive them to track silent edits to important news stories), via text to speech or whatever, websites should be produced in a format that’s amenable to it: the web will be richer and more vibrant as a result.
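
As a tiny illustration of that point (a sketch of my own, fetching a hypothetical URL, using only the standard library): a script that fetches a page and dumps its visible text to the terminal is just another user agent.

    import urllib.request
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Collect visible text, skipping <script> and <style> contents."""
        def __init__(self):
            super().__init__()
            self.skip = 0
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip += 1

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self.skip:
                self.skip -= 1

        def handle_data(self, data):
            if not self.skip and data.strip():
                self.chunks.append(data.strip())

    # Hypothetical page; swap in any URL you want to read in the terminal.
    with urllib.request.urlopen("https://example.com/") as resp:
        parser = TextExtractor()
        parser.feed(resp.read().decode("utf-8", errors="replace"))

    print("\n".join(parser.chunks))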


Search engine scrapers are friendly to me, in that they make it possible to find content that I want. A cool blog post is no good for me if I can't find it.


>> Part of uploading things to the internet is reconciling that everyone can see, copy and distribute the content you provide

In some ways this is a narrow definition of the Web. There is a lot of activity placed behind a login to expressly prevent the information from being publicly accessible.

If I upload a private repo to github, I expect it to be private. If I interact with a doctor or lawyer on a site, I expect that to be private.

Of course, interoperability controlled by the -user- is different from interoperability controlled by the host, or by some external entity (a scraper). The former is good, the latter less desirable.


What you're describing is privacy. Lack of interoperability is like uploading code to github from one computer, and then failing to download it to another because the encryption algorithms or something are not compatible.


Interoperability isn't the problem. Leverage to enforce your own IP, or lack thereof as an individual, is.

Just because you publish content on the Web doesn't mean you give license to anyone to use it however they want. IP is rooted in a foundational principle of giving explicit consent. Copyleft is using that principle to explicitly state "anyone is free to use this however they want". Without that consent, it's assumed that the author can ask you to cease and desist. (Hence why e.g. Wikipedia is plastered with Creative Commons license mentions.)

Sure, there are fair use exceptions. But if you take a close look at the conditions that need to be met before a published copy can be considered fair use, it's not as clear cut as it seems.

Thing is, only big media outlets with capital, like the NYTimes, are able to litigate against big actors who wholesale misuse interoperability in a tragedy-of-the-commons kind of fashion.

This imbalance in resources and capital to enforce rights between a handful of big actors and everyone else is exactly what Doctorow draws attention to in the interview.


> the Open Web just makes it easier for Google to smart snippet all the things

How is this a problem? A simple robots.txt rule disallowing Googlebot would solve your concern if you want to de-list from Google.
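
For instance, a minimal robots.txt that asks Google's crawler to skip the whole site (standard robots.txt directives, assuming the crawler honors them) looks like:

    User-agent: Googlebot
    Disallow: /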


But then you have the tree falling in the woods problem.


Does Google have a track record of not respecting robots.txt? Otherwise why is it a problem?


It is a bit unclear to me what exactly is being proposed. That every app exposes some kind of common standard API that can be used to link with other apps? Take their Twitter example. How would "taking your followers with you" work in practice? I can imagine some sort of indirection layer that associates each user with their respective provenance domain, plus an update protocol if the provenance changes, but there are likely challenges when it comes to scalability. Who is going to design/mandate these interfaces and where will they be applicable? What's the criterion?


In one word, MyData: I should be able to authorize a new service Y to download all data about me from service X. I should be able to delete all data about me from service X and leave a redirect to my new profile URL at service Y.

Number portability is not a novelty requirement in competition law, but somehow we haven't expanded it to apply to online accounts.

Standardisation is needed but that's why RFCs, W3C etc. exist. The existing web standards go a long way if fully implemented.
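
As an illustrative sketch of the "leave a redirect" part (nothing standardized, just plain HTTP semantics with hypothetical names): service X answers requests for the departed user's old profile URL with a permanent redirect to the new profile at service Y, so links and crawlers follow the move.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical record of migrated profiles; a real service would persist this.
    MOVED_PROFILES = {
        "/users/alice": "https://service-y.example/@alice",
    }

    class ProfileRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            target = MOVED_PROFILES.get(self.path)
            if target:
                # 301 tells clients and crawlers the profile moved permanently.
                self.send_response(301)
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), ProfileRedirect).serve_forever()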


Thank you! I totally agree that data should belong to the user (in fact, I find it shocking that some consider this opinion controversial). I'm still not quite clear, however, on how the software interoperability is supposed to be achieved. I understand that one can impose a standard, but a standard presupposes a certain data model. What if my application does not follow this data model? Would one ban social network apps that don't want to implement a particular database schema?


Interoperability is focused on what happens between services, not what happens inside. You don't have to use a specific database schema to be able to support standards in APIs.


Like how Facebook Messenger and Google Chat both used to support XMPP: they didn't run those services on XMPP internally, but they built gateways that let clients speaking XMPP talk to them.


The exact example you're describing is what the ActivityPub spec and the resulting "Fediverse" are aiming to solve. Mastodon and compatible microblogging platforms are distributed versions of Twitter, Lemmy & Kbin are compatible distributed versions of Reddit/HN style link aggregators.

Given the boost both of these parts of the fediverse have gotten from Twitter/Reddit's series of missteps recently, they're starting to look like they could become viable long-term alternatives.


For VOD platforms, it would be a dream if one day Netflix / Disney+ / Apple TV / ... would offer iframes, and I could use other sites that offer their own catalog browsing and recommendation ergonomics, not limited to a catalog shaped by walled-garden editorial choices.

Why have iframes lost so much use?


Bunch of issues:

- security (now improved but historically very problematic)

- accessibility (confusing for screen readers)

- responsiveness

- scrolling

- usability (for example, if something in the iframe wants to display a modal with a backdrop covering the whole screen)

But I agree that Netflix catalog UI sucks


I agree with the premise of the article. Part of me thinks that a lot of the drama around data privacy was manufactured by big tech precisely to get people riled up about privacy protection in order to ensure that such interoperability solutions (which are ultimately about sharing data) would not see the light of day.

But this is casting a shadow on the idea that some users may actually want to share some of their data broadly and across platforms in an open and interoperable way. Regulators were so busy focusing on data protection, they completely neglected the other side of the coin, which is data propagation. As a user, just as I have the right to have some of my data protected, I should have the right to have some of my data freely propagated. Kind of like copyright versus copyleft in the software industry.

It seems like it should be possible to force companies which have a monopoly (or near-monopoly) to at least allow users to opt in to data sharing. Like for example, there should be a way for me to tell Facebook to share my name and/or email address publicly via API...

Though on the flip side, I think Facebook should be allowed to delete my account (with appropriate notice) if they don't want to support making my data public, since they shouldn't be forced to bear the bandwidth costs. In any case, I think this would offer people the option to confront big tech platforms about appropriate use of their data. People own their data and should be able to set the terms and change the terms any time they want. Big platforms with a near-monopoly should not be able to make it an all-or-nothing (if you don't like it, delete your account) kind of deal.


This is a difficult topic. On one side, it seems wrong to force platforms to pay for hosting and bandwidth costs associated with bots scraping their open user data... Yet it also feels wrong for big tech to continue to use their monopoly position to take away users' bargaining power in terms of control over their own data.

It feels like the free market solution would be to form some kind of large group, like a union, a syndicate whose members would agree to delete their Facebook (or whatever platform) accounts en masse unless their demands are met (similar to how a union works when employees go on strike). Then Facebook (or whatever platform) could decide whether it makes sense to lose all these users/accounts or comply with their demands to make their data public and take the costs of hosting/opening up that data. Then this would not require government intervention.

I think one thing the government could do would be to facilitate the creation of such group/syndicate via a large advertising campaign. That seems like the right level of government involvement.


The union/syndicate idea is interesting, but as with unions, reprisals are possible since there is little to no anonymity on social media, and no legal protection. Reprisals in the form of affecting your work, for instance. Not just affecting your access to social media, but these conglomerates offer other goods and services, and maybe they'll just update their EULAs so that such actions forbid you from using their other services...


Prof. Mark Lemley hosted a conference on Interoperability in January 2023, which I was at. Doctorow's work was part of the reading list.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4412862

Much of this stuff was covered there, although I wouldn't claim the conference reached a focused, actionable conclusion.


This solution effectively comes down to "get regulators to police big tech more, in this particular way".

How much luck have we had with meaningful regulation in the tech industry recently? Doesn't feel like much, if any. Everyone hates GDPR and the cookie banners. Restriction of advertiser tracking — the closest thing to a consumer win that comes to mind — was not a product of policymaking but of Apple's competitive action.

I personally love the idea of government regulation but America clearly doesn't. This market needs solutions that don't involve top-down control.


Well, Europe is doing it with the Digital Markets Act.


yes, yes it can.


>Doctorow proposes forcing interoperability

No surprise that Doctorow proposes authoritarian solutions to create an "open web".

"It's open unless you run your server in ways me and my fellow ideologues disagree with"


Curious what your thoughts are on the paradox of tolerance? Google and the person hosting a static blog on a server in their basement are not the same.


Legal consequences, which are backed by the state's apparatus of violence comprising police, judges, and prisons, should only be used against those who engage in fraud or violence. Running a closed API, as much as I may dislike it, is neither of those things.

There are plenty of other ways to counter closed APIs that do not rely on initiating force.


There is also the tool of taxation and fines, which, while in turn backed by a threat of violence, are not the black-and-white you allude to.


Taxes and fines are entirely within the realm of legal consequences I was referring to.

Fining people who get abortions, or engage in promiscuous behavior, or don't subscribe to the dominant religion, doesn't strike you as authoritarian?


What's your take on Adam Smith?


Amazing man


I find that quite surprising.


> Cory Doctorow presents a strong case for disrupting Big Tech ... Doctorow proposes forcing interoperability

Forcing? Like, legally requiring every media company to maintain an RSS feed and support activity pub? Also, is this what disrupting means?


> Forcing? Like, legally requiring every media company to maintain an RSS feed and support activity pub?

What other way is there? Waiting, until they do it out of pity?

The EU was playing around with the idea already, they just went a little overboard in their first drafts of the Digital Markets Act. But I think we'll see something like that come out of Europe soon.


> What other way is there? Waiting, until they do it out of pity?

Did we legally force browser vendors to support web protocols and standards? Didn't the market solve this for us — i.e. if you created a browser that didn't do http, or https, or web sockets, or support a video or an audio html tag, then it would just die, because people won't use it. Are we admitting here that interoperability does not bring any competitive advantage, and that you need a state regulator to force you to do it?


And then Google slowly built a monopoly on browser implementations. Interoperability isn't appealing to corporations when users don't have much of a choice on using your platform.


> Did we legally force browser vendors to support web protocols and standards?

Did you not pay attention? Microsoft tried exactly that, for close to a decade. The only thing saving the ecosystem was their own incompetence at developing an at least half decent browser.

Now Google is trying the exact same trick, and unfortunately, they are doing much better. And the thing that's probably going to thwart their efforts is not the open market, but the closed market of Apple's "absolutely-nothing-but-WebKit" walled garden.


Yes, what would be bad about that?



