I made an open-source alternative to these services. Although they worked very well, I wasn't confident about what they actually do, so I made my own and open-sourced it.
It is written in Golang and is fully customizable.
I got the feeling that these features should be part of a browser extension, the same way AdBlock extensions exist. I guess the reason it isn't is the author's personal preference, or is there some technical reason?
There are a few links in sibling comments. But basically, they collect some identifying data to try to optimize load times, and if they don't get that data they just reject the request instead of allowing for longer load times. Archive promises they throw that data away and aren't tracking users, but why rely on trust? And as others point out, there are weird aspects of the code too, so even if you trust them, what about mistakes? And by the way, it isn't just Cloudflare that's affected.
Archive is intentionally violating copyright and needs to know which country you're coming from, so they can serve you content from a country that isn't yours. They need that information to protect the service and keep it running.
I'm definitely not buying that, especially since it isn't just Cloudflare users being affected; it includes Quad9, and the wiki[1] claims the issue was resolved. Note that I can't reach them from Cloudflare or even with Mullvad's DNS. Not giving me much faith, tbh. See the linked thread and the links from there for more info[0]. And I trust Cloudflare way more than one guy whose users have trouble getting their emails answered[2]. Sure, maybe he gets too many and is only one person, but that gives me less faith that things are being done correctly. Or see the weird comments in this HN thread[3].
I don't know for sure, but I would imagine there are more severe actions taken against circumventing paid material (content behind a paywall) than there are for free content supplemented by advertisements.
Edit: The Digital Millennium Copyright Act (DMCA) prohibits circumventing an effective technological means of control that restricts access to a copyrighted work. I guess that would apply here.
Given how liberally the DMCA is applied, you definitely don't want to be on the wrong side of that.
I remember some guy that wrote a WoW bot and got sued using the DMCA, with the argument that his bot was circumventing the anti-cheat and the anti-cheat could be seen as a 'mechanism protecting copyrighted material', because it was safeguarding access to the game servers, the servers were generating parts of the game world (such as sounds) dynamically, and those were under copyright... Wild stuff.
It happened to Honorbuddy, a very advanced bot for World of Warcraft made by a German company. The argument in relation to the DMCA was that the bot was circumventing Warden, the game's anti-cheat system. The legal battle was long, and they ultimately had to strip many features from the bot until the company went under.
Isn't anything that can be circumvented ineffective?
Or, looking at it the other way, if you put a small sticker that says "do not do X" and even one person follows that, isn't that therefore an "effective" method?
> The Digital Millennium Copyright Act (DMCA) prohibits circumventing an effective technological means of control that restricts access to a copyrighted work. I guess that would apply here.
The Docker image, on the upside, is fairly easy to get running. But on the downside, I'm zero for two actually using it.
I tried a Bloomberg article which gave me a "suspicious activity from your IP, please fill out this captcha" page, only the captcha was broken and didn't load.
Then I tried a WSJ article which loaded basically the same couple of paragraphs that I could get for free, but did not load any of the rest of the content.
I'm very new to this kind of service, but do you have to write your own rulesets for each site you want to bypass? The repo doesn't seem to include much...
The ladder applies custom rules to inject code. It basically modifies the origin website to remove the paywall. It rewrites (most of) the links and assets in the origin's HTML to avoid CORS errors by routing them through the local proxy.
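Roughly, the rewriting step boils down to something like this (a minimal sketch, not the actual ladder code; the origin and proxy addresses are placeholders):

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteLinks points href/src attributes that reference the origin back at the
// local proxy, so the browser fetches assets through the proxy instead of
// cross-origin (which is what avoids the CORS errors mentioned above).
func rewriteLinks(html, origin, proxy string) string {
	re := regexp.MustCompile(`(href|src)="` + regexp.QuoteMeta(origin))
	return re.ReplaceAllString(html, `$1="`+proxy+"/"+origin)
}

func main() {
	page := `<link rel="stylesheet" href="https://example.com/site.css">`
	// https://example.com/site.css -> http://localhost:8080/https://example.com/site.css
	fmt.Println(rewriteLinks(page, "https://example.com", "http://localhost:8080"))
}
```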
The ladder uses Golang's fiber/fasthttp, which is significantly faster than Python (biased opinion).
> The ladder uses Golang's fiber/fasthttp, which is significantly faster than Python
I have a feeling that this performance difference is practically imperceptible to regular humans. It's like optimizing CPU performance when the bottleneck is the database.
Not for any publicly hosted instance, it’s not. We’re not talking about the time it takes to perform one request but the scalability it affords a small vm to handle so many requests in parallel when it is being used by the general public.
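For a sense of scale, a pass-through proxy on fiber/fasthttp is only a handful of lines. Here's a minimal sketch (not the ladder's actual code; passing the target as a ?url= query parameter is my own simplification):

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/proxy"
)

func main() {
	app := fiber.New()

	// GET /proxy?url=https://example.com/article fetches the target through
	// this server; fasthttp keeps per-request overhead low, which is where the
	// "handles many parallel requests on a small VM" argument comes from.
	app.Get("/proxy", func(c *fiber.Ctx) error {
		target := c.Query("url")
		if target == "" {
			return fiber.ErrBadRequest
		}
		return proxy.Do(c, target)
	})

	log.Fatal(app.Listen(":8080"))
}
```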
If the paywall is implemented in client code, then usually just disabling javascript for the site is enough to let you view it. If it is implemented server side, then there usually isn't a way around it without an account.
There's no real cat & mouse game here (yet*) - sites don't do anything to mitigate this. Sites deliberately make their content available to robots to gain SEO traction: they're left with the choice of allowing this kind of bypass or hurting their own SEO.
* I say "yet" because there could conceivably be ways to mitigate this, but afaik most would involve individual deals/contracts between every search engine & every subscription website - Google's monopoly simplifies this somewhat, but there's not much of an incentive from Google's perpsective to facilitate this at any scale.
Google publishes IP ranges for Googlebot. You can also reverse-lookup the request IP address - the resolved ___domain should in turn resolve back to the original address.
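A minimal sketch of that double lookup (reverse, then forward) in Go; the address in main() is just an illustrative Googlebot IP:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// isGooglebot verifies a claimed crawler the way Google documents it:
// reverse-resolve the IP, require a googlebot.com/google.com hostname,
// then forward-resolve that hostname and confirm it maps back to the same IP.
func isGooglebot(ip string) bool {
	names, err := net.LookupAddr(ip)
	if err != nil {
		return false
	}
	for _, name := range names {
		host := strings.TrimSuffix(name, ".")
		if !strings.HasSuffix(host, ".googlebot.com") && !strings.HasSuffix(host, ".google.com") {
			continue
		}
		addrs, err := net.LookupHost(host)
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if a == ip {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(isGooglebot("66.249.66.1")) // example address from Google's crawler range
}
```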
Does anyone else remember 10 years ago when Google would penalize sites for serving different content to GoogleBot than to normal users? Those were the days.
"Cloaking refers to the practice of presenting different content to users and search engines with the intent to manipulate search rankings and mislead users"
The top of the pages says sites that violate the policies may "rank lower or not appear in results at all".
It's infuriating when you see part of your desired information in the search results and then open the page to find a paywall. IIRC ExpertsExchange were doing that for a long enough time that it was obvious the policy was not enforced. At least not evenly.
(720 ILCS 5/17-51) (was 720 ILCS 5/16D-3)
Sec. 17-51. Computer tampering.
(a) A person commits computer tampering when he or she knowingly and without the authorization of a computer's owner or in excess of the authority granted to him or her:
(1) Accesses or causes to be accessed a computer or any part thereof, a computer network, or a program or data;
(a-10) For purposes of subsection (a), accessing a computer network is deemed to be with the authorization of a computer's owner if:
(1) the owner authorizes patrons, customers, or guests to access the computer network and the person accessing the computer network is an authorized patron, customer, or guest and complies with all terms or conditions for use of the computer network that are imposed by the owner;
Really dummy question: how do services like this work? As in, how do they bypass these paywalls?
The obvious thing is to mock Googlebot, but site owners can check that the request isn't coming from a Google-published IP and see that it's a fake, right?
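For concreteness, "mocking Googlebot" usually just means sending Googlebot's User-Agent string. A minimal sketch, with a placeholder URL:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Fetch a page while claiming to be Googlebot. Many sites serve crawlers the
// full article text for SEO reasons; sites that verify the source IP against
// Google's published ranges will see through this, which is the question above.
func main() {
	req, err := http.NewRequest("GET", "https://example.com/some-article", nil) // placeholder URL
	if err != nil {
		panic(err)
	}
	req.Header.Set("User-Agent",
		"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
```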
Oh wow... I'm surprised that's enough. When I was researching scraping protection bypass, you had to do some real crazy stuff with the browser instance + using residential IPs at a minimum...
That's not the full story. It works on many sites, but some (ft.com, for example) have more severe countermeasures against paywall bypassing. Therefore the ladder modifies the HTML served from the origin to remove them.
Those rules still need to be built up (by me or the open-source community).
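To give an idea of what such a rule might contain, here is a purely hypothetical shape (not the ladder's actual ruleset format):

```go
package main

import (
	"fmt"
	"regexp"
)

// Rule is a hypothetical per-site rule: a ___domain plus patterns whose matches
// get stripped from the proxied HTML before it reaches the browser.
type Rule struct {
	Domain         string
	RemovePatterns []string // e.g. paywall overlays, metering scripts
}

func apply(r Rule, html string) string {
	for _, p := range r.RemovePatterns {
		html = regexp.MustCompile(p).ReplaceAllString(html, "")
	}
	return html
}

func main() {
	r := Rule{
		Domain:         "example-news.com", // placeholder ___domain
		RemovePatterns: []string{`<div class="paywall-overlay">[\s\S]*?</div>`},
	}
	page := `<p>Article text</p><div class="paywall-overlay">Subscribe!</div>`
	fmt.Println(apply(r, page)) // prints: <p>Article text</p>
}
```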
> site owners can check that the request isn't coming from a Google-published IP and see that it's a fake, right?
Just because they can doesn't mean they will... also, most "site owners" are by this point completely different people from "site operators" (who I take to be the 'engineers' who can indeed check these IP things).
It used to be against guidelines to serve different content to google vs what users would see. Not sure if still the case, but I don't think it's in google's interest to give a result that the user actually can't access.
I'm not aware that this policy has changed. What has changed is that Google will rank results it can't (officially) index without showing their content. I'm guessing they do shadow-index them but use the whole "if you can't outwardly tell they did, it's as if they didn't" trick C++ compilers use to get away with insane optimizations.
Not relevant to the project, but I usually check for earlier versions of paywalled pages in the Wayback Machine (~75% success rate). I feel bad using these services (paywall removers) and feel a bit better checking archive.org instead.
I have noticed that on a lot of websites, if you stop the page loading at just the right moment (you have to be quick), the whole content will display without the paywall. And that's without any external tools.
These kinds of tools seem, of course, much more convenient.
Really great and easy to use. I was trying to read an article that was on the front page of HN and couldn't due to paywall. Downloaded the binary and was reading it within 30 seconds. Awesome and very useful tool, thanks!
I followed this drama on Twitter. The author was breaking the terms of service and creating DMCA support burden for Vercel. They had proactively been in touch with him a few times to reach a solution.
I think it’s quite reasonable that they blocked the account rather than the project. You wouldn’t have got that level of service from big tech.
It’s just a matter of opinion. I think Vercel were acting reasonably with all of the context we have. Things like terms of business are at the account level, not project level.
One might consider Cloudflare as a very nice competitor to Vercel in terms of DX, although I suspect all companies use a ban the world approach, even banks.
Why would you assume independence? I'd expect an outage to put people "on edge" for a period of time following the outage, during which changes are scrutinized to a higher degree, and/or a greater engineering focus/budget is dedicated to reliability to reflect the changed business/image requirements.
- Support is failing us - I want my team to use you for vercel support, but it isn't there.
- Support is failing our customers - when you fail, I end up digging through your repo to tell us why it's failing -- just give us a clear answer and we all walk away happy; bullshit us and I go to Lambda, where I just accept it.
- EOD: Vercel makes engineers happy to bullshit, but gives operations teams nothing acceptable - I want a deliverable product.
And if the support team had done so, I'd have nothing to converse about :)
After digging upwards, additional support seems like an option delivered too late, and too outside of 'proper' channels - if you want a sanitized rant I can probably deliver it tomorrow, but too-little too-late is where vercel has landed in the operations team.
Very disappointing that this was the path Vercel chose to take. This is something I would expect from Google or Amazon, but not a developer darling like Vercel. Seems like all companies shed their values in service of growth and capitalism at some point or another. A shame.
I don't know why anyone trusted Vercel in the first place. The vibes of VC money funding an unsustainable offering for a relatively niche market are so strong, it doesn't make any sense.
Shows the importance of controlling your own critical infrastructure, or at least not being dependent on others for critical functions. Other examples include GitHub and Discord, both of which have shown a tendency to arbitrarily ban users with little recourse.
Taking down all his projects (not just 12ft) is heavy-handed, but otherwise Guillermo’s response in that thread seems pretty reasonable to me:
> Hey Thomas. Your paywall-bypassing site broke our ToS and created hundreds of hours of support time spent on all the outreach from the impacted businesses.
> Our support team reached out to you on Oct 14th to let you know this was unsustainable and to try to work with you.
The 12ft guy doesn’t look so great in that thread. He admits to ignoring the email (gosh I was busy mmkay?) and then argues that Vercel is lying about the extra work they had to do.
Sure, but if you go on vacation and don't check your email for two weeks, you can't really claim “no warning”. If two weeks isn’t sufficient notice because of vacation it’s fine to say so, but it’s not the same thing as “no warning” just because you’re not checking email.
To take it even further, if you're a one man operation, not checking emails regarding your operation for two weeks is pure negligence, vacation or not.
I might ignore personal project emails while I'm on vacation, but I also won't complain if one of those emails says my billing method is out of date and I come home and it's been turned off.
But if you missed one bill for a service and your account was nuked for all the other products you had paid without issue, you'd be right to get mad, and that's exactly what's happening here:
> I’m sympathetic to vercel here. Honestly very reasonable to take down 12ft with no response in 2 weeks.
> but otherwise Guillermo’s response in that thread seems pretty reasonable to me
“Other than the completely unreasonable thing, they seemed pretty reasonable”.
I mean, he's not being a complete asshole in the discussion here[1], but nuking the customer's entire account for a ToS violation on one specific product still isn't a reasonable move. Yes, Google and Amazon do it routinely, but if you're not a trillion-dollar monopoly and you care about your business's reputation, you shouldn't behave like them.
[1] CEOs behaving like assholes in Twitter discussions isn't supposed to be the norm.
If you violate the TOS of a free service, it’s not on them to surgically split your account into “offending” and “non-offending” parts, especially if they reach out to you to try to work with you to remove the offending parts and you don’t respond for two weeks.
There's no surgery involved and yes it is on them. Actually they retroactively did so when he complained, and as such they agreed that nuking everything was the wrong move.
Also, the “offending part” has been well known to them for several years (they even claim it cost them a lot in support over the years), so it's not like they received a DMCA notice and had to take everything down in a hurry. They knew exactly which product they wanted to stop, and they did stop it because it was too costly. The fact that it violated their ToS is just the legal justification for the closure, not its source.
First of all congrats on the project and thank you for open sourcing it.
> Freedom of information is an essential pillar of democracy
However, this reads as if this tool saves democracy by letting you bypass a crappy paywall on a site you visit once a year, and as if whoever wants to get paid for their published content online is an enemy of democracy.
Ironically, the rest of the paragraph you quoted from gives their reasoning why they believe this tool is needed beyond "whoever wants to get paid for their published content online is an enemy of democracy". Double-plus democracy and all that...
It seemed to me like 12ft.io was useful for a couple of months, but then stopped being useful as they agreed to blacklist more and more URLs. I thought everybody switched to archive.is, which (so far) works 100% of the time, even if it is sometimes a pain in the butt.
The operator of archive.is must constantly re-up on hacked credentials for wsj and nyt. Given this is a critical aspect of the service, it is not really feasible/useful to open source it.
My assumption here is that affected websites sent multiple, persistent support tickets and engaged in back-and-forth communication, plus there were updates to the client and the support team had to contact engineering/legal/management and hold meetings on how to deal with 12ft.
Access to private property is an essential pillar of democracy and the safe proliferation of ideas. While property owners have legitimate financial interests, it is crucial to strike a balance between property and the public's right to access property. The proliferation of locks on doors raises concerns about the erosion of this fundamental freedom, and it is imperative for society to find innovative ways to preserve access to people's homes and workspaces without compromising the sustainability of property ownership. In a world where property should be shared and not commodified, locks should be critically examined to ensure that they do not undermine the principles of an open and informed society.
I use services like this to skip news site paywalls because I just can't afford, nor is it practical, to have so many subscriptions.
That said, I work in news media (and have been involved in building paywalls at different orgs - NYT and New Yorker). I know how the money from these directly supports journalism - salaries and the costs associated with any story.
If you are skipping paywalls a lot, I would encourage you to pay for a subscription to at least one or two news sites you respect - bonus points if it's a small or medium local newsroom that benefits!
For me that has been: NYTimes, New Yorker, Wired, Teen Vogue, and my wife's hometown paper in Illinois.
My personal experience with this has been that paying for a subscription still gets me inundated with ads and marketing (often more now that I'm on their official mailing list), is still inconvenient since I may not be logged in to every news site on every device where I may follow an article link, and leaves me to fight through dark patterns to unsubscribe, since a button to allow you to cancel online is clearly dark magic that has not yet been invented.
I do wish there was a better way for me to share an account across multiple news sites that let me properly pay for good journalism without these issues. I do subscribe to a very local news source that seems to handle this a lot better, but they also don't paywall (most) of their primary content.
In the meantime, I do find it strange that so many sites want the advertising advantage of having put an article up on the web without actually providing that article. I have no issue with paid content, but when that content gets listed in search engine results and social media links like a web page, yet clicking on it does not behave like a web page, it feels like something has broken in the idea of the linkable World Wide Web.
There's a huge need for subscription bundles. I'd gladly pay $20/mo for access to a bunch of big names, even if I'm limited to like 60 articles per month combined across those sources.
Instead I just don't pay anyone, turn back when I encounter a paywall and look for someone's summary if I'm really interested.
Apple News is a bit of a discovery pain in the neck. If I have a 5 year old Atlantic article, I can’t just click the article and have it open in Apple News. I can’t search for it. If the article is any older, the magazine won’t appear at all.
In my experience Apple News relies entirely on the app for reading articles, so if you are on a computer without the app, or following a link that doesn't auto-redirect to the app, then you still hit the paywall.
I'd rather have a system that was just a cross-website web account.
There used to be an app called Scroll (https://twitter.com/tryscroll?lang=en), which got bought by Twitter and is now part of their subscription, but only for the top articles. Informed.so is doing something similar but different: https://www.informed.so/
The problem with creating such a service is that most media houses believe their content is the best thing since sliced bread, so they often don't want to partner, even though most of their content isn't that unique. Of course, some publications do have unique content, e.g. NYT and Bloomberg.
I could see artifact being an interesting company to tackle this though (https://artifact.news/). They are already sending traffic to news sites and only serving what the user wants. If they now let me bypass paywalls for $20 that would be nice.
For some reason all the alternative "archive.XYZXDHWIQHDQ" type sites always give me a captcha page, and I am never able to proceed. I'm assuming it's to do with the Cloudflare DNS; well, if they don't care to fix it on their end, I don't care to use their service.
It's kind of an "everybody sucks" situation and there are no real winners.
Archive.[whatever] set up a server system to give you access from a country not your own, so that abusers have a harder time archiving illegal content and then instantly reporting it to get the entire archive taken down. He uses EDNS Client Subnet to do this, but CF doesn't provide it, since to them it's a privacy issue.
So archive.[whatever] doesn't work for CF DNS because he doesn't want to risk bad actors being able to take down the archive.
Sensible reasons on both sides, especially for a service like archive.[whatever], and the real losers in this situation are the users.
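For the curious, the mechanism here is EDNS Client Subnet (ECS): the resolver attaches part of your network prefix to the query so the authoritative server can answer based on roughly where you are. A rough sketch of such a query with the miekg/dns library (the prefix and resolver address are placeholders; Cloudflare's resolver deliberately omits this option):

```go
package main

import (
	"fmt"
	"net"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("archive.ph"), dns.TypeA)

	// Attach an EDNS Client Subnet option carrying a /24 of the client's network.
	opt := new(dns.OPT)
	opt.Hdr.Name = "."
	opt.Hdr.Rrtype = dns.TypeOPT
	opt.SetUDPSize(4096)

	ecs := new(dns.EDNS0_SUBNET)
	ecs.Code = dns.EDNS0SUBNET
	ecs.Family = 1 // IPv4
	ecs.SourceNetmask = 24
	ecs.Address = net.ParseIP("203.0.113.0").To4() // placeholder prefix
	opt.Option = append(opt.Option, ecs)
	m.Extra = append(m.Extra, opt)

	// Google's public resolver forwards ECS; Cloudflare's does not.
	c := new(dns.Client)
	resp, _, err := c.Exchange(m, "8.8.8.8:53")
	if err != nil {
		panic(err)
	}
	for _, a := range resp.Answer {
		fmt.Println(a)
	}
}
```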
Copying my previous comment over because I found a fix that works for me:
There's some issue with DNS over HTTPS, so you have to whitelist their sites in your settings, or turn off DNS over HTTPS (which I don't recommend).
To whitelist, on Firefox: Hamburger menu > settings > privacy and security > DNS over HTTPS > Manage exceptions > Add "archive.is", "archive.ph", and "archive.today"
Aside from not being censored at all (which lets you visit sites that are blocked at the DNS level in some locations), there are several options for ad blocking at the DNS level too, often eliminating the need for a proxy or VPN to get access, with optional adblock as a service.
What do you mean, why not? The point is just to use anything other than Cloudflare for archive.is, and Mullvad is not Cloudflare, so it seems fine, go ahead.
There is no special reason not to use Cloudflare DNS in general, though.
The problem is only between Cloudflare and archive.is (and its aliases), and it's hard to say if either side is wrong, except for the fact that either or both of them could figure out some special exception where they recognize each other's traffic if they cared to. Cloudflare is not censoring archive.is, for example, and is not doing anything wrong.
Yes, I know. It's just that I had these problems too when I used CF, which I tried for speed and for some 'lawful' censoring reasons, thereby running into the exact same problem.
Then I tried Mullvad DNS: the speed was still there, the 'lawful' censoring was gone, the problems with archive sites ceased to exist, and I got somewhat configurable adblocking as a service.
It's a seamless 'plugin' solution that doesn't degrade anything.
Before they went down, it seemed that many big publishers got the owner to disable it for their sites. Either that, or the sites learned to actually not send their articles unless the user is logged in (and didn't care about Googlebot not scanning them).
It was just an effective way to get through substack/medium in my experience.
> Freedom of information is an essential pillar of democracy and informed decision-making. While media organizations have legitimate financial interests, it is crucial to strike a balance between profitability and the public's right to access information. The proliferation of paywalls raises concerns about the erosion of this fundamental freedom, and it is imperative for society to find innovative ways to preserve access to vital information without compromising the sustainability of journalism.
We live in a world where we have more misinformation and poor journalism every day, and less money in people's pockets to pay for good journalism. So this might start a more open discussion on how to finance journalism. And while those discussions are still going on, people can inform themselves with good journalism, which supports democracy.
> And while those discussions are still going on, people can inform themselves with good journalism, which supports democracy.
That is a looter's mentality, sorry to say. Paywall-jumping software is like robbing a disabled old veteran on public transport. These are the last blows finishing off what was once good journalism.