17 out of 20. It was super easy until #10, where I had to stop and think more carefully (which was actually my first mistake), and then I got #14 and #15 wrong. The score was about what I expected, though; I would've been surprised if it was <15 correct.
I wonder how much of this comes down to screen calibration / color accuracy? If everything's consistently off in one direction, I guess not much, but I would imagine certain shades might appear effectively the same on some cheaper screens?
I think a lot of the work is in new device bring-up. Given that they can start from official Pixel trees, it shouldn't be too much work to adapt them for whatever GrapheneOS-specific build processes they have, and I'm assuming the rest of the GrapheneOS customizations are framework-side, which should be device-agnostic. I guess they could have kernel changes for hardening, but I'm not sure how easy or hard porting those would be. Is the Pixel 9 series on a newer kernel version than, say, the Pixel 8?
GrapheneOS has various features requiring hardware integration. Our hardening features also uncover bugs across the codebase, particularly in drivers; the hardware memory tagging integration in our hardened_malloc project and the standard Linux kernel hardware memory tagging support we use are especially good at surfacing them. Pixels are very similar to each other, though, so for them this work is largely, but not entirely, already done.
Adding new Pixels, including whole new generations, tends to be quite easy. When we still supported Snapdragon Pixels in the official GrapheneOS branch, it would have been fairly easy to add support for a theoretical Sony or Motorola device meeting our security requirements (https://grapheneos.org/faq#future-devices). Now that we don't have a Snapdragon device, we'd need to deal with fixing or working around all the bugs uncovered in Snapdragon drivers, integrating support for the USB-C controller into our USB-C port control feature (https://grapheneos.org/features#usb-c-port-and-pogo-pins-con...), adding back our code for coercing Qualcomm XTRA into being privacy-respecting, etc. Snapdragon doesn't yet have memory tagging like Tensor (and now recent flagship Exynos/MediaTek), but even if it did, we'd need to solve a lot of issues uncovered by it unless Qualcomm and the device OEM had heavily tested with it.
See https://news.ycombinator.com/item?id=43669913 for more info, including about kernels. The 6th, 7th, 8th and 9th generation Pixels have shared the same Linux 6.1 source tree and kernel drivers since Android 15 QPR2 in March 2025.
Pixel 9a is still using a branch of Android 15 QPR1 due to how device launches work, so most of the work involved taking our last Android 15 QPR1 release from early March and rebasing it onto the Pixel 9a device branch tag, which forked off from an earlier Android 15 QPR1 release with current security patches backported to it. We then had to backport our changes since early March. The device branch will go away in a couple of months, and the Pixel 9a will be on the same branch as everything else until its end-of-life, as usual. We could spend more time integrating it into our main Android 15 QPR2 based branch ourselves, but we can also simply wait a couple of months.

As an earlier example, the Pixel 8a was released in May 2024 based on Android 14 QPR1 rather than the then-current Android 14 QPR2. It was incorporated into mainline Android 14 QPR3 in June, only a few weeks later. We know Android 16 is already going to deal with this, so spending our time on it instead of implementing new privacy, security, usability and compatibility improvements would be a waste.
You need to whitelist challenges.cloudflare.com for third-party cookies.
If you don't do this, the third-party cookie blocking that strict Enhanced Tracking Protection enables will completely destroy your ability to access websites hosted behind Cloudflare, because it becomes impossible for Cloudflare to know that you have solved the CAPTCHA.

This is what causes the infinite CAPTCHA loops. It doesn't matter how many of them you solve: Firefox won't let Cloudflare record that you solved one, so when the page reloads, as far as Cloudflare can tell, you just tried to load the page again without solving it.
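To make the loop concrete, here's a minimal sketch in Python (this is not Cloudflare's actual code; the "clearance" cookie name is a stand-in for the real clearance cookie):

    # Sketch of the challenge loop; not Cloudflare's actual logic.
    def handle_request(cookies: dict) -> str:
        if "clearance" not in cookies:
            # The browser refused to store/send the third-party cookie,
            # so the solved CAPTCHA was never recorded: challenge again.
            return "challenge page"
        return "origin content"

    # With third-party cookies blocked, the cookie jar stays empty forever:
    print(handle_request({}))                      # challenge page
    print(handle_request({"clearance": "token"}))  # origin content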
> You're telling me cloudflare has to store something on my computer to let them know I passed a captcha?
Yes?
HTTP is stateless. It always has been and it always will be. If you want to carry state between page visits (like "I am logged in to account ...", "My shopping cart contains ...", or "I solved a CAPTCHA at ..."), you need to be given, and return to the server on subsequent requests, cookies that encapsulate that information, or that encapsulate a reference to an identifier the server can associate with that information.
This is nothing new. Like gruez said in a sibling comment, this is what session cookies do. Almost every website you visit will give you some form of session cookie.
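To illustrate, here's a small sketch using Python's requests library (the URLs are placeholders; example.com won't actually issue a session cookie):

    import requests

    # A Session persists cookies across requests, like a browser does:
    # a response's Set-Cookie header gets replayed as a Cookie header
    # on every subsequent request to the same site.
    session = requests.Session()

    # First visit: the server can issue a session cookie via Set-Cookie.
    session.get("https://example.com/login")

    # Later visits automatically send that cookie back, which is the only
    # way the stateless HTTP server can tie the two requests together.
    session.get("https://example.com/cart")
    print(session.cookies.get_dict())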
Then don't visit the site. Cloudflare is in the loop because the owner of the site wanted to buy, not build, a solution to the problems that Cloudflare solves. That is well within their rights and a perfectly understandable reason for Cloudflare to be there, just as you are perfectly within your rights to object and avoid the site.
What is not within your rights is to require the site owner to build their own solution to those problems to your specs, or to require them to just live with the problems because you want to view the content.
That would be a much stronger line of argument if Cloudflare weren't used by everyone and their consultant, including on a bunch of sites I very much don't have the option of not using.
When a solution is widely adopted or adopted by essential services it becomes reasonable to place constraints on it. This has happened repeatedly throughout history, often in the form of government regulations.
It usually becomes reasonable to object to the status quo long before the legislature is compelled to move to fix things.
Why? This isn't a contrarian complaint, but the problems that Cloudflare solves for an essential service require verifying certain things about the client, which places a burden on the client. The problems exist in many cases precisely because the service is essential, which makes it a higher-profile target. Expecting the client to bear some of that burden in order to protect the service is not, in my mind, problematic.
I do think it's reasonable for the service to provide alternative methods of interacting with it when possible. Phone lines, mail, and email could all be potential escape hatches. But if a site is on the internet, it is going to need protecting eventually.
That's a fair point, but it doesn't follow that the current status quo is necessarily reasonable. You had earlier suggested that the fact that it broadly meets the needs of service operators somehow invalidates objections to it, which clearly isn't the case.
I don't know that "3rd party session cookies" or "JS" are reasonable objections, but I definitely have privacy concerns. And I have encountered situations where I wasn't presented with a CAPTCHA but was instead unconditionally blocked. That's frustrating but legally acceptable if it's a small-time operator. But when it's a contracted tech giant, I think it deserves scrutiny. Their practices have an outsized footprint.
> service to provide alternative methods of interacting with it when possible
One of the most obvious alternative methods is logging in with an existing account, but on many websites I've found the login portal itself barricaded behind the screening measure, which entirely defeats that.
> if a site is on the internet it is going to need protecting eventually
Ah yes, it needs "protection" from "bots" to ensure that your page visit is "secure". Preventing DoS is understandable, but many operators simply don't want their content scraped for reasons entirely unrelated to service uptime. Yet they try to mislead the visitor regarding the reason for the inconvenience.
Or worse, the government operations that don't care but are blindly implementing a compliance checklist. They sometimes stick CAPTCHAs in the most nonsensical places.
Cloudflare is too stupid to realize that carrier-grade NAT is extremely common in Germany. Sharing an IP with literally 20,000 people around me shouldn't make me suspicious when it's others on that IP who trigger that behavior.
Your assumption is that anyone at Cloudflare cares. But guess what: it's a self-fulfilling prophecy of a bot being blocked, because not a single step in the UX/UI lets a real user complain about it, and therefore all blocked humans must also be bots.
Just pointing out the flaw of bot blocking in general, because you seem to be absolutely unaware of it: the measured success rate of bot blocking is always 100%, never less, because anything less would mean admitting that your tech does nothing, really.
Meanwhile, the ones actually running bots can bypass it easily.
>Cloudflare is too stupid to realize that carrier-grade NAT is extremely common in Germany. Sharing an IP with literally 20,000 people around me shouldn't make me suspicious when it's others on that IP who trigger that behavior.
Tor and VPNs arguably have the same issue. I use both and haven't experienced "infinite loops" with either. The same can't be said of Google, Reddit, or many other sites using other security providers. Those either have outright bans, or show CAPTCHAs that require far more effort to solve than clicking a checkbox.
If you want to try fighting it, you need to find someone with a CF enterprise plan and bot management enabled, get blocked there, and get them to report the block as wrong. Yes, it sucks, and I'm not saying it's a reasonable process; just in case you want to try fixing the situation for yourself.
Honestly, it's a fair assumption for bot-filtering software that no more than about 8 people share an IPv4 address. Widespread CGNAT is going to make IP-reputation solutions hard. Argh.
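A toy sketch of how that plays out (names and threshold made up for illustration):

    from collections import Counter

    # Naive per-IP reputation: count bad events per address and block
    # the address once it crosses a threshold.
    failures = Counter()
    BLOCK_THRESHOLD = 8  # the "no more than ~8 people per IPv4" assumption

    def record_failure(ip: str) -> None:
        failures[ip] += 1

    def is_blocked(ip: str) -> bool:
        return failures[ip] >= BLOCK_THRESHOLD

    # Under CGNAT, thousands of users share one egress address, so a
    # handful of bad actors gets everyone behind it blocked:
    for _ in range(20):
        record_failure("100.64.0.1")  # 100.64.0.0/10 is the CGNAT range
    print(is_blocked("100.64.0.1"))   # True, for all 20,000 people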
It's well within your rights to go out of your way to look suspicious (e.g. by obfuscating your user agent). At the same time, sites are within their rights to refuse service to you, just like banks can refuse service to you if you show up wearing a balaclava.
You're assuming too much. I'm not obfuscating or masking anything. I'm just using Firefox with some (to me, the user) useless web APIs disabled to reduce the browser's attack surface, and CF isn't doing feature testing before relying on them. It's not just websites that need to protect themselves.
E.g. Anubis here works fine for me, completely outclassing the CF interstitial page with its simplicity.
Apparently user-agent switchers don't work for fetch() requests, which means Anubis can't work for people who use them. I know of someone who set up a 2022 version of Brave with a user agent claiming to be Chrome 150 and then complained about it not working for them.
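My guess at the failure mode, as a sketch (hypothetical; this is not Anubis's actual code, just an illustration of how a navigation-only user-agent switcher produces inconsistent headers):

    # Hypothetical consistency check, NOT Anubis's implementation.
    # A navigation-only user-agent switcher rewrites the UA on page
    # loads but not on the fetch() that submits the challenge answer,
    # so the server sees two different user agents for one visitor.
    def validate_challenge(page_load_ua: str, fetch_ua: str) -> bool:
        return page_load_ua == fetch_ua

    print(validate_challenge("Chrome/150", "Chrome/150"))  # passes
    print(validate_challenge("Chrome/150", "Brave/2022"))  # stuck forever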
How successful can you consider the alternatives to be when using them means you can't easily (and, with hardware-backed attestation, potentially can't at all in the future) use ChatGPT or banking apps, order from McDonald's, etc.?
It's funny, because this only works on a single device, the Android emulator; it doesn't even work on first-party Nexus and Pixel devices without first downloading hundreds of megabytes of proprietary blobs.
Oh, perfect, thanks! I've been using Niri for less than a week, hadn't got to using named workspaces yet, and missed the bit in the docs where it says they can be empty.