
Similar results for me. Does anyone know if it's possible to turn off WebGL, and if so, how? AFAIK I never use it for anything and I'd rather have increased anonymity. (Assuming disabling it prevents it from being used for fingerprinting.)

Edit: Answering my own question. In `about:config`, change the `webgl.disabled` preference from `false` to `true`. This reduced the "bits of identifying information" from WebGL from 11.26 to 2.56.

Edit 2: Apparently the CanvasBlocker add-on is a better solution as it randomizes the data used for fingerprinting on each read, and works for several exploitable APIs, not just WebGL. https://addons.mozilla.org/en-US/firefox/addon/canvasblocker...


CanvasBlocker actually increases your trackability, because the consistent factor is now that you have a changing canvas fingerprint (which almost no one does).

This is why Safari tries to give a universal canvas fingerprint so you can "blend in" with other users.


I agree that a universal canvas fingerprint is better in principle, but practically who is going to write a script to search for all visitors who only differ by their canvas fingerprint and then identify them as one browser because the fingerprints are non-standard?


Practically, it requires little more work than the fingerprinting framework itself! If someone puts in the effort to write a framework that tracks you via canvas fingerprints, it's little extra work to add a simple diff that finds people trying to evade it.
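To make the "simple diff" concrete, here's a hedged sketch in Rust of how a tracker might flag randomizers: the `Tracker` type, the key format, and the logic are all hypothetical illustrations, not any real fingerprinting product.

```rust
use std::collections::HashMap;

// Hypothetical sketch: a tracker keys visitors by their *other* signals
// (user agent, IP, fonts, ...) and flags anyone whose canvas hash changes
// between visits -- the tell of a randomizing add-on like CanvasBlocker.
struct Tracker {
    last_seen: HashMap<String, String>, // other-signals key -> canvas hash
}

impl Tracker {
    // Returns true if this visitor's canvas fingerprint changed since the
    // last visit, i.e. they are likely randomizing it.
    fn observe(&mut self, visitor_key: &str, canvas_hash: &str) -> bool {
        match self
            .last_seen
            .insert(visitor_key.to_string(), canvas_hash.to_string())
        {
            Some(prev) => prev != canvas_hash,
            None => false,
        }
    }
}

fn main() {
    let mut t = Tracker { last_seen: HashMap::new() };
    assert!(!t.observe("ua+ip+fonts", "hash-a")); // first visit: nothing to diff
    assert!(t.observe("ua+ip+fonts", "hash-b")); // changed -> likely randomizer
}
```

The randomized hash itself becomes the distinguishing signal, which is the parent comment's point about blending in being safer than standing out.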


Looks like the last line needs to be updated to have the server listen on 8080 instead of 80. (I'm guessing this is left over from before you added the non-root user.)


Did this issue cause all add-on data to be wiped? After updating to 66.0.4, all of the containers I'd created with the multi-account containers add-on were gone and replaced with what appeared to be a default set of containers. I spent a lot of time setting that up—is there no way to get it all back if I don't have some sort of manual backup? And if not, what files do I need to manually back up to make sure I don't lose my data next time?

Edit: To be clear, at no point did I delete the add-ons I had installed.


From the release notes (https://www.mozilla.org/en-US/firefox/66.0.4/releasenotes/) :

> If add-ons that use Containers functionality (such as Multi-Account Containers and Facebook Container) were disabled as part of this problem, any lost site data or custom configurations for those add-ons will not be recovered by this release. Users may need to set them up and login again in about:addons (Bug 1549204).

> Themes may not be re-enabled. Users may need to re-enable them in about:addons (Bug 1549022).

> Home page or search settings customized by an add-on may be reset to defaults. Users may need to customize them again in about:preferences or about:addons

I don't know if there is a way to manually recover the settings you lost. It might be a good idea to check the related links in the release notes for more info.


Containers are notoriously finicky datawise.

I generally keep good backups of the non-standard ~/.mozilla folder to compensate.

I think it was issue 339 on GitHub. They basically explain that they won't add it to sync data because there are no containers on mobile.

https://github.com/mozilla/multi-account-containers/issues/3...


I could never understand why they were not synced. It's so incredibly frustrating, and the official reason does not help.

I simply don't understand Mozilla anymore. The power users are also the unpaid evangelists/marketers. They seem determined to alienate this demographic while iOS'ifying Firefox for a general audience. Which is great and all until they realize they don't have a marketing budget to compete with Google and MSFT.


> I think it was issue 339 on GitHub. They basically explain they won't add it to sync data because there is no containers on mobile.

Makes you wonder how on earth their data sync is working with regards to mobile. Surely if it doesn't have the components to leverage the data, it just wouldn't read it?


It's possible to implement; the concern was that people would leak their cookies into the default container.

They could essentially sync everything while not syncing any non-default container data to mobile.

Basically they had some work to do on the server end I think.


I too lost my multi-containers and assignments. I was just reading this (2017) discussion: https://github.com/mozilla/multi-account-containers/issues/3... and noticed mention of file 'containers.json', which I found in my current Firefox profile. It contains the descriptions of the containers I created since yesterday (for the 3rd time).

So it looks as though a manual backup of that (or the whole profile, knowing it's in there) will at least end that chore.

But you still lose the Container -> websites associations. The same page recommends: "An effective way of exporting and importing containers safely is using ffcontainers."

The page for that syncer is here: https://github.com/pierlauro/ffcontainers ... but it's on hiatus at this moment.


containers.json wasn't enough in my experience. Still went through some kind of reset due to the way the plugin initializes.

ffcontainers looks promising, but I couldn't get it to work (spaces in my path...); it probably needs a bit of a cleanup.


Yes, the "Firefox Multi-Account Containers" was unfortunately reset for me too.


Mine seems fine. My currently open tabs are still in their containers, and when I open a new tab, my list of choices is still intact.


My Add-ons kept their settings as far as I can tell, but maybe there's something special about the container logic...


This is the biggest reason I'm still using Firefox. Containers have totally changed my workflow, much preferred over Chrome profiles.

I wish they'd add more colour and icon support; there are open cases for it, but they're not prioritised. If only I were a good enough developer...


I found that the changes I made to container names and colors were reset, but my hostname assignments remained. I guess those were two different datasets and only one was reset.


Please, Mozilla, choose Matrix. https://www.ruma.io/docs/matrix/why/


Not a huge surprise. Here's another security issue with Docker Hub they've let sit for 4 years with no action: https://github.com/docker/hub-feedback/issues/590 (which is apparently a dupe of https://github.com/docker/hub-feedback/issues/260).


This is a good mostly-layman's overview of Matrix: https://www.ruma.io/docs/matrix/


But it does seem to be the case that the same SSH key pair that was used to access Jenkins also provided access to the production infrastructure. Unless I'm misunderstanding the nature of the attack.


It seems the issue was developers using SSH agent forwarding which was abused to access the production environment.


The history of TryFrom/TryInto has spanned 3 years, from when it was originally proposed as an RFC in 2016. For a seemingly simple API, it's gone through a lot. Especially unusual was that it was stabilized a few releases ago and then had to be destabilized when a last-minute issue was discovered with the never type (`!`). The never type had been the primary blocker for stabilizing these APIs for the last year or so, but it was finally decided to simply use this temporary `Infallible` type, which would be mostly forwards compatible with the never type itself.
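A quick illustration of the stabilized APIs: `TryFrom` for conversions that can fail, with `Infallible` standing in as the error type for conversions that cannot (it is the placeholder for `!` the comment describes).

```rust
use std::convert::{Infallible, TryFrom};

fn main() {
    // Fallible conversion: u16 -> u8 can overflow, so try_from returns
    // a Result with a real error type.
    assert!(u8::try_from(300u16).is_err());
    assert_eq!(u8::try_from(255u16), Ok(255u8));

    // Infallible conversion: u8 -> u16 always succeeds. The blanket impl
    // (every From impl yields a TryFrom impl) uses Infallible -- an empty
    // enum with no values -- as the error type, so the Err arm can never
    // be constructed.
    let widened: Result<u16, Infallible> = u16::try_from(7u8);
    assert_eq!(widened, Ok(7u16));
}
```

Once `!` is stabilized as a full type, `Infallible` is intended to become an alias for it, which is the forward-compatibility story mentioned above.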

I've followed the issue closely because it's one of the features used in Ruma, my Matrix homeserver and libraries. In fact, for the library components of the project, it was the last unstable feature. With the stabilization of these APIs, I'll finally be able to release versions of the libraries that work on stable Rust. This will happen later today!


Here's a funny quote about that...

>> Can you eli5 why TryFrom and TryInto matters, and why it’s been stuck for so long ? (the RFC seems to be 3 years old)

> If you stabilise Try{From,Into}, you also want implementations of the types in std. So you want things like impl TryFrom<u8> for u16. But that requires an error type, and that was (I believe) the problem.

> u8 to u16 cannot fail, so you want the error type to be !. Except using ! as a type isn’t stable yet. So use a placeholder enum! But that means that once ! is stabilised, we’ve got this Infallible type kicking around that is redundant. So change it? But that would be breaking. So make the two isomorphic? Woah, woah, hold on there, this is starting to get crazy…

> new person bursts into the room “Hey, should ! automatically implement all traits, or not?”

> “Yes!” “No!” “Yes, and so should all variant-less enums!”

> Everyone in the room is shouting, and the curtains are spontaneously catching fire. In the corner, the person who proposed Try{From,Into} sits, sobbing. It was supposed to all be so simple… but this damn ! thing is just ruining everything.

> … That’s not what happened, but it’s more entertaining than just saying “many people were unsure exactly what to do about the ! situation, which turned out to be more complicated than expected”.

https://this-week-in-rust.org/blog/2019/03/05/this-week-in-r... https://www.reddit.com/r/rust/comments/avbkts/this_week_in_r...


> new person bursts into the room “Hey, should ! automatically implement all traits, or not?”

Lol; that was me (although there were probably others). I approve of the dramatic rendition.

https://github.com/rust-lang/rfcs/issues/2619


And a funny addendum for context (from the same r/rust thread as the above quote):

> The never type [is] for computations that don't resolve to a value. It's named after its stabilization date.


> so you want the error type to be !.

For others having trouble grokking that sentence: ! is the never type. Should never happen. https://doc.rust-lang.org/std/primitive.never.html
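As a small example, `!` as a return type is already stable: a function returning `!` can never return normally, and an expression of type `!` coerces to any other type.

```rust
// A function returning `!` must diverge (panic, loop forever, exit, ...).
fn fail(msg: &str) -> ! {
    panic!("{}", msg);
}

fn parse_or_die(s: &str) -> u32 {
    match s.parse::<u32>() {
        Ok(n) => n,
        // `fail(...)` has type `!`, which coerces to u32, so both match
        // arms still unify on the same type.
        Err(_) => fail("not a number"),
    }
}

fn main() {
    assert_eq!(parse_or_die("42"), 42);
}
```

What was (and still is) unstable is using `!` as a first-class type in other positions, such as `TryFrom`'s associated `Error` type.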


It makes me sad that people continue to put their time and effort into supporting these closed communication systems. If anyone else is considering making something like this, please base your efforts on Matrix. It's so much better for us and so much more deserving of our attention.


Chicken and Egg. People don't use Matrix.


That's less of a problem with Matrix. I set up a WhatsApp bridge because all my friends are on WhatsApp, and now I can use any Matrix client, even a very lightweight CLI one, to talk with them.


To be fair, for an internal communications use case you just need to convince your company, not half the world.

If open-source comms tools leaned more into business use cases (Mattermost does this well), then there could be more usage.


> To be fair, for an internal communications use case you just need to convince your company, not half the world

I imagine that's true for companies whose users stick to internal workspaces, but often the most active/vocal users (1) participate in multiple (often professional) non-company workspaces, and (2) don't want to run Yet Another Chat client.


xmpp as well (matrix is getting a lot of love right now, but it's also got some warts deep in areas that are not the plushy UX)


I wish this had gone into some more technical detail about what "CNB" does that is actually better. Most of the article was just rehashing some problems with Dockerfiles, but the conclusion is just "CNB fixes it!" The one specific improvement they mention is being able to "rebase" an image without rebuilding the whole thing, which certainly sounds interesting, but is not explained. How does it work? What else is CNB other than a wrapper around `docker build`?


The presentation to the CNCF TOC covers some of the technical details: https://www.youtube.com/watch?v=uDLa5cc-B0E&feature=youtu.be

Some key points:

- CNBs can manipulate images directly on Docker registries without re-downloading layers from previous builds. The CNB tooling does this by remotely re-writing image manifests and re-uploading only layers that need to change (regardless of their order).

- CNB doesn't require a Docker daemon or `docker build` if it runs on a container platform like k8s or k8s+knative. The local-workstation CLI (pack) just uses Docker because it needs local Linux containers on macOS/Windows.


> How does it work?

The OCI image format expresses layer order as an array of digests. Essentially, "read the blobs with these SHAs in this order, please".

Cloud Native Buildpacks have predictable layouts and layering. A buildpack can know that layer `sha256:abcdef123` contains (say) a .node_modules directory. It can decide to update only that layer, without invalidating any other layer.

And the operation can be very fast, because you can do it directly against the registry: GET a small JSON manifest, make an edit, PUT it back.

This is a big deal because under the classic Dockerfile model, changes in a lower layer invalidate all the higher layers. That means your image can be invalidated by OS-layer changes, dependency changes, and so on. It's the right policy for Docker to have -- a conservative policy -- but buildpacks have the advantage of additional context that lets them rely on other guarantees, most notably ABI guarantees.
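To illustrate the "rebase" idea with a toy model: an OCI manifest's layers are just an ordered array of content digests, so replacing one digest in place leaves every other layer (and its registry blob) untouched. This Rust sketch models only that concept; the digests and the `rebase` helper are illustrative, not part of any real CNB or registry API.

```rust
// Toy model of an OCI manifest's layer list: an ordered array of digests.
// "Rebasing" swaps one digest in place; the other entries are untouched,
// so only the changed blob would need to be re-uploaded to the registry.
fn rebase(layers: &mut Vec<String>, index: usize, new_digest: &str) {
    layers[index] = new_digest.to_string();
}

fn main() {
    let mut layers = vec![
        "sha256:os-layer".to_string(),
        "sha256:deps-layer".to_string(),
        "sha256:app-layer".to_string(),
    ];

    // Replace only the dependency layer (e.g. an updated node_modules),
    // without invalidating the OS layer below or the app layer above.
    rebase(&mut layers, 1, "sha256:deps-layer-v2");

    assert_eq!(layers[0], "sha256:os-layer"); // untouched
    assert_eq!(layers[2], "sha256:app-layer"); // untouched
}
```

Contrast this with Dockerfile caching, where changing the middle layer would force the app layer above it to be rebuilt and re-pushed.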


There's some more detail here: https://buildpacks.io/docs/

