>Perhaps with 80% of their funding gone, Firefox will be forced to stop wasting money on all those harebrained non browser initiatives and concentrate on ... the Firefox browser.
Again, the whack-a-mole myth that simply won't die. I have asked people about this over and over again for, I'm gonna say, the past year and a half, and at this point I feel pretty confident that this is a mass-hallucinated myth. If you try to be objective and actually look at the numbers, at the time period over which Firefox lost browser share, and at their budgets during the period in which they engaged in side bets, the math just doesn't add up. None of the side bets came at prohibitive development cost, and they didn't occur over the period in which Firefox's browser share crashed. There's no missing browser feature that Firefox was unable to implement because the resources for it were siphoned away by side bets. And people seem to have forgotten that they're supposed to actually make a real argument about these things before simply claiming them.
There is a kinda-sorta real version of the argument, which is that around 2016 or so, before Firefox released Quantum, the quality of the browser was lagging behind alternatives while Mozilla was investing significant resources in Firefox OS. That's the closest to a real thing this argument can attach to. But no one making this claim has actually looked at their budget, or at how much it costs to run a VPN, or made any cause-and-effect connection between those things and the browser's decline. This is a myth that got hallucinated into existence by HN comment sections.
I do think the critique of straying from a commitment to privacy is a real thing. But the narrative that they wasted time and resources on side features while the core browser experience deteriorated, and the attempt to establish a cause-and-effect relationship between that and market share, is not backed up by any facts. And if you look at my comment history, it's practically a year of pleading with people to cite any example whatsoever that would substantiate this argument.
Wholeheartedly agree. Opera, before it pivoted to Chromium and sold to Chinese investors, was I think the apex example of this. I will never stop singing the praises of Opera Unite, which was a brilliant and potentially revolutionary way of leveraging the browser for something that could have been the basis of a peer-to-peer web and social connection.
Exactly. It's like the arguments that are sometimes made crediting Genghis Khan with integrating the Eurasian landmass and rolling out administrative reforms. It doesn't tell us what the world could have been like if it hadn't been steered toward consolidation, and it doesn't even pretend to morally justify the domination. It's an inevitable consequence of domination that no one but you has the power to roll out reforms or advancements of any kind. Organic progress that might have happened anyway becomes something that only could ever have happened through you.
This adds an interesting nuance. It may be that the sycophancy (which I noticed and found a little odd) is a kind of excess of fidelity in honoring cues and instructions, which, when applied to custom instructions like yours, actually was reasonably well aligned with what you were hoping for.
This approach has made me wonder about the utility of a Pinboard-style bookmark-managing service where browser history and bookmarking amount to the same thing, as a way of serving the process that's currently served by keeping all those tabs open. Maybe it could even overlap with tab management: if you name a tab group, that name persists as part of your history, like a tag on the bookmark, something along the lines of the sketch below.
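To make the idea a bit more concrete, here's a rough sketch of what that merged history/bookmark model could look like. Everything in it is hypothetical (the `UnifiedEntry` shape, `recordVisit`, the tag-from-tab-group trick are names I made up for illustration, not any existing extension API); it's just one way the "history and bookmarking are the same thing" notion could be modeled:

```typescript
// Hypothetical sketch: one record type for both history and bookmarks,
// where a tab group's name becomes a persistent tag on its pages.
interface UnifiedEntry {
  url: string;
  title: string;
  firstVisited: Date;
  lastVisited: Date;
  visitCount: number;
  pinned: boolean;      // "bookmarked" is just a flag on the history entry
  tags: Set<string>;    // e.g. the name of the tab group the page lived in
}

class UnifiedStore {
  private entries = new Map<string, UnifiedEntry>();

  // Every page load lands here; bookmarking is just flipping `pinned` later.
  recordVisit(url: string, title: string, tabGroupName?: string): UnifiedEntry {
    const now = new Date();
    let entry = this.entries.get(url);
    if (!entry) {
      entry = {
        url,
        title,
        firstVisited: now,
        lastVisited: now,
        visitCount: 0,
        pinned: false,
        tags: new Set<string>(),
      };
      this.entries.set(url, entry);
    }
    entry.lastVisited = now;
    entry.visitCount += 1;
    if (tabGroupName) entry.tags.add(tabGroupName); // tab group name doubles as a tag
    return entry;
  }

  pin(url: string): void {
    const entry = this.entries.get(url);
    if (entry) entry.pinned = true;
  }

  // Closing a tab group loses nothing: its members survive as tagged history.
  byTag(tag: string): UnifiedEntry[] {
    return Array.from(this.entries.values()).filter((e) => e.tags.has(tag));
  }
}

// Usage: a tab group called "apartment hunting" becomes a durable tag.
const store = new UnifiedStore();
store.recordVisit("https://example.com/listing/42", "2BR near the park", "apartment hunting");
store.pin("https://example.com/listing/42");
console.log(store.byTag("apartment hunting").map((e) => e.title));
```

The appeal, at least in theory, is that you never have to decide up front whether something is "worth bookmarking": everything is history, pinning and tagging are cheap annotations on top, and closing a pile of tabs stops feeling like losing anything.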
Seconded. Sometimes when someone says XYZ was likely used, it's because they've read something from a credible source, or they're a subject matter expert, or they've grasped some other similarly solid chain of evidence.
But sometimes they mean "likely" in the more colloquial sense of a guesstimate, which can range anywhere from informed guess to low-effort fan fiction. I default toward the latter unless otherwise specified.
>not disclosing or at least inputting your password might be a crime severely punished
And to your point, I believe it's now the case in the U.S. that you can be legally compelled to unlock with a fingerprint, but not with a PIN, for whatever reason.
Compelled unlock via biometrics is still somewhat contested. The general argument boils down to biometrics being something you can't really protect internally. A passcode that is known only inside your gray matter can therefore only be externalized via some sort of testimony, and being compelled to reveal a passcode violates your right against compelled speech and self-incrimination.
In the US you are protected by the 5th Amendment. But it seems like the question hasn't been addressed by the Supreme Court, since currently the answer depends on your jurisdiction. Which inspired me to check: here in Pennsylvania, the court cannot compel you to unlock your device with the password.
But it has already been argued successfully that giving biometrics is analogous to giving blood, hair, fingerprints, standing in a lineup, providing a writing sample, or wearing certain clothes, all of which you can be compelled to do. To argue against being compelled to do or provide biometrics without using testimonial arguments would be going against a lot of case law and precedent.
There is a specific precedent where compelling you to provide biometrics can be inadmissible, and that is where you are compelled to unlock the phone, since doing so, even with biometrics, is akin to providing testimony that the phone is yours, that you know how to unlock it, which finger to use, etc. (see United States v. Brown, No. 17-30191 [1]). But that doesn't actually prevent them from using your biometrics to unlock the phone, so it's pretty niche.
This is a bizarre response. My comment was not a denial of the legal history on biology-as-testimony; it was opening space for other legal or philosophical objections: autonomy, bodily integrity, possibly Fourth Amendment concerns about unreasonable searches.
But you're replying as if my comment was claiming courts haven’t treated biometrics like physical evidence. That's not what I was doing.
The reference to Brown here appears to be massively misleading: far from being niche, it complicates the biology != testimony distinction in a way that cuts to the heart of the most common real-world application of biometric compulsion, which is smartphone access. From ChatGPT'ing about it, it appears it's not the only court to have ruled on this, and it will probably take a SCOTUS decision to resolve an existing split.
You didn't elucidate any of that, or actually make any argument, so I had no way of knowing what you were thinking. You said “being compelled to provide biometrics does not necessarily hinge on it being analogous to testimony” so I stated that there was a lot of established case law already on compelling individuals to provide analogous physical information/samples, which covers bodily autonomy, bodily integrity, and the 4th Amendment. The only contention as far as I know, is around the 5th Amendment, but if you had other information, I’d be interested to hear it.
The Supreme Court has declined multiple times to hear cases that would help settle the legal ambiguity. I don't think United States v. Brown complicates things, because the specifics of the case were not whether you can be compelled to provide biometrics, but whether being compelled to "unlock a device" and manipulate the device yourself constitutes protected testimony. [0] They even cited United States v. Payne [1], where the court upheld that forcibly taking your finger to unlock a phone did not violate the defendant's Fifth Amendment rights, and at issue was only the wording of the order.
From my understanding, the current split about being compelled to provide passcodes, and to a much lesser extent biometrics, is the foregone conclusion exception stemming from the Fisher v. United States [2] case, where, as Justice White said “the existence and locations of the papers[were] a foregone conclusion and the [defendant’s physical act] adds little or nothing to the sum total of the Government’s information by conceding that he in fact has the papers… [And so] no constitutional rights [were] touched. The question [was] not of testimony but of surrender.”
This has been used in relation to court cases on biometrics and passcodes [3]. Courts that rule that you can be compelled seem to look narrowly at the passcode itself, i.e., the government knows you own the phone and knows you know how to unlock it, so providing it is a foregone conclusion. Courts that rule you cannot be compelled seem to look at the phone's contents, i.e., the government does not know what is on the phone, so decrypting the data would be providing protected testimony, or they take a stricter interpretation that you cannot be compelled to disclose the contents of the mind.
>but that doesn’t make it an exclusive biosignature. There are plausible abiotic pathways for DMS formation, such as in geochemistry we can’t know entirely about because we live on earth.
I'm sorry, but this is ridiculous. You started by acknowledging that it is indeed exciting, that it is something we only understand to be produced by living organisms. I wholeheartedly agree with that. And so the plausibility of an abiotic alternative is the big question.
You suggest that there are plausible abiotic pathways, but I think that's where this all starts to go off the rails, because I don't think there are in fact plausible abiotic pathways. We absolutely should attempt to model such possibilities and be extremely careful about our assumptions while doing so. But the state of our knowledge thus far counts for something too, and it suggests that such a process is pretty rare or unique. And then it really goes off the rails, because instead of an actual example you offer not any specific known pathway but a kind of bizarre philosophical musing that maybe there's "geochemistry we can't entirely know about."
We most definitely are capable of modeling chemical processes even if they don't happen on Earth. And there sure as heck is no such thing as a principle that things beyond Earth's surface are things we "can't" know about. I truly can't stress enough how ridiculous an assumption like that is.
We know, for instance, that gas giants are capable of producing phosphine, even though that doesn't happen on Earth. We know that the moon likely has a molten core. We know all kinds of things about atmospheric chemistry of planets and stars, because even if the abiotic processes can't be witnessed directly on Earth, we know enough about the principles of chemistry to model them in new contexts with reasonably high confidence.
And that's before we get to the idea that such uncertainty about off-world chemistry can be treated as tantamount to evidence of a known abiotic process. It's nothing of the sort; it's more like "who knows, maybe it's possible." We do indeed have to figure out whether there is such an abiotic process, but the mere idea that, hey, who knows, something off-world might be happening is nowhere near enough to count.
Everything about it feels like it's in the BFD category for potential microbial life, if true. And all the circumstantial details seem to point in the same direction: a potential hydrogen-rich ocean planet in a habitable zone, in alignment with theory about the most plausible models for environments that might support life.
It's got no known abiotic process for being generated, but a clearly understood connection to life, and it's apparently very reactive, so it would have to be actively regenerated at mass scale to show up sustainably in an atmosphere.
Nothing should be taken as proven, but it feels staggeringly plausible, and in my opinion would be the biggest of the "big if true" space stories I've ever seen in my lifetime.
I had to finish your comment before realizing you truly meant BFD as such and not, as I think it's more often used, sarcastically. That said, I agree.
BFD in this context standing for "Big Fucking Deal" (Urban Dictionary), not "Binary File Descriptor" (V.E.R.A. - Virtual Entity of Relevant Acronyms) I guess?
A great question. And one fascinating but maybe disturbing thing we have seen from the ISS is that the body seems to be pretty aggressive about bone decalcification in low-g environments. I don't know if there's a corollary for higher g, and the mechanism is orthogonal to questions about heritability, but meaningful changes happen even within the lifespan of a single person.
You can doubt it. If only it were possible to go to these locations in real life and verify it yourself... That's exactly what I did. I went to Dartmoor and the difference is quite noticeable. YMMV.
All life is a stack of autonomic systems. My hunch is that a human in a high-g environment would make a bunch of adaptations even if they were born in orbit. Within 3-5 generations they might even be another species. I am sure there has been some research done on raising mice in high-g environments.