The weird thing about this one is how it seems super professional in some ways, and rather amateur in others. Professional in the sense of spending a long time building up an identity that seemed trustworthy enough to be made maintainer of an important package, of probably involving multiple people in social manipulation attacks, of not leaking the true identity and source of the attack, and the sophistication and obfuscation techniques used. Yet also a bit amateur-ish in the bugs and performance regressions that slipped out into production versions.
I'm not saying it's amateur-ish to have bugs. I'm saying, if this was developed by a highly competent state-sponsored organization, you'd think they would have developed the actual exploit and tested it heavily behind closed doors, fixing all of the bugs and ensuring there were no suspicion-creating performance regressions before any of it was submitted into a public project. If there had been no performance regression, there's a much higher chance this never would have been discovered at all.
You probably have too high expectations when you hear the "state-sponsored" part. Every large organization will inevitably end up like any other. They also have bureaucracy, deadlines, production cycles, and poor communication between teams. The recent iOS "maybe-a-backdoor" story also shows that they don't always care about burning vulnerabilities, because they have amassed a huge pile of them.
Welp. Ok, well now my newest worst nightmare is a jira board with tickets for "Iran" and "North Korea" stuck in the wrong column and late-night meetings with "product" about features.
I think this is a key insight with some details: there isn’t an entire shadowy org that operates without Jira, but there are teams of people, usually small, who do amazing things without Jira. I imagine the Manhattan Project ran this way, and you still have elite teams like this in every org. Eventually they need to hand it off to a Jira crew, and that’s unavoidable.
Since it happened, I've said that the “Year of Snowden” turned “It's theoretically possible to backdoor the firmware of a hard drive” (for example), into “Somewhere in the NSA, there's a Kanban board with post-its for each major manufacturer of hard drives, and three of them are already in the 'done' column.”
Eh. Take a look at other state-sponsored attackers. We know they have 0-days for iOS, we know they've been used, but even Apple doesn't know what they are since they are so good at hiding their tracks. I don't think a state-sponsored attack would upload their payload to the git repo and tarball for all to stare at after it's been found out, which only took about a month.
NSO Group built a Turing-complete VM out of a memory-corruption exploit in some JBIG2 decompression code. Uploading a payload to the world wide web and calling it bad-3-corrupt_lzma2.xz is clownshoes by comparison.
My best guess as to what this is is an amateur hacker ring, possibly funded by a ransomware group.
You are severely underestimating this attacker and their sponsors. Amateur hacker rings do not spend two years actually diligently maintaining the software they will later backdoor. It would not make any sense in the world of ransomware attacks and bitcoin payouts.
> Uploading a payload to the world wide web and calling it bad-3-corrupt_lzma2.xz is clownshoes by comparison.
It has to be on the world wide web for distros to package and ship it. And this was actually the best disguise possible: this directory is one where it is normal and expected to have binary files that are not obviously analyzable, as this one wasn't -- another part of the malware rearranged it to become non-corrupt at exploitation-time.
See this note from the README for the test directory:
> This directory contains bunch of files to test handling of .xz, .lzma (LZMA_Alone), and .lz (lzip) files in decoder implementations. Many of the files have been created by hand with a hex editor, thus there is no better "source code" than the files themselves.
It is a brilliant solution to the problem of "okay, but where do I hide the malware payload, given the constraint that it has to be distributed alongside the code and tarballs?". The attack was detected, but not because of this file, and it's unlikely to me that it ever would have been detected purely by the means of this file, given the comment above.
> Uploading a payload to the world wide web and calling it bad-3-corrupt_lzma2.xz is clownshoes by comparison.
While the world is trying to understand the backdoor, you sir decided that it's "clownshoes". I can only blindly defer to your expertise... "Clownshoes, amateur, hacker ring, ransomware group." Done.
That's why I mentioned the recent story. [0] [1] "Apple doesn't know" when the chain uses an internal backdoor in Apple hardware is a... stretch. And the chain gives the strong vibes of corporate-style development, with all its redundancy and mismatch between two parts. It's not alchemy, really.
Re the secret knock in the Apple silicon, a friend of mine once said "that's how you lose the NOBUS on a backdoor", and I think they were absolutely right.
The one thing which most leads me to believe this was an intentional backdoor? The S-boxes.
> The one thing which most leads me to believe this was an intentional backdoor? The S-boxes.
If you look at that HN discussion, you'll find a link to a Mastodon post from an Asahi Linux developer explaining that these "S-boxes" are actually an ECC calculation, and that the registers are probably cache debug registers. Those allow writing directly to the cache, bypassing the normal hardware ECC calculation, so you have to do the ECC calculation yourself if you don't want a hardware exception caused by an ECC mismatch on read (of course, when testing the cache, sometimes you do want to cause an ECC mismatch, to test the exception handling).
I don't believe it's intentional, for the reason you mentioned. Although it could theoretically be like that for plausible deniability, Apple's reputation is definitely more valuable than one patchable backdoor out of god knows how many others. But a debug backdoor is still a backdoor.
Very large companies are definitely at the mercy of governments. Just look at how they are bending over backwards to comply with DMA etc. So, it is not at all inconceivable that they are forced to put backdoors into their product by the governments.
That’s true. On the other hand, Apple isn’t some kind of Borg-like swarm intelligence. While Apple’s upper management doesn’t want back doors in their products, someone in middle management might have come to a different opinion.
The quality of work you attract in part depends on how much you pay. Go check out how much is paid for a persistent iOS exploit, compared to a Linux user space exploit. From that, you may draw conclusions about their relative perceived difficulty and desirability. This will explain why iOS exploits are done more professionally. They are rarer, much better paid, and thus attract a better audience and more work on guarding them from discovery.
State-sponsored organizations in this space usually have a military-like organizational structure, with people whose dedication and motivation are unlike corporate workers'. A command structure can mean less bureaucracy (it can also mean more in some ways) when it is directly aligned with the mission. Patriotic motivation means they are more dedicated and focused than a typical corporate worker. So yeah, there would be quality differences.
I would say you put too much faith into military-like organization as well. However, from what I can tell it's usually just ordinary security researchers and devs with dubious morals (some are probably even former cybercriminals) that usually don't even have the need-to-know and aren't necessarily aware of every single aspect of their work. The entire thing is likely compartmentalized to hell.
You can't conjure quality from nothing (especially if the motivation is pure patriotism/jingoism); large organizations are bound to work with mediocrities and dysfunctional processes, and geniuses don't scale. (I feel like I'm stating the obvious.)
> it's usually just ordinary security researchers and devs with dubious morals (some are probably even former cybercriminals)
Not sure we should be so quick to judge offsec and the public-private partnership that provides intel and offensive capabilities.
Whether they're ex-criminals or deserve to be accused of "dubious morals" would depend on whether their clients (or targets) are what one considers the enemy.
And what about the guy who silently works on a "dubious project" patiently for years ... and then, at the right moment, knowingly throws a spanner in the works? Aren't they the true hero?
The fact that the guys developing the code weren't also simultaneously running valgrind and watching performance isn't hard to believe. They were targeting servers and appliances, how many servers and appliances do you know of that are running valgrind in their default image?
Sure, in hindsight that's a "duh, why didn't we think of that" - but also it's not very hard at all to see why they didn't think of that. They were likely testing against the system images they were hoping to compromise, not joe-schmoe developer's custom image.
On 2024-02-29, a PR was sent to stop linking liblzma into libsystemd [0].
Kevin Beaumont speculated [1] that "Jia Tan" saw this and immediately realized it would have neutered the backdoor and thus began to rush to avoid losing the very significant amount of work that went into this exploit; I think he's right. That rush caused the crashing seen in 5.6.0 and the lack of polish which could have eliminated or reduced the performance regressions which were the entire reason this was caught in the first place; they simply didn't have the time because the window had started to close and they didn't want all their work to be for nothing.
If Google could release Gemini with a straight face, is it so hard to believe that this shadowy org might fail in an even subtler way?
Patrick McKenzie observes elsewhere that criminal orgs operate more or less like non-criminal ones — by committee and by consensus. By that light, it’s not so hard to fathom why this ship could have been sunk by performance degradation arising from feature bloat. Companies I’ve worked at have made more amateurish mistakes.
Slip-ups can happen to anyone, even competent state-sponsored organisations. And intelligence agencies are sometimes rather less ruthlessly competent than imagined (Kremlin assassinations in the UK have been a comedy of errors [1]).
Maybe another backdoor, or alternative access mechanism they were using, got closed and they wanted another one in a hurry.
> Maybe another backdoor, or alternative access mechanism they were using, got closed and they wanted another one in a hurry.
Or maybe the opportunity window for the mechanism this backdoor would use was closing. According to the timeline at https://research.swtch.com/xz-timeline there was a github comment from poettering at the end of January which implied that the relevant library would be changed soon to lazy load liblzma ("[...] Specifically the compression libs, i.e. libbz2, libz4 and suchlike at least. And also libpam. So expect more like this sooner or later."), as in fact happened a month later. The attacker had to get the backdoor into the targeted Linux distributions before the systemd change got into them.
Of course, the attacker could instead take the loss and abandon the approach, but since they had written that amount of complex code, it probably felt hard to throw it all away.
Russia wants everyone to know when they do assassinations abroad, that's why they use obviously-Russian methods like polonium and novichok. It's designed to send a message to other Russian dissidents and remind them that they aren't safe
There are thousands of ways that performance can be impacted. No matter how good you are at developing, there will be some workload that takes a performance hit. Phoronix has several times reported performance regressions to the Linux kernel found with their test suite. Performance tests tend to take more time than correctness tests.
Not seeing that as a point. It's probably not possible to have no performance hit whatsoever when you're checking the exact nanosecond count of every little thing. But usually nobody is doing that. It shouldn't be hard to avoid a performance regression in sshd logins substantial enough that somebody who wasn't already monitoring them would notice and decide to dig into what's going on.
I'm not sure if it's been revealed yet what this thing actually does, but it seems like all it really needs to do is to check for some kind of special token in the auth request or check against another keypair.
I read somewhere that some recent changes in systemd would've made the backdoor useless, so they had to rush it out, which caused them to be reckless and get discovered.
This refers to the fact that systemd was planning to drop the dependency on liblzma (the compression library installed by xz), and instead dlopen it at runtime when needed. Not for security reasons, but to avoid pulling the libs into initramfs images.
The backdoor relies on sshd being patched to depend on libsystemd to call sd_notify(), which several distros had done.
OpenSSH has since merged a new patch upstream that implements similar logic to sd_notify() in sshd itself to allow distros to drop that patch.
So the attack surface of both sshd and libsystemd has since shrunk a bit.
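For illustration, the dlopen-at-runtime pattern looks roughly like this - a minimal sketch, not systemd's actual code (only lzma_version_string() is a real liblzma symbol here):

    /* Sketch: load liblzma only when a compression call is actually needed,
     * instead of linking it (and running its initializers) into every
     * process, sshd included, at startup. Build with -ldl on older glibc. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("liblzma.so.5", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "liblzma unavailable: %s\n", dlerror());
            return 1;
        }
        const char *(*version)(void) =
            (const char *(*)(void))dlsym(h, "lzma_version_string");
        if (version)
            printf("loaded liblzma %s on demand\n", version());
        dlclose(h);
        return 0;
    }

The security-relevant side effect: a process that never needs lzma never maps the library at all, so load-time machinery (like the ifunc hooks this backdoor rode in on) never gets a chance to run in that process.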
> The backdoor relies on sshd being patched to depend on libsystemd to call sd_notify
I remember when we added sd_notify support to our services at work, I was wondering why one would pull in libsystemd as a dependency for this. I mean, there's a pure-Python library [1] that basically boils down to:
> With proper error handling, that's about 50 lines of C code.
Writing proper error handling in C is a very tedious and error prone task. So it doesn't surprise me that people would rather call another library instead.
> So it doesn't surprise me that people would rather call another library instead.
Which shall be harder to justify now: "You're calling a gigantic library full of potential security holes just to call one function, to save writing a few lines of code, are you trying to JIA TAN the project?".
> Writing proper error handling in C is a very tedious and error prone task.
Managing C dependencies is even more tedious and error prone. And even in C, opening a UNIX ___domain socket to write a single packet is not that hard.
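To put numbers on "not that hard": a minimal sketch of a libsystemd-free READY=1 notification, assuming the documented NOTIFY_SOCKET protocol (error handling deliberately terse; the function name is made up):

    /* Sketch: tell the supervisor we're ready, the way sd_notify() does,
     * by sending a datagram to the socket named in NOTIFY_SOCKET. */
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int notify_ready(void)
    {
        const char *path = getenv("NOTIFY_SOCKET");
        if (!path || !*path)
            return 0;                       /* not supervised: nothing to do */

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        if (addr.sun_path[0] == '@')        /* abstract-namespace socket */
            addr.sun_path[0] = '\0';
        socklen_t len = offsetof(struct sockaddr_un, sun_path) + strlen(path);

        int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
        if (fd < 0)
            return -1;
        const char *msg = "READY=1\n";
        ssize_t rc = sendto(fd, msg, strlen(msg), 0,
                            (struct sockaddr *)&addr, len);
        close(fd);
        return rc < 0 ? -1 : 0;
    }

Real code would also want to handle MAINPID=, watchdog pings, and the various sendto() failure modes, which is where the "about 50 lines" estimate comes from.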
> I was wondering why one would pull in libsystemd as a dependency for this. I mean, there's a pure-Python library [...] With proper error handling, that's about 50 lines of C code.
There's also a pure C library (libsystemd itself) which already does all that, and you don't need to test all the error handling cases in your 50 lines of C code. It makes sense to use the battle-tested code, instead of writing your own.
The problem is people keep focusing on the libsystemd element because systemd has its big hate-on crew and the vector was for what's deemed "simple".
The better question though is... okay, what if the code involved was not simple? xz is a full compression algorithm, compressors have been exploit vectors for a while, so rolling your own is a terrifically bad idea in almost all cases. There are plenty of other more sophisticated libraries where you could've tried to pull the exact same trick - there's nothing about it being a "simple" inclusion in this case which implies vendoring or rolling your own is a good mitigation.
The saying goes that everyone is always preparing to fight the last war, not the next (particularly relevant because adversaries are likely scouring OSS looking for other projects that might be amenable to this sort of attack - how many applications have network access these days? An RCE doesn't need to be in sshd).
Frankly, I think it is idiotic to require each program to parse an environment variable, open that named file, write a magic string, and all the error handling that that requires -- all for the purpose of having a backchannel between the daemon and its supervisor?
Compare with how s6 does the same thing: instead of passing a random path via an environment variable, it opens the file descriptor itself before exec()ing the binary. All the binary has to do is write a newline to that fd and close it.
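For contrast, a hedged sketch of the s6-style handshake just described (the fd number is whatever the service's notification-fd file says; 3 is only an example):

    /* Sketch: the supervisor already opened the notification fd before
     * exec()ing the daemon, so readiness is a single newline on that fd. */
    #include <unistd.h>

    static void notify_ready_s6(int notif_fd)   /* e.g. 3 */
    {
        if (write(notif_fd, "\n", 1) < 0) {
            /* ignored in this sketch */
        }
        close(notif_fd);
    }

No environment parsing, no socket address handling, no connect-time failure modes.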
There was a recent Dwarkesh Patel interview of Dario Amodei, CEO of Anthropic, who now have former national security people on their staff. He said that the assumption is that if a tech company has 10,000 or more employees then not only will you almost certainly have leakers, but there is also a high likelihood there is an actual spy among your staff. This is why they use need-to-know compartmentalism, etc.
I wonder what our success rate is in identifying industrial spies? 50%?
Attacks on a given system or site can be expected to be removed (or auto-removed) after the operation ended, potentially w/o trace. But for supply-chain attacks there's always a history (if someone bothers to investigate).
The only thing that makes me think this was amateurs/criminals instead of professionals is that I tend to think that professionals are more interested in post attack security.
So if the gate was closing an amateur would say, "Act now! Let's get what we can!" A professional would say, "This is all going to come to light real soon - our exploit won't be usable and there's a high chance of this all falling apart. Pull out everything, cover our tracks to the degree we can, and find another opportunity to pull this off."
But then again I also think only professionals would work an exploit that takes years. Criminals by their nature want a quick payout (if I had the patience for a long con I'd just get a job), and motivated individual amateurs (i.e. crazy people) rarely have a wide enough skill set.
This particular backdoor might have become useless anyway, but control of liblzma would have continued to be very valuable. Not only is it used in so many places (the embedded version even in the kernel), it also is a common part of the toolchain used to build everything else allowing for trusting trust style attacks.
I thought performance was actually fine? It only dragged when using valgrind, hence the rhetoric that it took some really unlikely circumstances for it to be detected that quickly.
It was noticed because of its CPU usage. It seems most people think "that was lucky". It wasn't lucky, it was inevitable.
It happened to be noticed by someone doing benchmarking, but there is another category of machines out there it would hit hard: low-end VMs running an ssh server. I've had my low-end VMs hit by exactly the same thing the postgres benchmarker saw, but on these boxes it isn't just annoying. There are so many ssh logins happening that the OOM killer comes out to play, but these ssh logins are tiny so it tends to kill the real stuff you've got the box doing, like web serving.
A 2 or 3 fold increase in CPU would suddenly bring a lot of VMs to their knees. At least thousands of them. It would be noticed.
Yea you're right. 500ms vs 10ms on an older server. Was thrown off by this statement and thought only the perf/valgrind/gdb attachments were what really brought it to the surface.
> Initially starting sshd outside of systemd did not show the slowdown, despite the backdoor briefly getting invoked. This appears to be part of some countermeasures to make analysis harder.
Performance was much worse but not enough to actually nudge the author into digging into it until the random ssh logins started piling up:
You probably have seen lots of films deifying state-sponsored organisations.
State-sponsored organisations are made of humans under real constraints in time, money, and personnel. It is almost impossible to make something perfect that nobody in the world could detect.
In the world there are people with different backgrounds who could use techniques that you never accounted for. Maybe it is a technique used in biology to study DNA, or for counting electrons in a microscope, or the background radiation in astronomy.
For example, I have seen strange people like that reverse engineer encrypted chips that were "impossible" to crack, in a very easy way, because nobody expected it. They spent 10 million dollars protecting something and someone removed the protection using $10.
Modifying the sshd behavior without crashes seems by itself pretty difficult. I mean, conceptually it isn't hard, if you are in the same process and can patch various functions, but I think doing so and having it be "production ready" to ship in multiple linux distros all the time is a challenge.
This thing wasn't around for very long but yet another thing to consider would be to keep it working across multiple versions of OpenSSH, libcrypto etc.
I picture some division in [nation-state] where they're constantly creating personas, slowly working all sorts of languishing open source packages with few maintainers (this is the actual hard, very slow part), then once they have a bit of an in, they could recruit more technical expertise. The division is run by some evil genius who knows this could pay off big, but others are skeptical, so their resources are pretty minimal.
Moxie's reasons for disallowing Signal distribution via F-droid always rang a little flat to me ( https://github.com/signalapp/Signal-Android/issues/127 ). Lots of chatter about the supposedly superior security model of Google Play Store, and as a result fewer eyes independently building and testing the Signal code base. Everyone is entitled to their opinions, but independent and reproducible builds seem like a net positive for everyone. Always struggled to understand releasing code as open source without taking advantage of the community's willingness to build and test. Looking at it in a new light after the XZ backdoor, and Jia Tan's interactions with other FOSS folk.
> supposedly superior security model of Google Play Store
Let's never forget that the google play store requires giving google the ability to modify your app code in any way they want before making it available for download. Oh sure, that backdoor will never be abused.
The link provides interesting reading, but I believe Moxie must have changed his opinion later: I have never had Google Play Store on my phone, but I could install Signal. I am pretty sure I did not install it from any dodgy site. It warned when it got outdated. Not sure how updates work, not using it anymore.
No, he hasn't changed his mind (93 closed issues over 8 years related to F-droid, many asking for F-Droid distribution: https://github.com/signalapp/Signal-Android/issues?q=is%3Ais... ). Signal distributes their own APK from their own site, but still does not allow F-Droid to distribute a version, or any version built or distributed by anyone other than Signal to connect to the Signal servers. Imagine Jia Tan's build of XZ being the only one allowed, and you get the idea.
His standpoint is unchanged regarding F-Droid, but not regarding distributing APKs themselves. In the linked issue he still argues that having users enable "allow 3rd party APKs" is such a bad idea that they will not provide any APKs directly.
Cute how it's labeled "Danger Zone". So official Signal provided install methods include Google Play Store, or enabling third party APKs and downloading directly from Signal. How the second differs from an official Signal provided and signed F-Droid repository in Moxie's mind is anyone's guess.
What Signal _does not_ allow are APKs built by third parties being distributed under the Signal name, or connecting to Signal servers. Which calls into question the build process itself - the very thing exploited in the XZ backdoor. One either trusts Signal to build the software without backdoors, or doesn't use Signal at all. There is no allowed in between.
Which is to say, they don't trust 3rd parties to build the software without backdoors. Can't say I blame them. Allowing for 3rd party clients opens Signal to backdoored clients. I know you think that people would only make 3rd-party clients for good, and not do bad things with that power, and no one would be foolish enough to download Definitely-not-backdoored-Signal-client from hackers.ru, but I'm pretty sure that's exactly what would happen. An APT could exploit a Pegasus-like zero-day in iOS and install a replacement, backdoored client on a victim's device. Not allowing 3rd party clients doesn't totally protect against that, but it goes a long way.
> An APT could exploit a Pegasus-like zero-day in iOS and install a replacement
Nothing about the way Signal currently does things prevents this from happening today.
Disallowing third party builds only serves to reduce eyes on the build tooling, which we've learned is a great place to hide backdoors.
Equating F-Droid with hackers.ru is a distasteful strawman. F-Droid appear to run as transparent and credible a distribution as Debian or Fedora. Credible enough that the Tor project distributes its privacy-focused software via F-Droid.
I wasn't even thinking of F-Droid and didn't mention them in my comment at all, so I'm not sure why you think I'm linking the two.
Signal could do more to be open with the build process, but opening the door to third party clients is opening the door for APTs to release backdoored Signal clients.
F-Droid was mentioned in the very first comment of this thread, and all of the issues linked in github. Seems like you haven't read them, and bringing other parties into the discussion seems like a distraction.
> but opening the door to third party clients is opening the door for APTs to release backdoored Signal clients.
Signal's source code is already public. APTs (or anyone who doesn't care about violating laws) can already produce and disseminate their own builds. There are no technical protections in place to stop them - nor do I know of any which could. The only people who can't currently distribute their own builds are the law abiding good guys trying to build secure software distributions. I'm not sure why you're confused about this, but your assertion that Signal making legal allowances for third party builds adds anything to the capabilities of APTs demonstrates a misunderstanding of what is already available and the (strictly legal) limitations Signal has placed on 3rd parties with regard to distributing independently verifiable builds.
Please take some time to read and understand the github issues, instead of continuing to assert falsehoods or introduce strawmen.
I'm sorry for not doing all of my homework before responding, but what's with you and the word strawman? Is it your homework assignment to write that word seven times on the Internet or something? Say it a couple more times, it'll really help get your point across.
Getting Signal from anywhere else other than them opens up the door for someone to sneak in some code. I am not, in any way, insinuating that fdroid would intentionally do such a thing.
That seems like a great way to talk down to your end users, which seems like a security smell all by itself. Many users of F-Droid are technology professionals themselves and are quite aware of the security implications of the choices they make for the devices they own, and F-Droid is often a component of that outlook.
Further, I don't think it applies to the F-Droid maintainers, who routinely build hundreds of different Android apps for all our benefit. They even directly addressed his concerns about the signing key(s) and other issues by improving F-Droid and met with continued rejection.
I don't think we should assume a state actor. We don't know.
It's kind of similar to stuxnet but attacking Linux distros is so broad and has such a huge risk of being exposed, as it was within a few weeks of deployment. A good nation state attack would put more effort into not being caught.
Assuming a state-actor is a cope though. It's looking at the problem and saying "well we were fighting god himself, so really what could we have done?"
Whereas given the number of identities and time involved, the thing we really see is "it took what, 2-3 burner email accounts and a few dozen hours over 2 years to almost hack the world?"
The entire exploit was within the scope of capability of one guy. Telling ourselves "nation-state" is pretending there isn't a really serious problem.
My thought is they have 50-200 systems programmers where it's like "Hey George, I know you like to contribute to linux, have at it, we just need you to put a little pinhole here here and here"
Then they have 5-20 security gurus / hackers who are like "Thanks George, I made this binary blob you need to insert, here's how you do it"
So the systems developer's job is to build a reputation over years, as we see; the security guy's job is to handle the Equation Group-style heavily encrypted/obfuscated exploits.
This would explain a mix of code quality / professionalism - multiple people, multiple roles. Of course the former "systems programmer" role need not be a full-time employee. They could have motivated a white hat developer to change hats, either financially or through an offer they can't refuse.
This tracks with other nation-state sponsored attack patterns. I've had that same reaction before. Most APTs are like this, but some Chinese, US, and Russian APTs are so well funded that every aspect of their attacks is impressive.
Many hackers who work for nation states also have side gigs as crimeware/ransomgang members or actual pentesting jobs.
> It still boggles my mind that americans are against banning companies like huawei and bytedance. The MSS and PLA don't mess around.
The problem is that many others would have as much reason to ban US companies. I mean, the US has a much more extensive history of using their security apparatus for both intelligence and economic ends, even against their allies.
Now if everyone bans everyone else we will let the world economy grind to a halt pretty quickly.
China already bans US companies, and this is an active malicious threat, not a vague possibility of harm. You can jail Mark Zuckerberg but you can't jail Xi Jinping.
For bytedance/TikTok the question is not really "should the US ban this company" but rather "should the government be able to ban any companies it wants (and also make it illegal for VPNs to allow US users to access relevant services/websites) without having to provide any substantial evidence?"
US companies? I'm with you, no! Foreign companies, absolutely. Even the suspicion of malicious abuse should be enough to ban a foreign company. Foreign persons and entities have no rights in the US, and our government owes them as much explanation as they give us when they ban US companies on a whim.
> Foreign persons and entities have no rights in the US
"Yes, immigrants are protected by the U.S. Constitution.
The brief answer is “Yes.” When it comes to key constitutional provisions like due process and equal treatment under the law, the U.S. Constitution applies to all persons – which includes both documented and undocumented immigrants – and not just U.S. citizens.
Outside the context of immigration policy, the Constitution limits government power over individuals but this does not mean that constitutional rights are absolute."
We are not talking about immigrants, we are talking about foreign entities, as in not immigrated to the US. The Chinese communist party and the owners of bytedance have not immigrated to the US.
I don't get the point of the mental gymnastics here. We both know of an active threat to Americans by a foreign entity. Companies are not people, and being able to create and operate a company is not a protected right. Matter of fact, regulating interstate commerce is an explicit right of the government.
For example, you can't transport alcohol across state lines. The government doesn't need a good reason for that regulation, it's their constitutional right.
In this case, simply reciprocating china's bans on US companies would have done the trick.
Honestly, the US government should move to ban all trade with China within a decade or so.
As for bytedance, the government is not claiming chinese immigrants can't own it, they are claiming that the China based owners of bytedance have to sell their stake in the company since any company in China is under the influence of the MSS as evidenced by many examples, the boyusec/apt3 example I mentioned being one.
Any moderately competent developer could gain maintainership of a huge percentage of open source projects if they’re being paid to focus solely on that goal… after all, they’re competing mostly against devs working on it part-time or as a hobby.
If this is state sponsored, they likely have similar programs in a large number of other projects.
Why do we assume the person building the trust is the attacker?
Is it not possible the attacker simply took over the account of someone genuinely getting involved in the community, either hacked or just with a $5 wrench, and then committed the malicious code?
> Is it not possible the attacker simply took over the account of someone genuinely getting involved in the community, either hacked or just with a $5 wrench, and then committed the malicious code?
Given the behavior of the accounts that applied pressure on the original xz maintainer, this seems unlikely to me.
I find this the most plausible explanation by far:
* The highly professional outfit simply did not see teknoraver's commit to remove liblzma as standard dependency of systemd build scripts coming.
* The race was on between their compromised code and that commit. They had to win it, with as large a window as possible.
* This caused the amateurish aspects: haste.
* The performance regression is __not__ big. It's lucky Andres caught it at all. It's also not necessarily all that simple to remove it. It's not simply a bug in a loop or some such. If I was the xz team I'd have enough faith in all the work that was done to give it high odds that they'd get there before discovery. That they'd have time; months, even.
* The payload of the 'hack' contains fairly easy ways for the xz hackers to update the payload. They actually used it to remove a real issue where their hackery causes issues with valgrind that might lead to discovering it, and they also used it to release 5.6.1 which rewrites significant chunks; I've as yet not read, nor know of any analysis, as to why they changed so much. Point is, it's reasonable to think they had months, and therefore, months to find and fix issues that risk discovery.
* I really can't spell this out enough: WE GOT REALLY LUCKY / ANDRES GOT LUCKY. 9 times out of 10 this wouldn't have been found in time.
That's a commit that changes how liblzma is a dependency of systemd. Not because the author of this commit knew anything was wrong with it. But, pretty much entirely by accident (although removing deps was part of the point of that commit), it almost entirely eliminates the value of all those 2 years of hard work.
And that was with the finish line in sight for the xz hackers: On 24 feb 2024, the xz hackers release liblzma 5.6.0 which is the first fully operational compromised version. __12 days later systemd merges a commit that means it won't work__.
So now the race is on. Can they get 5.6.0 integrated into stable releases of major OSes _before_ teknoraver's commit that removes liblzma's status as direct dep of systemd?
I find it plausible that they knew about teknoraver's commit _just before_ Feb 24th 2024 (when liblzma v5.6.0 was released, the first backdoored release), and rushed to release ASAP, before doing the testing you describe. Buoyed by their efforts to add ways to update the payload which they indeed used - March 8th (after teknoraver's commit was accepted) it was used to fix the valgrind issue.
So, no, I don't find this weird, and I don't think the amateurish aspects should be taken as some sort of indication that parts of the outfit were amateurish. As long as it's plausible that the amateurish aspects were simply due to time pressure, it sounds like a really bad idea to make assumptions in this regard.
> The performance regression is __not__ big. It's lucky Andres caught it at all. It's also not necessarily all that simple to remove it. It's not simply a bug in a loop or some such. If I was the xz team I'd have enough faith in all the work that was done to give it high odds that they'd get there before discovery. That they'd have time; months, even.
True, in the context of sshd logins it isn't that big, but ~500ms to get from _start() to main() isn't small either, compared to the normal cost of that phase of library startup. Their problem was that the sshd daemon fork+exec's itself to handle a connection, so they had to redo a lot of the work for each connection.
Afaict all of this happens before there's any indication of the attacker's keys being presented - that's not visible to the fork+exec'd sshd until a lot later.
They needed to do some of the work before main(), to redirect RSA_public_decrypt(). That'd have been some measurable overhead, but not close to 500ms. The rest of the startup could have been deferred until after RSA_public_decrypt() was presented with something looking like their key as part of the ssh certificate.
If I understand things correctly, the hooking of RSA_public_decrypt is done with an audit hook called for every symbol of newly loaded libraries. With this approach it doesn't matter how much is hooked, since all functions are always processed. It's also harder to hook functions later, because the GOT/PLT will have been marked read-only. The exploit code also doesn't directly contain any of the strings (presumably for obfuscation reasons) and instead has a trie to map given strings to internal IDs, which also requires an approach like this where you look at all symbols and then decide what to do with each symbol.
> The payload of the 'hack' contains fairly easy ways for the xz hackers to update the payload. They actually used it to remove a real issue where their hackery causes issues with valgrind that might lead to discovering it, and they also used it to release 5.6.1 which rewrites significant chunks;
The valgrind fix in 5.6.1 overwrites the same test files used in 5.6.0 instead of using the injection code's extension hooks. This is done with what should have been a highly suspicious commit: https://github.com/tukaani-project/xz/commit/6e636819e8f0703... - this replaces "random" test files with other "random" test files. The stated reason is questionable to begin with, but not including the seed used, when the purported reason was to be able to re-create the files in the future, is highly suspicious. This should have raised red flags but no one was watching. I'd say this is another part of the operation that was much more sloppy than it needed to be.
> almost entirely eliminates the value of all those 2 years of hard work.
Except control over xz-utils/liblzma would have still been very valuable even without the sshd exploit path, as its central use in the toolchain used to build Linux distributions would have allowed for many other attacks.
They could also have regrouped and found another way to do the exploit, given the relative ease of updating the payload (though it's probably a limited number of times you could change the test blobs without causing suspicion?). But I agree this explanation is plausible.
If lzma isn't loaded as part of sshd, the path from an lzma backdoor to sshd get a hell of a lot more circuitous and/or easier to catch. You'd pretty much need to modify the sshd binary while compressing a package build, or do something like that to the compiler, to then modify sshd components while compiling.
Perhaps but sshd is also not the only potential exploit. E.g. the landlock commit is a hint that they were also planning an exploit via the xz-utils commands directly. Seems rash to burn over two years of gaining trust for a very central library and set of tools just because the initially chosen exploit path disappeared.
For all we know earlier operations may have been high quality (and still undetected), this one for some reason may have been comparatively not that important and the actor decided to cut costs.
At this point we don't know who did this. It could have been a single really smart person, it could have been criminal, it could have been a state intelligence agency.
It’s also possible that this could be a change in personnel. Maybe the one who earned trust and took over was no longer working for them, and an amateur took over with tight deadlines that led to this gaffe for them.
The text near the box makes it sound like these are just the fixes - not adding the test files but updating them.
At that point it would have been clear “the race is on” to avoid detection, so it’s not too surprising someone would work late to salvage the operation.
Whoops, you're right. So this isn't really evidence of anything.
Out of interest I looked up the other commit at that time of day visible in that graph, laying on the arrow. It's [1], which changes the git URL from git.tukaani.org to github.com. Of course, moving the project hosting to github was part of the attack.
Sometimes I think it could be someone who was forced to embed the backdoor but was smart enough to make it detectable by others without raising suspicion by the entity that was forcing him.
IIRC, according to Andres Freund the perf regression only happened in machines using the -fno-omit-frame-pointer setting, which was not the default at that point.
The -fno-omit-frame-pointer bit is separate from the slowdown. -fno-omit-frame-pointer lead to valgrind warnings, but no perceptible slowdown in most (including this) cases.
What's also surprising is how quickly the community seems to be giving someone the benefit of the doubt. A compromised maintainer would probably exactly introduce a fake member joining the project to make certain commits. They might have a contact providing the sophisticated backdoor that they need to (amateurishly) implement.
My favorite theory is that Jia Tan is a troll. They tried some silly patches and were surprised they got accepted. What started as a little joke on the side because covid made you stay at home slowly spiraled into "I wonder how far I can push this?"
Two years are enough to make yourself familiar with open ssh, ifuncs etc.
Then you do silly things like "hey um I need to replace the bad test data with newly generated data, this time using a fixed seed so that they are reproducible", but you don't actually tell anyone the seed for the new data. Then you lol when that gets past the maintainers no questions asked.
In the end they maybe just wanted to troll a coworker, like play some fart noises while they listen to music, and since they use Debian, well, you better find a way to backdoor something in Debian to get into their machine.
Like back in the day when Sasser sabotaged half the internet and "security experts" said they had a plausible lead to Russia – which, as it turned out, was because said security experts ran strings on the binary and found Russian text – put there by the German teen who wrote Sasser "for teh lulz".
This hat is way too dirty to be white. Even if well intentioned (which is fairy-tale levels of likelihood), the implementation is well beyond ethical bounds.
The sophistication here is really interesting. And it all got caught because of a fairly obvious perf regression. It reminds of a quote I heard in one of those "real crime" shows: "There's a million ways to get caught for murder, and if you can think of half of them, you're a genius."
I can believe it’s because it was a team behind the account. Someone developed the feature and another more careless or less experienced one integrated it. Another one possibly managing sock puppets and interacting in comments and PRs.
It's called AIMS (Advanced Impact Media Solutions) and is used by several state-level actors these days, both pro- and contra-NATO.
Well, at least that one is the most sophisticated one on the market (as of now) and Team Jorge is probably making shitloads of money with it while not giving a damn about who uses their software in the end.
Maybe I’m just being naive or too trusting, but this is sort of what I think when folks are getting worried about other backdoors like this in the wild.
Is it that they just got unlucky to get caught, or is this type of attack just too hard to pull off in practice?
I’d like to think the latter. But, we really don’t know.
Note he's not a cybersecurity researcher; he's mostly a database engineer (a great one, making significant PGSQL contributions), so I'm not sure he's familiar with the statistics and variety of backdoor attempts.
One measure might be that we never really found that many backdoors. Over time there is quite a large accumulation of hackers looking at the most mundane technical details.
This may be confirmed by regular vulnerabilities that are found in sometimes decades-old software, since vulnerabilities are much harder to find than backdoors. For example, Shellshock was in 30-year-old code, PwnKit 12, and log4j ~10 ish.
So if backdoors were commonplace, we probably would've found more by now.
Perhaps that's changing now, the xz backdoor will for sure attract many copycats.
Doesn't your data prove the opposite point? There are so many vulnerabilities and so few people looking for them that even the thirty year old ones have barely been found.
A healthy feedback loop would have trended the average age of each vulnerability at the time of detection to be *short*.
I’m not convinced that if I found a bug that I’d notice all the security implications of fixing it. Occasionally yes, but I wonder how many people have closed back doors just by fixing robustness issues and not appreciated how big of a bug they found.
Maybe something could be built to put more eyeballs on things.
A kind of online tool that collects the sources to build some relevant distributions, with a web front-end that shows a random piece of code (filtered by language, with the probability of being shown increasing the less recently/frequently/qualified it has been viewed) to a volunteering visitor to review. The reviewer leaves a self-assessment about their own skills (fed back into the selection probability) and any potential findings. Tool staff double-check findings (so that the tool does not create too much noise) and forward them to the original authors (bugs) or elsewhere (backdoors).
Fedora and Ubuntu both enable frame pointers / disable -fomit-frame-pointer by default now[1]. That’s quite recent news in comparison to the backdoor’s history, admittedly.
Nah, it applies to the person trying to get away with the murder. People will do really, really intricate jobs of trying to cover up, then slip up because like, they leave a receipt in their car that accidentally breaks their alibi.
My favorite get away with murder stories are the imperfect frame up type stories. So commit a crime and lay a trail of bread crumbs to a false path that will be picked up by the investigators and then later on easily refuted by yourself - because you did it but not in the way you're accused of.
A clever murderer will disguise the murder as an accident, suicide or natural death. It will not even show in the stats as unsolved.
I got the idea from fiction (specifically Dorothy Sayers), but the number of murders Harold Shipman committed before anyone even noticed makes it plausible that people with relevant expertise (doctors, pharmacists, cops, etc.) could easily get away with murder. If Shipman had stopped after the first 100 or so he would have.
That's from Body Heat, said by Mickey Rourke to William Hurt. "...you got fifty ways you're gonna fuck up. If you think of twenty-five of them, then you're a genius - and you ain't no genius." (But a million sounds closer to the truth.)
Depends on locale. In Germany something like 90% of murder cases are solved/cleared.
In the U.S., I suspect a majority of the murders technically unsolved by police are cases where the identity of the perpetrators is somewhat of an open secret within communities that don't trust law enforcement (and LE similarly has little interest in working with them either.)
>In Germany something like 90% of murder cases are solved
You must watch out when reading the German crime statistics. "Solved" which is marked as "aufgeklärt" in those statistics just means that a suspect has been named. Not that someone actually did it/has been sentenced for the crime.
Surely it's pretty common everywhere to have at some point a suspect ('solved!') who is then released, because you lack evidence, realise it's not them, whatever. A suspect isn't necessarily convicted even if you do ultimately convict someone.
Turns out it was someone else, and you convict that other person. You thought you had them, were wrong, but did then ultimately solve the case.
It happens loads too; frequently in high-profile stuff on the news they'll have a suspect who's somehow close to it, arrest them, but then they're released once satisfied with their alibi or whatever.
Or most murder investigations are (by definition) incompetent.
Or (more likely): The old idiom quoted above is stupid and useless. (That it presumes that murdering and getting away with it is somehow a noble or esteemed deed should be damning enough.)
There’s no money or benefits in solving crimes. It could be done easily in many cases but nobody cares about certain people like gang members. Lots of cases where the murderer tells everyone but nobody cares.
The 4th option they may appear to propose suggests that murder investigators don't get paid -- neither in money, nor in benefits.
So, to that end: As far as I know, that's not usually the case with government employees, and it is always actionable when it does happen to be the case.
It seems like a much more suitable parallel construction story to invent in this instance would be something like "there were valgrind issues reported, but I couldn't reproduce them, so I sanity checked the tarball was the same as the git source. It wasn't."
Wouldn't it have been easier to just have someone drive-by comment on the changes in the source tree in the comment? Like "what's up with this?"
Though I guess you end up with some other questions if it's totally anonymous. But I often will do a quick look over commits of things that I upgrade (more for backwards-compat questions than anything else).
There also isn't really a reason for some contrived parallel construction here - whoever found the issue could just point to it without explaining how it was detected. They could even do that anonymously.
> Classic conspiracy theory.
I would not be too quick to shit on "conspiracy theories" however as there are plenty of proven cases of people conspiring against the interests of the public.
I often wonder with these sort of things, where there are lots of write-ups by geeks who dive-deep into the details, why do people say "This has to be state sponsored!", "Look at the timestamps! Irrefutable proof!"
Yes this was sneaky and yes this was a "slow burn", but is there really anything in the xz case that requires more than just a single competent person? Anything that requires state-level sponsorship? The fact that random individuals online are able to dissect it and work things out suggests that it is comprehensible by a single person.
What is to say that this was not just one smart-yet-disgruntled person acting alone?
The biggest thing for me that points to a state actor is the amount of time committed to the social engineering attack versus the expected value of the prize. A for-profit scheme built this way would be irrational, which doesn't preclude it being an irrational actor (or an individual with a motive other than profit) but does point to a state actor as a likely candidate.
The total value of the prize, if successful, would be worth a lot if you could sell it, but the odds of successfully getting an exploit into major infrastructure that goes undetected for long enough for your customers to buy and use it are tiny. States can afford moonshots, but I tend to expect private individuals to seek targets with a higher expected value.
Of course, that doesn't mean it was a competent state actor or that they allocated a ton of resources to it.
> I tend to expect private individuals to seek targets with a higher expected value.
If you're a very skilled and dedicated hacker, what other targets do you have that can net you many millions of dollars?
> or an individual with a motive other than profit
Isn't one of the most striking things about the hacker community the extreme amounts of time and effort that are put into things that are not expected to generate any profit?
I mean, there are people who spend all their free time over several decades just digging tunnels under their property. Or build a 6502 CPU from discrete transistors. Etc.
> Isn't one of the most striking things about the hacker community the extreme amounts of time and effort that are put into things that are not expected to generate any profit?
You're conflating the hacker-as-in-threat-actor community with the hacker-as-in-Linux-maintainer community.
Sure, generally speaking, people who try to break into computer systems for profit do not have a lot of overlap with people who spends lots of time writing open source software for fun.
But in this case it is not hard to imagine that the XZ-perpetrator came from the second group, right?
Edit: I mean, this wouldn't be that different from when Ken Thompson demonstrated how to do a hidden backdoor in the C compiler?
Has anybody done a writeup of the obfuscation in the backdoor itself (not the build script that installs it)? I threw the binary into Ghidra and looked through the functions it found, but having no familiarity with the ifunc mechanism it uses to intercept execution, I gave up and set it aside for others.
I'd have to assume since there's anti-debug functionality that the code is also obfuscated. Since it shipped as an opaque binary I assumed at least some of the code would be encrypted with keys we don't have (similar to parts of the STUXNET payload).
No full dissection of the backdoor itself has been done yet. As for the anti-debug, sure, you can avoid things like that with flags, but this was done at compile level so it's a bit more tricky.
> I'd have to assume since there's anti-debug functionality that the code is also obfuscated.
Not really; as above, it was done at build time. So you have already set your home up.
It's shown the problems with package managers not taking source from the right place.
I assume they are advocating for package managers to preferably grab signed git tags from repositories rather than download tarballs.
The backdoor relied on the source in the tarballs being different from the git tag, adding additional script code. This is common for projects that use GNU autotools as their build system; maintainers traditionally run autoconf so that users don't have to, and ship the results in the tarballs.
I agree that this should be discouraged, and that distros should, when possible, at least verify that tarball contents are reproducible / match git tags when importing new versions.
I think saying that the backdoor relied on it is too strong. The changes were obfuscated enough that it's unlikely anyone would have noticed if they were pushed to git, not doing that is just an additional layer of safety.
Correct. The onus should now be on the package delivery side to provide transparent packages, maybe? Maybe add the extra step of pulling instead of trusting the push from maintainers? It's just an extra step that might get more eyes. All said, even in hindsight I wouldn't have called this one out.
It's runtime, but I think dynamic linker. The Windows equivalent would be if a library patched some code at dllmain. Actually the Detours library in the Windows world is similar. But it's for performance; the idea is you would patch some function references based on the CPU revision to get faster code specific to your CPU.
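To make that concrete, here is a hedged, minimal illustration of the GNU ifunc mechanism in its legitimate CPU-dispatch role (this is not the backdoor's code; all names are made up, and it assumes GCC on x86-64 Linux):

    /* Sketch: the resolver runs inside the dynamic linker, before main(),
     * and decides once which implementation the symbol will bind to. */
    #include <stdio.h>

    static int add_generic(int a, int b) { return a + b; }
    static int add_avx2(int a, int b)    { return a + b; }  /* pretend: tuned version */

    /* Resolver, called by ld.so during relocation processing. */
    static int (*resolve_add(void))(int, int)
    {
        __builtin_cpu_init();
        return __builtin_cpu_supports("avx2") ? add_avx2 : add_generic;
    }

    int add(int a, int b) __attribute__((ifunc("resolve_add")));

    int main(void)
    {
        printf("%d\n", add(2, 3));   /* which add() ran was decided at load time */
        return 0;
    }

The xz payload abused the same "my code runs during relocation, before anything else" property, which, per the analyses discussed above, is how it could get its hooks in before sshd ever reached main().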
This is a really nice Windows analogy, though it goes without saying this package wasn't aiming for Windows; ironically, it chose the path (as we've seen so far) of least resistance. If you're hooked into an sshd service, you're golden. They put 5 checks (maybe comically) in a row to make sure it was Linux in this case... who knows what's next.
> However, I believe that he is actually from somewhere in the UTC+02 (winter)/UTC+03 (DST) timezone, which includes Eastern Europe (EET), but also Israel (IST)
I had the same thought myself initially, but the analysis suggests a work-week that includes Fri, which precludes Israel (where the work week is Sun-Thu and not Mon-Fri), as well as celebrating Christmas and New Year's which are not official holidays in Israel. It isn't uncommon for younger people to take a day off for New Year's since it is an excuse to party, or for Jews with eastern European origins to celebrate Novy God, but I don't know of any Christmas celebrations.
Obviously these could be faked, but then why fake a Mon-Fri work week and not also fake the work hours to match it? To me it seems like an unlikely hypothesis.
And I believe someone pointed out that there were commits on Yom Kippur? That is a day basically no one works. The skies are closed, the roads are empty, and everyone is bicycling on all the available streets, including highways.
Israel isn't home only to Jewish people who don't work on Yom Kippur; there are significant populations of both Muslims and Christians.
I don't think you can rule out any country based on email and commit timestamps; the attacker could have been further east with a late work day, or further west with an early work day.
I hate to defend Israel’s work week here, because it sucks to an unimaginable extent, but while you’re free not to observe Jewish holidays in Israel, unless you’re lucky to live in one of a handful of places you’ll struggle heavily to get anything done on Fri or Sat. If you try, you’ll find nothing works during most of that time including public transport—except for Friday morning when you’re going to have to scramble to get stuff done so you don’t run out of food before Sunday morning. On Yom Kippur, even driving around in your own car is going to get you fined. And, of course, nearly nothing managed by the governments works on the weekend or holidays, except for the military.
(I wouldn’t put it past the intelligence services to keep working during official holidays as a form of obfuscation, mind you. I just wanted to point out that it isn’t as straightforward a deal as Christians—or atheists for that matter—existing: Israel sure makes daily life hard for them.)
I am an Israeli, and I agree with you. I just wanted to point out that trying to speculate a ___location for the attacker based on what days they worked isn't going to be remotely accurate
Speaking of inaccurate things: since the actual XZ git repository isn't hosted on GitHub, could there be logs of which IP addresses the attacker was operating from?
I would be interested in semantic analysis of the communication from the involved online personas, similar to what was done for Satoshi, to point to a cultural direction. Would also be interesting to see if there were semantic style differences over time pointing to different people acting as the personas.
Since quite a lot of code has been committed as well, it would also be interesting to see if code style differences could be found, pointing to more people being involved than only one.
Since when does satisfying curiosity not provide any value? This has the potential to be a real-life spy story, ffs. Yeah, you won't know anything for sure, but that doesn't make this any less interesting.
Has anyone studied yet how well this kind of analysis holds up to simply asking GPT to rephrase your words? The whole thing goes out the window if that kind of attack works nowadays.
There's a lot of metadata about when/how they used git and IRC, and some preliminary analysis on same. Another surname in one of the commits. An apparent LinkedIn account. (See heading "OSINT" in https://boehs.org/node/everything-i-know-about-the-xz-backdo... .)
A lot of these tracks could be intentionally manipulated by a sophisticated actor to disguise their identity, but it's not "nothing".
Like I said, we don't know anything worth having a real discussion about. Maybe he was in the +03 time zone, and pretending to be in +08, but that's not enough to base a discussion on.
So this whole thread is based on the premise that the OP was using the formal definition of "discuss" rather than the informal one? Which is almost certainly not true?
Either kind of discussion is useless since the "evidence" (timezone IDs in strings that are not required to have any relation whatsoever with reality) is flimsy to begin with.
It's a rather good thing that this was found before it made it out broadly.
Not just for obvious reason of not wanting an unknown party to have RCE on your infrastructure. I think as people keep digging they will eventually formulate a payload which will allow the backdoor to be used by anyone.
As bad as it is for a single party to have access, it's much worse for any (every?) party to have access.
See https://en.wikipedia.org/wiki/Dual_EC_DRBG for another backdoor requiring a private key, in which the key was simply replaced in a subsequent supply chain attack(!) with a key known to the attacker:
"In December 2015, Juniper Networks announced[55] that some revisions of their ScreenOS firmware used Dual_EC_DRBG with the suspect P and Q points, creating a backdoor in their firewall. Originally it was supposed to use a Q point chosen by Juniper which may or may not have been generated in provably safe way. Dual_EC_DRBG was then used to seed ANSI X9.17 PRNG. This would have obfuscated the Dual_EC_DRBG output thus killing the backdoor. However, a "bug" in the code exposed the raw output of the Dual_EC_DRBG, hence compromising the security of the system. This backdoor was then backdoored itself by an unknown party which changed the Q point and some test vectors.[56][57][58] Allegations that the NSA had persistent backdoor access through Juniper firewalls had already been published in 2013 by Der Spiegel.[59] The kleptographic backdoor is an example of NSA's NOBUS policy, of having security holes that only they can exploit."
From my understanding, the command payload is not an RSA private key. It is an SSH certificate's public key field, a section of which contains the signed command to be executed.
If it is known to belong to a widely deployed backdoor that can't be patched away in time, then it would be worth trying to recover the key by brute force using supercomputers. Of course, such capabilities are rather restricted to nation states.
It's wild that days later it still hasn't been unravelled. For something with so many global eyes on it, and with the key pieces in theory being available, that is quite an achievement in obfuscation.
I'm concerned about the long game nature of things here.
1-Sure, they bided their time to set up the "infrastructure" to create the backdoors.
2-I'm sure their plan was to play another long game after the exploit got in the wild, in production. It's the right way to spend lottery money. Invest.
3-That makes me wonder if such games are being played today.
Does anyone have a good explanation or introduction to the performance testing that was done to find this? And how to get started? Actually measuring performance always seemed to be a very hard task, and I'd like to be able to do similar testing to what the person who found this backdoor did.
From what I understand, it wasn't even the performance testing itself that caught the backdoor. The developer wanted to have the machine as quiescent as possible (so that nothing else running on it would interfere) before starting the performance tests, but sshd was using much more CPU than expected (and this could be observed with simple tools like "top"). My guess is that the usual "backscatter" of password guessing ssh login attempts from all over the Internet normally uses very little CPU time in sshd before being rejected, but the backdoor made each login attempt use a significant amount of CPU time to check whether it was an encrypted request from the attacker (and this particular machine had its sshd open to the Internet because it was a cloud machine being accessed via ssh through the open Internet).
Nearly spot on. I indeed was seeing sshd usage via top.
> but the backdoor made each login attempt use a significant amount of CPU time to check whether it was an encrypted request from the attacker
Absurdly enough, the high CPU usage comes well before it even tries to figure that out. Due to avoiding "suspicious" strings in both the binary and memory, it has a more expensive string matching routine. Finding all the symbols it needs in the in-memory symbol tables ends up being slow due to that.
That's why even sshd -h (i.e. printing help) is slow when started in the right environment. There's not enough visibility into other things that early during process startup (this happens long before main() is called), so they couldn't check what key is being presented or such. They really "should" have deferred much more of their initialization until after the ed448 check happened.
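To make that cost a bit more concrete, here's a purely illustrative sketch (not the actual backdoor's code, which reportedly uses a fancier lookup structure) of one way to find a symbol without its name ever appearing as a literal in the binary: hash every candidate name and compare against a precomputed constant. Nothing here is from the real payload; the hash constant is a placeholder.

    #include <stddef.h>
    #include <stdint.h>

    /* FNV-1a hash of a NUL-terminated string. */
    static uint64_t fnv1a(const char *s)
    {
        uint64_t h = 1469598103934665603ULL;
        while (*s) {
            h ^= (unsigned char)*s++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Placeholder: pretend this is the precomputed hash of the symbol we
       want (e.g. of "RSA_public_decrypt"), so the string itself never
       appears anywhere in the binary. */
    #define TARGET_HASH 0x1234567890abcdefULL

    /* Scan a NULL-terminated list of symbol names. Unlike strcmp, which
       bails out on the first mismatching byte, every name gets fully
       hashed, so the scan burns CPU proportional to the total length of
       all the names in the table. */
    const char *find_symbol(const char *const *names)
    {
        for (size_t i = 0; names[i] != NULL; i++)
            if (fnv1a(names[i]) == TARGET_HASH)
                return names[i];
        return NULL;
    }

Doing that kind of full scan over the in-memory symbol tables on every startup is the sort of thing that shows up as unexpected CPU time in top.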
> (and this particular machine had its sshd open to the Internet because it was a cloud machine being accessed via ssh through the open Internet)
Unfortunately not. It's a machine I have at home, with some port of my public IP redirected to it, as I often need it when not at home. Oddly enough, my threat model did not include getting attacked with something as sophisticated as this, so I thought that was fine (only pubkey, no root).
> Unfortunately not. It's a machine I have at home, with some port of my public IP redirected to it, as I often need it when not at home.
Wow, that must have really sucked. It's one thing to have a rented office downtown (a VPS or similar) be backdoored, but to find a burglar broke into your home while it was under construction (or renovation in case it was a distro upgrade) and added a secret passage to it from the outside? What else did the burglar mess with while you weren't looking?
> Oddly enough, my threat model did not include getting attacked with something as sophisticated as this, so I thought that was fine (only pubkey, no root).
Most people's threat model is "everything except sshd is risky; openssh is from these cool paranoid people at OpenBSD and, as long as nobody can password guess, it's safe from non-authenticated attackers". That is, often sshd is the only open port (which helps explain why the attacker fixated on it).
> measuring performance always seemed to be a very hard task
It's not hard: it's either easy, or impossible, depending on the culture at your shop.
At the basic level it's trivial - you instrument code and interpret the results against the HW and lower level machine counters.
Depending on $lang you have many good libraries for this kicking around; the hard part is working at a corp that values performance (99% do not), so none of the required mindset and surrounding infra will be set up to permit this in any useful capacity.
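As a trivial starting point (and only that), wall-clock timing of a code region looks something like the sketch below. The hardware counters mentioned above would come from something like perf_event_open or the perf tool rather than from this; everything here is generic and has nothing specific to the xz case.

    #include <stdio.h>
    #include <time.h>

    /* Monotonic wall-clock time in seconds. */
    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        double start = now_sec();

        /* Stand-in workload; replace with the code under test. */
        volatile unsigned long acc = 0;
        for (unsigned long i = 0; i < 100000000UL; i++)
            acc += i;

        printf("elapsed: %.3f s (acc=%lu)\n", now_sec() - start, acc);
        return 0;
    }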
Sure, but almost no knowledge that is interesting, valuable, useful, etc., is absolutely, positively critical to your life. Almost nothing on HN has that level of importance, but you are here learning interesting things, and unfortunately the first place some of those things appear is still Twitter.
Where does it end, fellow person? What is going to be the excuse/defense/workaround when Nitter instances are completely suffocated? Just suck it up and sign up so you can continue to participate on an increasingly hostile, toxic, manipulated platform in service of a narcissist's deranged ego? Because Joe Bob Expert is too lazy to post elsewhere? No, I'm sorry, but when is enough, enough?
I know I'm missing out on good content, and I don't care. I have _some_ self-respect.
I don't even know if that's true, but I don't care. Twitter, at this point, is far more egregious than reddit, and I swore months ago I'd never contribute there. It blows my mind that people still play in Elon's piss-filled sandbox because they love the dopamine hits of bot-inflated engagement metrics.
Yes, you, casual reader who keeps posting on Twitter out of laziness and momentum, I'm absolutely talking about you. Your laziness is hurting everyone. And I'm not alone; you're limiting your audience and prioritizing, well, people too lazy to get off Twitter, and ignoring the technical, prescient (observant, at this point?), informed crowd that has left for elsewhere. /shrug
I'd say the original author should just post something as an article instead of tweets. A blog post. A GitHub md file. A GitHub gist. Even pastebin. I don't care about its format or where it is hosted; it does not need to be well formatted and can be as casual as it likes -- I don't expect to read a well-written article, and I know that would take a lot of effort. I just want to see something that is not a series of tweets.
Twitter is literally unusable if not logged in. All I see is the first tweet, and every link I can find that might reveal the rest of the thread takes me to a login page.
Also unusable on simpler hardware. The browser on the Kindle Paperwhite I am typing this on is just slightly too old to run Twitter. I get the unsupported browser page, which funnily enough still uses the old colors and logo.
I agree with you 200%, especially because now it's not only an idiotic format; reading it also lends support to Elon's X (and I no longer have an X account for that reason, so I can't read it). But I'm afraid this sentiment long since reached the point of "yelling at clouds".
Amusing. I was always irritated by the very concept of threadreaderapp and by people's propensity for posting the links (just read it on the website! There's no need to spend extra compute to join up some divs!) - but Elon's ever-increasing breakage of the site now makes it genuinely useful.
"ever increasing"? Twitter is completely, 110% unusable without an account (and dear god, I dare some of you to make a new account and see what the process and default content is. It's gross).
I say 110% not to be hyperbolic -- it shows you non-latest tweets on profiles, and it doesn't let you see tweet threads or replies, even from the original poster when they post a chain of tweets. I literally can't read any of this content save for the threadreaderapp link.
I don't think it means it is currently reasonable, just that things are continuing to worsen and we have not yet reached a bottom, even if a bottom exists.
Indeed. For those of us without an account, the decline long ago reached the point where it's unusable. I assume, and hear, that for those with an account, the usability continues to decline.
I made an account after Musk took over and Mysterious Twitter X started mandating logging in, in large part since he made the place tolerable and I follow illustrators and official accounts for games I play anyway.
Making the account wasn't that annoying. Once upon a time they demanded my phone number and that was obnoxious to the point of noping out, but nowadays (after Musk took over?) they also take email instead. Email for registering accounts is nothing new, so no big deal; been doing that since 2002 when I registered my first forum account.
After I made my account I went and followed all the accounts I usually follow, and my recommendations got relevant in very short order: Posts from illustrators, the games, and players who play those games.
So, thanks Musk. You've at least convinced one guy to make an account where Dorsey flatly couldn't, and made the guy even happy about it which was pleasantly surprising.
Oh yeah? Interesting that you chose not to elaborate on how, given that the statements I made about ruining public access stand.
But in summary, you're saying you used the platform the same way it was usable before Elon bought it, other than all of the things I mentioned that make it unusable for those not logged in? Let's be frank: Elon made it 200% worse, then walked it back to only 150% worse, and that's a win?
The site conventions are to post original sources and workarounds in the thread. And to avoid gumming up threads with annoyances-of-everyday-web-life meta.
Without having a Twitter account I have a really hard time following these threads. Is there some write up?
Edit: Check comments.
Yes, the backdoor hasn't been decompiled/reverse engineered yet. But it feels like clickbait to say "it goes deeper"... Obviously. Nobody knows what it fully does yet. There was no assumption of knowing what it did.
RCE as root is already the worst case though. The auth bypass is basically just a convenience feature of the backdoor. So yeah it's mildly interesting but not really a new development of the story.