Meta: This post is yet another victim of the HN verbatim title rule despite the verbatim title making little sense as one of many headlines on a news page.
How is "Now using Zstandard instead of xz for package compression" followed by the minuscule low-contrast grey "(archlinux.org)" better than "Arch Linux now using Zstandard instead of xz for package compression" like it was when I originally read this a few hours ago?
Saying it's "yet another victim" seems slightly too emotive to me.
If people can't read the source site's ___domain after the headline then I agree there wouldn't be much context, but equally, if they can't read that, surely their best solution is to adjust the zoom level in the browser.
It's clear you won't get complete context from the headline list plus ___domain, but a hint of it is provided and if you want more you click the link. Maybe I'm being a little uncharitable but I don't see a big problem here.
I just woke up to this and was surprised the title was edited as well. I looked up the guidelines and it looks like I violated the "If the title includes the name of the site, please take it out, because the site name will be displayed after the link" guideline.
In your defense, the only reason I knew this was about Arch is that I got the email first last night. The belief that "everyone reads the ___domain in light grey parens on the right" is false; as a reader, I 100% ignore that information subconsciously. This article would be a lot better if it started with "ArchLinux: ...." as it apparently was last night. This is a 100% bad title edit, "guidelines" be damned - it made your article submission worse, not better.
Yes, this is a real problem, verbatim titles are often far from the "optimal" title. In some cases the original title provides almost no information about the content.
The question is what's better than a strict "no editorialization" rule.
The exact guideline is "If the title includes the name of the site, please take it out, because the site name will be displayed after the link", and I think that wording speaks from an outdated mindset where every submission is a standalone web page that _has_ a title, for one thing. This submission is a web page with its own title, of course, but that makes it sound like the guideline hasn't been rethought in too-long of a time.
For a contrived example of how dated the guideline seems, what if somebody submitted a tweet thread criticizing Twitter the company with a headline/sitebit like "Twitter now banning third-party clients. (twitter.com)". Would it have to be renamed to "Now banning third-party clients. (twitter.com)"? That would make it appear to be a more official statement instead of an unsponsored opinion.
I'm picking on Twitter out of recent memory of this submission of mine a couple weeks ago, where the submission title "Tracking down the true origin of a font used in many games and shareware titles" was 100% my own editorializing for lack of title-worthy material in the linked tweet itself: https://news.ycombinator.com/item?id=21667238
> The exact guideline is "If the title includes the name of the site, please take it out, because the site name will be displayed after the link", and I think that wording speaks from an outdated mindset where every submission is a standalone web page that _has_ a title, for one thing.
I suspect the original intent of the rule was to get rid of pointless redundancy in the title. "The 10 craziest things you don't know about X - clickbait.com" is the sort of thing you see very often in the <title> element, but it adds no new information. Actually, you'll notice even Hacker News posts have " | Hacker News" appended to them.
In an article about Arch Linux, the text "Arch Linux" is much less likely to be redundant than an article about something else that just happens to be on the archlinux.org ___domain.
The window title of the submission is "Arch Linux - News: Now using Zstandard instead of xz for package compression". There is no need to invent a new title.
Nope. Some sources, such as Medium, embed viewer-dependent identifiers in the URL, which can confound de-duping based on URL alone. I don't know if 'dang et al have figured out a way to handle these cases.
I'm talking about the difference in size and contrast between the actual headline and the trailing HN sitebit "(archlinux.org)". The final size they end up on my screen is irrelevant to my point, because my point is about the size and contrast difference _between_ the two, whatever final sizes those might happen to be. The default HN stylesheet calls for 10pt and 8pt for those, respectively, so it's not like I'm just making this up. I'm saying the verbatim title rule is a poor fit here because it took a relevant (central, even!) part of the headline and moved it to a spot of secondary importance and size. There are cases where I defend the rule, but right now I am talking about this case and only this case :)
FWIW 12 vs 15pt text used like this is called visual hierarchy (or typographic hierarchy if the differences are limited to typography, they're not here, there's also positional difference between the titles and domains).
Desaturating and down-sizing suggests the information is not important, but in this case it is critical to understanding and of equal weight, so it should probably get the same place in the visual hierarchy; that's easily achieved by including "Arch Linux" in the title.
You don't get to declare that something is an accessibility problem just because you don't like it, though.
You would rate the exact same formatting good or bad based entirely on whether the text next to it is a couple pixels larger. That does not sound like "accessibility".
Whether it's hard to read is an accessibility issue. But that's not your complaint.
I'm happy to agree to disagree. Please forgive me if I am (hopefully!) just misreading the tone of these comments, but this exchange has seemed tiringly mean-spirited and argumentative to me. I'm not trying to convince you of some objective fault in HN's design, just sharing my experience to see if anyone else's is similar. My experiences will still be my reality even if the answer to that question is "no" :)
I got snarky because you said "I'm glad it's not a problem for you" when I never said the current design wasn't a problem. I read that as unwarrantedly dismissive!
I'm not disagreeing with your experience, I'm just disagreeing with part of the way you want to fix it.
I have to agree with the previous poster. Sometimes I have absolutely no clue what the article is about because the original title has been edited and the URL is not immediately recognizable.
> Sometimes I have absolutely no clue what the article is about because the original title has been edited and the URL is not immediately recognizable.
Yes, that happens... but I'm not sure how it applies to this specific case? The URL is the same two words that were removed from the title.
Earlier last year I was doing some research that involved repeatedly grepping through over a terabyte of data, most of which were tiny text files that I had to un-zip/7zip/rar/tar and it was painful (maybe I needed a better laptop).
With Zstd I was able to re-compress the whole thing down to a few hundred gigs and use ripgrep which solved the problem beautifully.
Out of curiosity I tested compression with (single-threaded) lz4 and found that multi-threaded zstd was pretty close. It was an unscientific and maybe unfair test but I found it amazing that I could get lz4-ish compression speeds at the cost of more CPU but with much better compression ratios.
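In case anyone wants to do something similar, the gist of it was roughly this (a sketch, not my exact commands -- it pretends everything was .xz, whereas my pile was a mix of zip/7z/rar/tar, and the search pattern is made up; it also assumes ripgrep's -z/--search-zip can find a zstd binary on the PATH):

for f in *.xz; do xz -dc "$f" | zstd -T0 -19 -o "${f%.xz}.zst"; done
rg -z 'needle' .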
BSD tar on my Mac (running 10.15.2) has -a for tarfile creation (-c mode); it always autodetects the compression format in extraction mode (-z, -j, etc. are ignored if -x is specified). Not sure when either behavior would've been introduced; the somewhat-older machine I tested on (running 10.13) does not have -a but does have the autodetection behavior on extract.
-I, on the other hand (which, in gnutar, specifies a compression program to run the output through), appears to actually be GNU-specific. BSD tar makes -I synonymous with -T (specifying a file containing the list of filenames to be extracted or archived).
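For reference, both behaviors in one place (a quick sketch with made-up filenames, assuming a bsdtar new enough to have -a; swap in .zst if your libarchive was built with zstd support):

tar -caf backup.tar.xz somedir/   # -a: compression chosen from the suffix at creation time
tar -xf backup.tar.xz             # extraction: format auto-detected, no compression flag needed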
(Please don't use Zstandard if you care about cross-platform compatibility at all, though-- it's fine in controlled environments like an OS package manager, but I don't have it on my Mac, nor do I have it by default on my Ubuntu server (which is still sitting back on 16.04; I should fix that sooner or later).)
As a FreeBSD user I was curious since I wouldn't want to learn to depend on an option only to find it doesn't exist in version n-1, and today I learned that FreeBSD's tar is contributed code from libarchive and not an in-tree thing like I assumed! https://github.com/freebsd/freebsd/blob/master/usr.bin/tar/M...
The textual libarchive changelog isn't super clear on it, and I didn't feel like digging further, but it appears to have been introduced some time between libarchive 3.0.4 in 2012 and libarchive 3.1.2 in 2013: https://github.com/freebsd/freebsd/commit/366f42737cba40ceb2...
This got me digging a little further, and (as a consequence) I just learned something new and fun: the current libarchive-based BSD tar can create and extract a ton [1] of other formats, including ZIP, RAR, and 7Zip files!
That'll save me some time next time I need to extract one at the command line; I always end up having to look at unzip's man page to make sure it's actually going to do what I want, and the 7zip command line utility's kind of funky (and not installed by default most places). But 'tar -xvf filename' is permanently burned into my mind and pretty much always does exactly what I want.
You don't need it with libarchive-based BSD tar either. It can also extract zipfiles, ISO archives, and many other formats with just "tar xf file.zip".
libarchive is really awesome! I've started using bsdtar everywhere, it's so well done and polished I never felt the need to bother with anything else (same goes for bsdcpio)
It has actually (at least for the implicit tar -xf foo.tar.gz) made it to all major implementations by now. OpenBSD was the sole exception last time I checked, and OpenBSD tar only untars; it won't decompress even with flags.
Tar automatically detects and supports unpacking zstd-compressed archives (as well as other compression types). There's no reason to combine -x with compression flags.
Package installations are quite a bit faster, and while I don't have any numbers I expect that the ISO image compose times are faster, since it performs an installation from RPM to create each of the images.
Hopefully in the near future the squashfs image on those ISOs will use zstd as well, not only for the client-side speed boost for boot and install, but also because it cuts the CPU hit of lzma decompression by a lot (more than 50%).
https://pagure.io/releng/issue/8581
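For the curious, the compose-side change should be roughly a one-flag affair, since squashfs-tools grew zstd support a while back. A sketch (paths and level are made up, and it assumes squashfs-tools and kernel builds with zstd enabled):

mksquashfs rootfs/ LiveOS/squashfs.img -comp zstd -Xcompression-level 19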
BTW, Fedora recently switched to zstd compression for its packages as well, for basically the same reasons - much better overall de/compression speed while keeping the result roughly the same size.
Also, one more benefit of zstd compression that is not widely noted - a zstd file compressed with multiple threads is binary-identical to the same file compressed with a single thread. So you can use multi-threaded compression and still end up with the same file checksum, which is very important for package signing.
On the other hand xz, which was used before, produces a binary-different file depending on whether it was compressed with a single thread or multiple threads. This basically precludes multi-threaded compression at package build time, as the compressed file checksums would not match if the package were rebuilt with a different number of compression threads. (The unpacked payload will always be the same, but the compressed xz file will be binary different.)
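If you want to sanity-check this on your own machine, something like the following works (the file name is just an example; per the above I'd expect the two zstd checksums to match and the two xz ones to differ, though the exact behavior may depend on tool versions):

zstd -19 -T1 payload.tar -o payload.t1.zst
zstd -19 -T4 payload.tar -o payload.t4.zst
sha256sum payload.t1.zst payload.t4.zst

xz -9 -T1 -c payload.tar > payload.t1.xz
xz -9 -T4 -c payload.tar > payload.t4.xz
sha256sum payload.t1.xz payload.t4.xz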
Zstd has an enormous advantage in compression and, especially, decompression speed. It often doesn't compress quite as much, but we don't care as much as we once did. We rebuild packages more than we once did.
This looks like a very good move. Debian should follow suit.
I build packages periodically from the AUR, and compression is the longest part of the process much of the time. For a while, I disabled compression on AUR packages because it was becoming enough of a problem for me to look into solutions. If it's annoying for me, I can imagine it's especially problematic for package maintainers. I can only imagine how much CPU time switching the compression tool will save.
I love the AUR, but every single time I have to wait for it to compress Firefox nightly, and then wait for it to immediately decompress it again - because the only reason I was building the package in the first place was to install it - I about lose my mind. Hopefully this helps, but I really wish AUR helpers would just disable compression and call it a day, so I don't have to go mess with config files that would also change my manual package-building routine.
EDIT: nevermind, this doesn't seem to have made this the default for building packages locally, just for ones you download from the official repos. Guess I'll go change that by hand and then still be sad that I can't have it easily disabled entirely for AUR helpers but build my packages with compression.
You can also override PKGEXT as an environment variable when invoking makepkg, so you can have compression by default but easily skip it when it matters:
PKGEXT=.pkg.tar makepkg
EDIT: By the way, it's often a big win to use multithreaded compression (pigz, zstd -T0, etc.) in makepkg.conf. With this, it's fast enough that I hardly ever override PKGEXT anymore.
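For reference, the makepkg.conf lines in question look something like this (roughly the stock entries with the threading bits added; adjust to taste):

COMPRESSZST=(zstd -c -z -q --threads=0 -)
COMPRESSGZ=(pigz -c -f -n)
PKGEXT='.pkg.tar.zst'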
If you care about space more than speed you may want to stick with xz; it is hard or even impossible for zstd to beat. So set your own priorities rather than adopting those of the Arch devs.
As long as the tools keep supporting xz, individual builders packaging for their own use can use either of them, or something else entirely.
I don't really care about space all that much, but packages I build tend to get uploaded, and the people downloading them may not have fast internet. Meanwhile, for packages built by my AUR helper I care about speed (seriously, it takes ages to compress and then immediately decompress Firefox). The problem isn't that I want to optimize for one or the other; it's that AUR helpers generally have a different need than I do when building packages myself, yet for some reason they don't override the compression setting for just their install. Probably due to caching, like I said, which means they can't assume everyone will want compression off all the time - but I'm not sure, that's just a guess.
I doubt anybody today has legitimately enough reason to care about space to prefer xz over zstd. The only plausible reason is that some customer, or customer's customer, is only equipped to unpack archaic formats.
> EDIT: nevermind, this doesn't seem to have made this the default for building packages locally, just for ones you download from the official repos. Guess I'll go change that by hand and then still be sad that I can't have it easily disabled entirely for AUR helpers but build my packages with compression.
I believe you can supply it via an environment variable if your AUR helper has the ability to set those for `makepkg`.
I'll have to experiment with that, thanks. Still, I don't understand why they don't just do it for me. Maybe they're aggressively caching packages and don't want to take the size hit or something.
> Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup.
Impressive. As an AUR package maintainer I am also wondering what the compression speed is like, though.
After reading these comments, I can't help but wonder, what is the benefit of Zstd over lz4? Why didn't they switch to lz4 if it was the speed of the algorithm that they favored even with marginally worse compression ratios?
Where Zstd will reduce, say, 3x, Lz4 reduces only 2.5x. This doesn't seem very different until you look at it from the other end: my .zst file is 3.3 GB, but the .lz4 would have been 4 GB, which is 700 MB more.
Was a time when 700 MB mattered; it was as much as you could get onto a CD.
So, there is a place for each. I would set up the process to use Lz4 when testing, and Zstd for actual delivery to download archives.
In some circumstances, particularly when using a shared file server, Lz4 can be quite a lot faster than writing and reading data uncompressed.
Guessing that a 0.8% size increase for a 1300% speedup was worth the tradeoff, but maybe a 1.5% or greater size increase was not (especially considering that going from a 1300% to a 2000% speedup is not going to be user-visible for 99% of packages).
While the speedup is nice, pacman still seems to operate sequentially, i.e. download, then decompress one by one. Decompressing while downloading, or decompressing in parallel, seems like low-hanging fruit that hasn't been plucked yet and wouldn't have needed any changes to the compressor.
I might be wrong, but wouldn't it be prudent to first verify the checksum/signature of the downloaded archive before unpacking it? Even when just decompressing there's at least the danger of being zip-bombed (assuming a zip bomb can be constructed for any dictionary-based compression algorithm.)
FWIW I really applaud Arch here. Even if it's just a small step. Commercial operating systems should take notice. OS updates should really not take as long as they (mostly) do.
Even then it could still be pipelined: download, check signature, decompress while the next download is running. But yeah, pacman is plenty fast already.
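The naive "decompress while downloading" variant is just a pipe, something like the line below (mirror URL and package name are made up). The catch, as noted above, is that you'd be feeding unverified bytes into the decompressor, so you'd still want to verify the signature of the complete file first, or overlap verification/decompression of package N with the download of package N+1:

mkdir -p /tmp/foo-extracted
curl -s https://mirror.example.org/core/os/x86_64/foo-1.0-1-x86_64.pkg.tar.zst | zstd -d | tar -x -C /tmp/foo-extracted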
Since most people are interested in the time taken to compress/decompress rather than the speed at which it happens, seems to me a better metric would be:
"... decompression time dropped to 14% of what it was..." (s/14/actual_value)
I learned about this one the hard way when I went to update a really crufty (~ 1 year since last update) Arch system I use infrequently the other day. I had failed to update my libarchive version prior to the change and the package manager could not process the new format.
Luckily updating libarchive manually with an intermediate version resolved my issue and everything proceeded fine.
This is a good change, but it's a reminder to pay attention to the Arch Linux news feed, because every now and then something important will change. The maintainers provided ample warning about this change there (and indeed I had updated my other systems in response), so we procrastinators really had no excuse :)
I used zstd for on-the-fly compression of game data for p2p multiplayer synchronization, and got 2-5x as much data (depends on the payload type) in each TCP packet. Sad that it still doesn't get much adoption in the industry.
I'd love to see Zstandard accepted in other places where the current option is only the venerable zlib. E.g., git packing, ssh -C. It's got more breadth and is better (ratio / cpu) than zlib at every point in the curve where zlib even participates.
Edit: Also, if browsers do adopt zstd, it's likely you'll end up with the same situation where they fork their own implementation of zstd. Upstreaming requires signing Facebook's CLA, which has patent clauses that don't work for most.
Brotli has wide browser support (https://caniuse.com/#feat=brotli) and comes closer to zstd in compression ratio and compression speed, but its decompression speed is significantly lower and closer to zlib.
Brotli compresses about 5-10 % more than zstd. Benchmarks showing equal compression performance use different window sizes (smaller window sizes for brotli) or do not run at maximum compression density.
zstd does decompress fast, but this is not free. The cost is the compression density -- and lesser streaming properties than brotli.
For typical linux package use, one could save 5 % more in density by moving from zstd to large window brotli. The decompression speed for a typical package would be slowed down by 1 ms, but the decompression could happen during the transfer or file I/O if that is an issue.
The linked series of comments (which, to be clear, I've only skimmed — there's a ton there) show zstd 22 sometimes coming behind Brotli 11d29, sometimes ahead on compression ratio; usually coming ahead of Brotli 11 on compression ratio; ~5x faster on compression throughput and ~2-2.5x faster on decompress throughput. To cherry-pick some numbers (the table after "259,707,904 bytes long open_watcom_1.9.0-src.tar", dated "TurboBench: - Mon Apr 30 07:51:32 2018"):
So in that particular instance, zstd 22 comes out about 5% worse (+1.1 MB over Brotli 11d29's 20.1 MiB) on compressed size, but 3% better (-640kiB) vs Brotli 11 at 21.8 MiB. So... maximum compression is within a small margin; compression and decompression speeds are much quicker.
I think it's fair to say that zstd struggles the most at the extremes. On the fast extreme it loses (marginally) to lz4; on the slow extreme it (maybe) loses (marginally) to brotli. But it's relatively quick across the spectrum and provides a lot of flexibility.
It may make sense to continue to use Brotli or xz for static assets that are compressed infrequently and read often. But for something like HTTP Content-Encoding, where dynamic pages are compressed on the fly? Zstd would shine here, over both Brotli and (vanilla) zlib. (I know Chrome has some hacked up zlib on the client side, but I do not know too much about it.)
That's interesting. Brotli has wide browser support even though it's less than 5 years old, but WebP is approaching a decade and Safari still doesn't support it...
WebP has an excellent lossless image compressor (like PNG, just 25-40% more dense), but the lossy format has weaknesses that people focused on, which slowed down the adoption. The initial lossy encoder had weaknesses in quality - it had bugs or was a port of a video coder. Nowadays the quality is much better, but the format forces YUV420 coding (it does not allow YUV444), which limits the quality of colors and fine textures.
> but webp is reaching a decade and Safari still doesn't support it...
That’s a philosophical objection. For a long while Mozilla also was of the opinion that WebP is not “better enough” than JPEG/PNG to warrant the addition of another image format which the entire web must support forever using only one available implementation.
Plus I think there are still some unresolved patent claims on the VP8/9 video codec (which are the basis for WebP).
AUR users -- the default settings in /etc/makepkg.conf (delivered by the pacman package as of 5.2.1-1) are still xz; you must manually edit your local config:
PKGEXT='.pkg.tar.zst'
The largest package I always wait on, and a perfect fit for this scenario, is `google-cloud-sdk` (the re-compression is a killer -- `zoom` is another AUR package that's a beast), so I used it as a test on my laptop here in "real world conditions" (browsers running, music playing, etc.). It's an old Dell m4600 (i7-2760QM, rotating disk), nothing special. What matters: with the default xz, compression takes twice as long and appears to drive the CPU harder. Using xz my fans always kick in for a bit (normal behaviour); testing zst here did not kick the fans on the same way.
After warming up all my caches with a few pre-builds to try and keep it fair by reducing disk I/O, here's a sampling of the results:
xz defaults - Size: 33649964
real 2m23.016s
user 1m49.340s
sys 0m35.132s
----
zst defaults - Size: 47521947
real 1m5.904s
user 0m30.971s
sys 0m34.021s
----
zst mpthread - Size: 47521114
real 1m3.943s
user 0m30.905s
sys 0m33.355s
I can re-run them and get a pretty consistent return (so that's good, we're "fair" to a degree); there's disk activity building this package (seds, etc.) so it's not pure compression only. It's a scenario I live every time this AUR package (google-cloud-sdk) is refreshed and we get to upgrade. Trying to stick with real world, not synthetic benchmarks. :)
I did not notice any appreciable difference when adding `--threads=0` to `COMPRESSZST=` (from the Arch wiki); both consistently gave me right around what you see above. This was compression-only testing, which is where my wait time goes when upgrading these packages - a huge improvement with zst seen here...
It should be noted that the makepkg.conf file distributed with pacman does not contain the same compression settings as the one used to build official packages.
The man page for zstd mentions that using the --ultra flag will cause decompression to take more RAM as well when used to compress. Does this indicate a huge increase in memory to decompress, or just a trivial amount per package, say something large like... `libreoffice-fresh`? Or `go`? They're two of the largest main repo packages I have installed... (followed by linux-firmware)
The respective flag for brotli would be `--large_window 25 --quality 11`
Brotli defines memory use as log2 on command line, i.e., 32 MB = 1 << 25
zstd uses a lookup table where the number given by the user is mapped to a decoding-time memory use. The user just needs to look it up if they want to control decoder memory use.
If one benchmarks zstd with `20` and brotli with `20`, zstd may be using 32 MB of decoding memory, where one is specifying 1 MB for brotli. By default zstd tends to use 8 MB for decoding (however it is variable with encoding effort setting) and brotli 4 MB for decoding (not changing with the encoding effort setting).
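To make the knobs concrete, a sketch (the levels and window sizes are illustrative, not what any distro actually ships):

zstd -19 --long=28 file               # 2^28 = 256 MB window; decoding needs a comparable amount of RAM
zstd -d --long=28 file.zst            # windows above zstd's 128 MB default limit must be allowed explicitly at decode time
brotli -q 11 --large_window=28 file   # brotli takes the window directly as log2, so also 256 MB here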
I’ve used LZ4 and Snappy in production for compressing cache/mq payloads. This is on a service serving billions of clicks in a day. So far I'm very happy with the results. I know zstd requires more CPU than LZ4 or Snappy on average, but has anyone used it under heavy traffic loads on web services? I am really interested in trying it out but at the same time held back by “don’t fix it if it ain’t broken”.
Use Lz4 where latency matters, Zstd if you can afford some CPU.
I have a server that spools off the entire New York stock and options market every day, plus Chicago futures, using Lz4. But when we copy to archive, we recompress it with Zstd, in parallel using all the cores that were tied up all day.
There is not much size benefit to more than compression level 3: I would never use more than 6. And, there's not much CPU benefit for less than 1, even though it will go into negative numbers; switch to Lz4 instead.
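In command-line terms that spread looks roughly like this (the file name is a placeholder):

zstd --fast=3 bigfile     # the 'negative' levels: less compression, closer to lz4 speeds
zstd -3 bigfile           # the default level, the sweet spot described above
zstd -6 -T0 bigfile       # about as high as I'd bother with; decompression speed barely changes either way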
Maybe. The thing is: zstd is quite close, and unlike lz4, zstd has a broad curve of supported speed/ratio tradeoffs. Unless you're huge and engineering effort is essentially free, or at least the micro-optimization for one specific ratio is worth the tradeoff, you may be better off choosing the solution that's less opinionated about the settings. If it then turns out that you care mostly about decompression speed + compression ratio and a little less about compression speed, it's trivial to go there. Or maybe it turns out you only sometimes need the speed, but usually can afford spending a little more CPU time - so you default to higher compression ratios, but under load use lower ones (there's even a streaming mode built in that does this for you for large streams). Or maybe your dataset is friendly to the parallelization options, and zstd actually outperforms lz4.
If you know your use case well and are sure the situation won't change (or don't mind swapping compression algorithms when they do), then lz4 still has a solid niche, especially where compression speed matters more than decompression speed. But in many if not most cases I'd say it's probably a kind of premature optimization at this point, even if you think you're close to lz4's sweet spot.
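(For the record, the built-in streaming mode I mentioned is, if I recall correctly, zstd's --adapt flag, which raises or lowers the level on the fly depending on how fast the output is draining. A made-up example of the kind of pipeline where that helps -- the host name is a placeholder:)

zstd --adapt -T0 -c bigdump | ssh backup.example 'cat > bigdump.zst'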
For those who want a TL;DR:
The trade-off is a 0.8% increase in package size for a 1300% increase in decompression speed.
Those numbers come from a sample of 542 packages.
> If you nevertheless haven't updated libarchive since 2018, all hope is not lost! Binary builds of pacman-static are available from Eli Schwartz' personal repository, signed with their Trusted User keys, with which you can perform the update.
I am a little shocked that they bothered; Arch is rolling release and explicitly does not support partial upgrades (https://wiki.archlinux.org/index.php/System_maintenance#Part...). So to hit this means that you didn't update a rather important library for over a year, which officially implies that you didn't update at all for over a year, which... is unlikely to be sensible.
Arch is actually surprisingly stable and even with infrequent updates on the order of months still upgrades cleanly most of the time. The caveats to this were the great period of instability when switching to systemd, changing the /usr/lib layout, etc but those changes are now pretty far in the past.
Sure, and I've done partial upgrades and it was mostly fine:) It just surprised me to see the devs going out of their way to support it on volunteer time. On the other hand, maybe that's exactly the reason; maybe someone said "hey look, I can make static packages that are immune to library changes! I guess I'll publish these in case they're useful". Open source is fun like that:)
I remember that time. I had successfully migrated 2 systems to systemd from init. One was a production server. I felt like a genius at the time. Of course all the Arch devs did all the real work :)
(I wanted the challenge of running arch in production just to learn, good times)
That's only not sensible if you continued to use that computer for the year. You might have just not used it for a year, which doesn't seem unlikely. In fact I just updated my Arch desktop, which I had indeed not used for more than a year :)
pacman-static existed already, and can be used to fix some of the most broken systems in a variety of circumstances. So, they didn't make it just for this, might as well mention it as the right tool to fix the problem should it occur.
A little-known fact is that parallel XZ compresses worse than XZ!
I measured pixz as being approximately 2% worse than xz.
That's because input is split into independent chunks.
In comparison, the 0.8% of zstd looks like a bargain.
The xz tool is not deterministic when compressing. The packaging team might push a few changes upstream, but expecting them to dive into the innards of a compression tool is a bit much.
Maybe the point is that the compressed package can change every time, which is an issue for the reproducible-builds effort many distros are now pursuing. Though I'm not sure why parallelized xz can't behave in a predictable fashion.
Compression speed can matter in general (to improve build times).
For xz, you need to compress with chunking (and maybe indexing, for more benefit) in order to allow parallel decompression to begin with. Otherwise xz produces a single blob that you can't split into independent parts during decompression, which makes using many decompression threads pointless.
But yes, if parallel compression is creating non-determinism, you can do all the compression work with chunking but without parallelism, still allowing parallel decompression. I'm not sure why it even has to create non-determinism in the first place, though.
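For what it's worth, producing a block-split .xz looks something like this (the block size and file name are just examples; note that per the xz docs it's the threaded encoder that records block sizes in the block headers, which is what parallel decompressors generally want, so whether a single-threaded chunked file can be decompressed in parallel may depend on the tooling):

xz -T0 --block-size=16MiB -k bigpayload.tar    # independent 16 MiB blocks, sizes recorded in the block headers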
Most of the published results show very little difference, positive or negative, in decompression speed - where is all this 1300% coming from?
edit: Sorry, my fault - that was decompression RAM I was thinking of, not speed, although I was influenced by my own test where, without measuring, both xz and zstd seemed instant.
I couldn't care less about decompression speed, because the bottleneck is the network, which means that I want my packages as small as possible. Smaller packages mean faster installation; at 54 MB/s or faster decompression rate of xz, I couldn't care less about a few milliseconds saved during decompression. For me, this decision is dumbass stupid.
Per the post, the speedup on decompress is _13x_ while the size is 1.008x.
For those figures, this will give you better total time if your network connection is faster than about 1.25 Mbit/s. For a slow ARM computer with an xz decompress speed of 3 MB/s, the bandwidth threshold for a speedup drops to _dialup_ speeds.
And no matter how slow your network connection is and how fast your computer is you'll never take more than 0.8% longer with this change.
For many realistic setups it will be faster, in some cases quite a bit. Your 54 MB/s xz host should be about 3% faster if you're on a 6 Mbit/s link -- assuming your disk can keep up. A slow host that decompresses xz at 3 MB/s on a 6 Mbit/s link would be a whopping 40% faster.
Why do you care so much about the few extra milliseconds wasted downloading, then? (A 0.8% size increase is ~0.) Also don't forget that Arch can also be used on machines with very slow CPUs but very fast network connections, like many VPSs. I think this will make a tangible difference on mine. This is also a big improvement for package maintainers and anyone building their own packages without bothering to modify the makepkg defaults, e.g. most people using an AUR helper.
There are nice plots [1] showing the transfer+decompression speedup depending on the network bandwidth.
This is for html web compression, but the results are similar for other datasets. For internet transfer more compression is better than more decompression speed.
You can make your own experiments incl. the plots with turbobench [2]
I use xz on my A1200 all of the time, and Amiga is the stereotypical system where maximum possible compression matters over everything else. Don't make assumptions about me.
Because my Amiga: volume is actually an NFS share on my Solaris 10 central storage server in the basement. With xz(1) I'm easily portable across systems.
And you're talking to a guy who had his C128D spend whole nights crunching intros + pic + game at 1 MHz in Cruel Cruncher 2.5+ and, before that, Time Cruncher 5.0. xz(1) on an MC68030 or the Vampire is super fast in comparison.
I don't know whether it'll build because I'm interested in maximum compression -- time spent compressing is immaterial to me since it's only done once. It's like compiling -- I don't care how long it takes if it generates fast machine code, because it's running the generated machine code that will matter many, many times afterwards.
Is Core2 Quad with 8GB of RAM or a Sun X4100 M2 "older hardware"?
How is "Now using Zstandard instead of xz for package compression" followed by the minuscule low-contrast grey "(archlinux.org)" better than "Arch Linux now using Zstandard instead of xz for package compression" like it was when I originally read this a few hours ago?