
The real problem is OSes that write to disk for no good reason.

Windows 10 writes 100KB/s constantly.

That should be illegal.




Try running Windows 10 off of an older spinning laptop drive. It can take upwards of 40 minutes to display the desktop on the first boot after Windows Update runs. Even in normal operation those constant low level writes leave barely any breathing room for your actual applications. Full size hard drives do a bit better, but even then it can be pretty painful when the drive indexing service kicks off or .NET is updated.


Windows 10 on an SSD feels like Windows 7 on a spinning disk. Microsoft has wiped out the gains we got from SSDs.


This doesn't actually matter in a practical sense. Assuming 24/7 operation, that's about 3TB a year, which is roughly 1% of a typical drive's endurance rating.

Also, if you are worried about overwriting the same files over and over, that doesn't matter either. Block device addresses are not physical addresses; the controller maps them so the drive wears evenly.
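Back-of-the-envelope version of that math, in Python (the 300 TB TBW rating is an assumed example for a mid-size consumer drive, not a figure from this thread):

    # Endurance cost of a constant 100 KB/s background write load.
    write_rate = 100 * 1000                # bytes per second
    seconds_per_year = 365 * 24 * 3600
    tb_per_year = write_rate * seconds_per_year / 1e12    # ~3.15 TB/year
    assumed_tbw = 300                      # TB, assumed consumer drive rating
    print(f"{tb_per_year:.2f} TB/year = "
          f"{100 * tb_per_year / assumed_tbw:.1f}% of a {assumed_tbw} TB TBW rating")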


On the other hand, lots of tiny writes scattered all over will tend to produce much higher write amplification than large sequential writes. So you'll get more actual wear to the drive from the 3TB of constant background churn than if you copied in 3TB of movies.
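To put rough numbers on that (the write amplification factors below are illustrative assumptions, not measurements):

    # Same 3 TB of host writes, different write amplification factors (WAF).
    host_tb = 3
    waf_sequential = 1.1    # assumed near-ideal for large sequential copies
    waf_scattered = 4.0     # assumed pessimistic for scattered sub-page writes
    print(f"sequential: ~{host_tb * waf_sequential:.1f} TB actually written to NAND")
    print(f"scattered:  ~{host_tb * waf_scattered:.1f} TB actually written to NAND")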


Those writes would have to be significantly smaller than the SSD's page (sector) size, which is 512 bytes or 4 KiB, and they would have to be written to different pages in rapid succession (so they get flushed separately). A standard serial write wouldn't trigger this even if it's 1 byte at a time; the OS FS cache would buffer it.

It would have to be very misbehaving software or deliberate sabotage.
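A quick way to see that buffering effect at the application level (the scratch file name is made up; the OS page cache does the same kind of coalescing one layer below this):

    # Many 1-byte writes get coalesced into a few large flushes by the buffer.
    import os

    path = "serial_write_demo.bin"             # hypothetical scratch file
    with open(path, "wb", buffering=64 * 1024) as f:
        for _ in range(1_000_000):
            f.write(b"x")                      # 1 byte at a time
    print(os.path.getsize(path), "bytes, written in ~16 large buffered flushes")
    os.remove(path)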


The logical block size presented by SSDs to the host system is 512 bytes or rarely 4kB. But the native page size of the flash memory itself is usually more like 16kB, and the erase block size is several MB at a minimum. Those larger sizes are why random writes (and especially random overwrites) can cause high write amplification within the SSD: because what looks like a series of single-sector writes to the host will at a minimum cause fragmentation within the SSD, and can easily cause large read-modify-write operations within the SSD.

Normally, SSDs and operating systems both use aggressive caching to combine writes. That's the only way a drive can turn in extremely high random write benchmark numbers. Consumer SSDs do this caching even though they do not have power loss protection capacitors to ensure that data cached in volatile SRAM will be flushed to the flash in an emergency. But it wouldn't be smart for the caching to wait forever for more writes to combine with a sub-page write, which is why I'd be concerned that a slow and steady trickle of write activity may be able to cause serious real write amplification.
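A simplified garbage-collection model shows where that amplification comes from: to reclaim an erase block, the controller has to copy its still-valid pages somewhere else first. The sizes below are assumptions in line with the figures in this thread:

    # WAF estimate for reclaiming an erase block with a given number of
    # invalidated pages at garbage-collection time.
    erase_block = 4 * 1024 * 1024          # assumed 4 MiB erase block
    page = 16 * 1024                       # assumed 16 KiB flash page
    pages_per_block = erase_block // page  # 256

    def waf(invalid_pages):
        valid_pages = pages_per_block - invalid_pages
        # NAND pages written per host page written: new data plus copied-forward data
        return (invalid_pages + valid_pages) / invalid_pages

    print(waf(192))   # mostly-stale block: ~1.33x
    print(waf(32))    # mostly-valid block (scattered small overwrites): 8x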


I'm pretty sure SSDs can only do 4 KiB-aligned writes regardless of the FS sector size (under the hood it's write amplification unless the OS or controller manages to coalesce them). But yeah, it depends on how things are getting flushed; generally I wouldn't expect too much magic unless you get lucky. It sounds like a small bug in the OS (i.e. these kinds of writes should be batched in memory by the application).
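A minimal sketch of that kind of application-side batching (the class and file name are hypothetical, not from the thread):

    # Accumulate small records in memory and only write out 4 KiB-aligned chunks.
    ALIGN = 4096

    class BatchedWriter:
        def __init__(self, f):
            self.f = f
            self.buf = bytearray()

        def write(self, data: bytes):
            self.buf += data
            full = (len(self.buf) // ALIGN) * ALIGN
            if full:
                self.f.write(self.buf[:full])   # only aligned multiples hit the file
                del self.buf[:full]

        def close(self):
            if self.buf:
                self.f.write(self.buf)          # final partial chunk on shutdown
            self.f.close()

    w = BatchedWriter(open("batched.log", "wb"))
    for i in range(100_000):
        w.write(f"event {i}\n".encode())
    w.close()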


Almost all SSDs internally track allocations in a 4kB granularity. That size is what leads to the convention of equipping the drive with 1GB of DRAM for every 1TB of NAND flash, when the drive is designed to hold the entire table of logical to physical address mappings in DRAM.

It's now common for consumer SSDs to have less DRAM than the normal 1GB per 1TB ratio, but they run their FTL with the same 4kB granularity and just don't have the full lookup table in RAM. There are at least a handful of special-purpose enterprise drives that use a larger sector size in their FTL, such as the 32kB used by WD's SN340: https://www.anandtech.com/show/14723
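The 1GB-per-1TB convention falls straight out of that mapping-table math (the 4-byte entry size is an assumption):

    # Mapping table size for a flat 4 KiB-granularity FTL.
    capacity = 1e12              # 1 TB of NAND
    granularity = 4 * 1024       # 4 KiB per mapping entry
    entry_size = 4               # assumed bytes per entry
    table_gb = capacity / granularity * entry_size / 1e9
    print(f"~{table_gb:.2f} GB of DRAM for the lookup table")   # ~0.98 GB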


I do wonder if perhaps the good NVMe SSD controllers come with magic. It would only take a single instance of malware ruining SSDs with 4000x write amplification to taint some brands while aiding the marketing of others.


I thought some of them even do 8KB. I’ve seen ZFS tips that claim you should use 8KB blocks on things like an 850 Pro.


Not familiar with that. I know QLC disks have a block size of 64 KiB.


Except "evenly" is not a standard, or something that anyone other than the manufacturer can verify, it's hidden in the firmware so we have no idea really.


Do we know the ___location of those writes? Perhaps they could be redirected to a ramdisk.


Sysinternals tools will show the writes and what is causing them.

I've dug down and found random things doing dumb stuff in the past. Verbose logging turned on by default for some services, for example.
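If you don't want to reach for the Sysinternals tools, a rough cross-platform version of the same check with the psutil package (the counters are totals since each process started, not a live rate):

    # Rank processes by cumulative bytes written.
    import psutil

    rows = []
    for p in psutil.process_iter(["name"]):
        try:
            rows.append((p.io_counters().write_bytes, p.info["name"]))
        except (psutil.AccessDenied, psutil.NoSuchProcess, AttributeError):
            continue

    for written, name in sorted(rows, key=lambda r: r[0], reverse=True)[:10]:
        print(f"{written / 1e6:10.1f} MB  {name}")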


Computer Management / Windows Logs shows more than 100 active logs. Performance / Data Collector Sets shows more than 50 active Event Trace Sessions running.


Browsers are much worse, both Chrome-based and Firefox.





