No. The most effective way to remove HDD noise is to remove the HDDs and add SSDs. I haven't had any HDDs since 2016.
P.S. I also talked to a customer in the past who stored their backups in an SSD-only Ceph cluster. They were citing higher reliability of SSDs and higher density, which was important because they had very limited physical space in the datacenter. In other words, traditional 3.5" HDDs would not have allowed them to store that much data in that many rack units.
SSDs are great. Quieter, can be denser, faster, available in small sizes for small money, more reliable, etc.
But they're not great for low cost bulk storage. If you're putting together a home NAS, you probably want to do well on $/TB and don't care so much about transfer speeds.
But if you've found 10TB+ SSDs for under $200, let us know where to find them.
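For anyone doing the math at home, this is the kind of $/TB comparison I mean; the prices below are illustrative placeholders, not current quotes, so plug in whatever you're actually seeing:

    # Back-of-the-envelope $/TB comparison.
    # Prices are illustrative placeholders, not real market quotes.
    drives = {
        "16TB 3.5in HDD":       {"price_usd": 280,  "capacity_tb": 16},
        "4TB consumer M.2 SSD": {"price_usd": 250,  "capacity_tb": 4},
        "16TB enterprise SSD":  {"price_usd": 1600, "capacity_tb": 16},
    }

    for name, d in drives.items():
        print(f"{name}: ${d['price_usd'] / d['capacity_tb']:.0f}/TB")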
The 4TB M.2 SSDs are getting to a price point where one might consider them. The problem is that it's not trivial to connect a whole bunch of them in a homebrew NAS without spending tons of money.
The best I've found so far are cards like this[1] that allow for 8 U.2 drives, combined with some M.2-to-U.2 adapters like this[2] or this[3].
In a 2x RAID-Z1 or single RAID-Z2 setup that would give 24TB of redundant flash storage for a tad more than a single 16TB enterprise SSD.
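Rough capacity math for that 8 x 4TB layout, ignoring ZFS overhead and the TB-vs-TiB gap:

    # Usable capacity of 8 x 4TB drives under the two layouts above.
    # Simplified: ignores ZFS metadata/padding overhead and TB vs TiB.
    drive_tb = 4

    # Two RAID-Z1 vdevs of 4 drives each, one parity drive per vdev:
    two_raidz1 = 2 * (4 - 1) * drive_tb   # 24

    # One RAID-Z2 vdev of 8 drives, two parity drives:
    one_raidz2 = (8 - 2) * drive_tb       # 24

    print(two_raidz1, one_raidz2)  # ~24TB usable either way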
On AM5 you can do 6 M.2 drives without much difficulty, and with considerably better performance. Your motherboard will need to support x4/x4/x4/x4 bifurcation on the x16 slot; put four drives there [0], then use the two onboard M.2 slots: one runs off the CPU lanes and the other is connected via the chipset.
You can also do it without bifurcation by using a PCIe switch such as [1]. That's more expensive, but it can achieve higher speeds and works in machines whose motherboards don't support bifurcation. The downside is that it draws more power.
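To make the lane counting concrete, here's a rough budget for the bifurcation route; exact slot wiring varies by board, so treat these counts as an assumption rather than a spec:

    # Rough PCIe lane budget for six M.2 drives on AM5 (bifurcation route).
    # Slot wiring varies by board; these counts are an assumption, not a spec.
    lanes_per_nvme = 4

    x16_slot_drives = 16 // lanes_per_nvme  # x4/x4/x4/x4 bifurcation -> 4 drives
    cpu_m2_slot     = 1                     # onboard M.2 slot on CPU lanes
    chipset_m2_slot = 1                     # onboard M.2 slot behind the chipset

    print(x16_slot_drives + cpu_m2_slot + chipset_m2_slot)  # 6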
Right, and whilst 3.0 switches are semi-affordable, 4.0 and 5.0 ones cost significantly more, though how much that matters obviously depends on your workload.
True. I think a switch that could do, for example, PCIe 5.0 on the host side and 3.0 on the device side would be sufficient for many cases, as one lane of 5.0 can serve all four lanes of a 3.0 NVMe drive.
But I realize we probably won't see that.
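Quick sanity check on the one-5.0-lane-equals-four-3.0-lanes claim, using approximate per-lane throughput after encoding overhead:

    # Approximate usable bandwidth per lane (GB/s), after 128b/130b encoding.
    per_lane_gb_per_s = {
        "PCIe 3.0": 0.985,   # 8 GT/s
        "PCIe 4.0": 1.969,   # 16 GT/s
        "PCIe 5.0": 3.938,   # 32 GT/s
    }

    print(per_lane_gb_per_s["PCIe 5.0"])      # ~3.94 GB/s on one 5.0 lane
    print(4 * per_lane_gb_per_s["PCIe 3.0"])  # ~3.94 GB/s across a Gen3 x4 NVMe link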
Perhaps it will be realized with higher PCIe versions, given how tight the signalling margins will get. But the big players have money to throw at this, so we'll see.