The feature that caught my eye is the concept of having different targets.
A fast SSD can be set as the target for foreground writes, but that data will be transparently copied in the background to a "background" target, i.e. a large/slow disk.
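Roughly, that setup looks like this at format time (device names and labels here are hypothetical; see the bcachefs docs for the exact options):

```shell
# Label each device, then point foreground writes at the SSD tier
# and background rebalancing at the HDD tier.
bcachefs format \
    --label=ssd.ssd1 /dev/nvme0n1 \
    --label=hdd.hdd1 /dev/sda \
    --foreground_target=ssd \
    --background_target=hdd \
    --promote_target=ssd
mount -t bcachefs /dev/nvme0n1:/dev/sda /mnt
```

Writes land on the `ssd` tier first and are migrated to `hdd` in the background; `promote_target` pulls hot data back onto flash on read.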
You can also have that at block level (which is where bcache itself comes from). Facebook used it years ago, and I had it on an SSD+HDD laptop... a decade ago at least? Unless you specifically want the filesystem itself to be aware of the tiering, it's ready to go now.
See the lvmcache(7) manpage, which I think may be what the earlier poster was thinking of. It isn't an asymmetric RAID mode, but a tiered caching scheme where you can, for example, put a faster and smaller enterprise SSD in front of a larger and slower bulk store. So you can have a large bulk volume but the recently/frequently used blocks get the performance of the fast cache volume.
I set it up in the past with an mdadm RAID1 array over SSDs as a caching layer in front of another mdadm array over HDDs. It performed quite well in a developer/compute workstation environment.
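For anyone curious, the rough shape of that setup is below. VG, LV, and device names are made up; see lvmcache(7) for the real details and sizing guidance:

```shell
# Slow and fast PVs in one volume group.
vgcreate vg0 /dev/md/hdd-raid1 /dev/md/ssd-raid1
# Large origin LV on the HDD array.
lvcreate -n bulk -L 900G vg0 /dev/md/hdd-raid1
# Smaller cache volume on the SSD array.
lvcreate -n bulk_cache -L 100G vg0 /dev/md/ssd-raid1
# Attach the cache volume to the origin LV.
lvconvert --type cache --cachevol bulk_cache vg0/bulk
```

After the `lvconvert`, `vg0/bulk` presents as one device, with hot blocks served from the SSD-backed cache volume.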
> A fast SSD can be set as the target for foreground writes, but that data will be transparently copied in the background to a "background" target, i.e. a large/slow disk.
This is very similar in concept to (or an evolution of?) ZFS's ZIL:
When this feature was first introduced to ZFS in the Solaris 10 days, I ran across an interesting demo from a person at Sun: he was based in a Sun office on the US East Coast, but had access to Sun lab equipment across the US. He mounted iSCSI drives based in (IIRC) Colorado as a ZFS pool and was using them for Postgres work: the performance was, unsurprisingly, not good. He then added a local ZIL to the ZFS pool and got I/O that was not too far off from some local (near-LAN) disks he was using for another pool.
ZIL is just a fast place to write the data for sync operations. If everything is working, the ZIL is never read from; ZFS uses RAM as that foreground bit.
Async writes on a default configuration don't hit the ZIL at all: they sit in RAM for a few seconds, then go to disk. Sync writes go RAM to ZIL, confirm the write, then RAM to pool.
But ZIL is a cache, and not usable for long-term storage. If I combine a 1TB SSD with a 1TB HDD, I get 1TB of usable space. In bcachefs, that's 2TB of usable space.
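For comparison, adding that fast device under ZFS is a separate log (SLOG) vdev, which adds no usable capacity. Pool and device names here are hypothetical:

```shell
# Add an NVMe device as a separate log (SLOG) for sync writes.
zpool add tank log /dev/nvme0n1
# Mirror the SLOG if losing in-flight sync writes would matter:
#   zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
zpool status tank   # the device appears under a "logs" section
```

The 1TB of flash in this example speeds up sync writes but contributes nothing to `tank`'s capacity, which is the asymmetry the parent comment is pointing at.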
> A fast SSD can be set as the target for foreground writes, but that data will be transparently copied in the background to a "background" target, i.e. a large/slow disk.
If this works, it will be awesome.