It all depends on how much reliability you are willing to give up for performance.
Because I have the best storage performance you'll ever find anywhere, 100% money-back guaranteed: write to /dev/null. It comes with the downside of 0% reliability.
You can write to a disk without a filesystem, sequentially, until you run out of space. Quite fast actually, and reliable, until you reach the end; then reliability drops dramatically.
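Roughly what that looks like, sketched in Python; the device path is a placeholder, the 1 MiB write size is arbitrary, and running this clobbers whatever is already on the device:

```python
import os

# Sketch only: sequential writes straight to a block device, no filesystem.
# "/dev/sdX" is a placeholder -- this destroys any existing data on it.
dev = "/dev/sdX"
chunk = b"\x00" * (1 << 20)  # 1 MiB per write, purely illustrative

fd = os.open(dev, os.O_WRONLY)
offset = 0
try:
    while True:
        written = os.pwrite(fd, chunk, offset)
        if written == 0:      # some kernels report end-of-device this way
            break
        offset += written
except OSError:
    # ENOSPC/EIO once you hit the end of the device -- the point where
    # "reliability drops dramatically"
    pass
finally:
    os.fsync(fd)
    os.close(fd)
```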
Trouble is you can't use /dev/null as a filesystem, even for testing.
On a related note, though, I've considered the idea of creating a "minimally POSIX-compliant" filesystem that randomly reorders and delays I/O operations whenever standards permit it to do so, along with any other odd behavior I can find that remains within the letter of published standards (unusual path limitations, support for exactly two hard links per file, sparse files that require holes to be aligned on 4,099-byte boundaries in spite of the filesystem's reported 509-byte block size, etc., all properly reported by applicable APIs).
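For flavor, here is a bare-bones sketch of the delay half of that idea using fusepy (a real FUSE binding for Python). Everything specific is made up: the class name, paths, and delay policy are hypothetical, the passthrough only handles reads and in-place overwrites of existing files, and the reordering part would additionally need a write buffer that isn't flushed until fsync:

```python
import os, random, time
from fuse import FUSE, Operations  # pip install fusepy

class AwkwardFS(Operations):
    """Passthrough FS that stalls every call for a random few ms --
    POSIX promises nothing about latency."""

    def __init__(self, root):
        self.root = root

    def _real(self, path):
        return os.path.join(self.root, path.lstrip("/"))

    def _dawdle(self):
        time.sleep(random.uniform(0, 0.005))  # arbitrary delay policy

    def getattr(self, path, fh=None):
        self._dawdle()
        st = os.lstat(self._real(path))
        return {k: getattr(st, k) for k in (
            "st_mode", "st_nlink", "st_uid", "st_gid",
            "st_size", "st_atime", "st_mtime", "st_ctime")}

    def readdir(self, path, fh):
        self._dawdle()
        return [".", ".."] + os.listdir(self._real(path))

    def read(self, path, size, offset, fh):
        self._dawdle()
        with open(self._real(path), "rb") as f:
            f.seek(offset)
            return f.read(size)

    def write(self, path, data, offset, fh):
        # A fuller version would queue these and replay them in shuffled
        # order at fsync time; nothing is durable before fsync anyway.
        self._dawdle()
        with open(self._real(path), "r+b") as f:
            f.seek(offset)
            return f.write(data)

if __name__ == "__main__":
    # Mirror an existing directory at a mountpoint, awkwardly.
    FUSE(AwkwardFS("/some/real/dir"), "/mnt/awkward", foreground=True)
```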
Yeah, I've had good experience with bypassing the fs layer in the past; especially on an HDD the gains can be insane. But it won't help here, as I still need a more-or-less POSIX-y read/write API.
P.S. I'm fairly certain that /dev/null would lose my data a bit more often than once a week.