A 1% error rate for corrupting other blocks is prohibitively high. A file system would have to do extensive forward error correction in addition to checksumming to have any chance of working with this, and it would also have to perform a lot of background scrubbing to stay ahead of the rot. While that's interesting to model, and maybe even relevant as a research problem given the steadily worsening bandwidth-to-capacity ratio of affordable bulk storage, I don't expect many users would accept the overhead required to come even close to a usable file system on a device as bad as the one you described.
Well, it’s sort of redundant. According to [1], the raw bit error rate of the flash memory inside today’s SSDs is already in the 0.1%-1% range. And so the controllers inside the SSDs already do forward error correction, more efficiently than the host CPU could do it since they have dedicated hardware for it. Adding another layer of error correction at the filesystem level could help with some of the remaining failure modes, but you would still have to worry about RAM bitflips after the data has already been read into RAM and validated.
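To put those raw numbers in perspective, here's a rough back-of-the-envelope in Python. The 1 KiB codeword size and the 1e-15 failure target are illustrative assumptions on my part, not vendor specs:

```python
import math

# Rough model: independent bit flips at the raw bit error rate (RBER),
# over an assumed 1 KiB ECC codeword. Real controllers run LDPC/BCH over
# codewords of roughly this size; exact parameters are vendor-specific.
N = 1024 * 8  # codeword size in bits (assumption)

def ecc_failure_prob(t: int, rber: float, n: int = N) -> float:
    """P(more than t of n bits flip): the chance that a code correcting
    up to t errors per codeword can't repair it. Binomial tail, summed
    in log space so the huge binomial coefficients don't overflow."""
    total = 0.0
    for k in range(t + 1, n + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1)
                    + k * math.log(rber) + (n - k) * math.log1p(-rber))
        total += math.exp(log_term)
    return total

for rber in (1e-3, 1e-2):  # the 0.1%-1% range quoted above
    mean_flips = rber * N
    # smallest correction strength that pushes codeword failures below
    # 1e-15, a commonly cited target for uncorrectable errors
    t = next(t for t in range(int(mean_flips), N)
             if ecc_failure_prob(t, rber) < 1e-15)
    print(f"RBER {rber:g}: ~{mean_flips:.0f} flips expected per 1 KiB "
          f"codeword, need to correct ~{t} bits to get below 1e-15")
```

At the top of that range you're correcting well over a hundred bits per kilobyte on every read, which is exactly the kind of work you want in the controller's dedicated hardware rather than on the host CPU.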
ZFS will do this. Give it a RAIDz-{1..3} setup and you get parity on every stripe, which is effectively the FEC. Every read has its checksum verified, and if a block fails the check, ZFS reconstructs it from parity and rewrites the healed copy right away. You're right that performance will eventually get worse and worse as constant errors force more and more repair writes and full scrubs, but it can generally handle things pretty well.
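For anyone curious what that read path looks like, here's a toy single-parity sketch in Python. It's a big simplification of the real thing (ZFS keeps checksums in the block pointer tree and calls per-block repair "self-healing"), but the shape of the logic is the same:

```python
import hashlib
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def checksum(block: bytes) -> bytes:
    return hashlib.sha256(block).digest()

def read_with_repair(blocks: list, parity: bytes, sums: list) -> list:
    """Verify every block's checksum on read; reconstruct at most one bad
    block by XOR-ing the parity with the surviving blocks."""
    bad = [i for i, b in enumerate(blocks) if checksum(b) != sums[i]]
    if not bad:
        return blocks
    if len(bad) > 1:
        raise IOError("more corrupt blocks than single parity can rebuild")
    i = bad[0]
    repaired = reduce(xor, (b for j, b in enumerate(blocks) if j != i), parity)
    assert checksum(repaired) == sums[i]
    blocks[i] = repaired  # write the healed copy back in place
    return blocks

# Demo: silently corrupt one block, then read the stripe back healed.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor, data)
sums = [checksum(b) for b in data]
data[1] = b"BXBB"
print(read_with_repair(data, parity, sums))  # [b'AAAA', b'BBBB', b'CCCC']
```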
Let's say you have 6 drives in raidz2. With a 1% silent failure chance per block, writing a set of 6 blocks has about a 0.002% chance of silently losing data: raidz2 survives any two bad blocks, so the stripe is lost only when three or more of the six fail, which is roughly C(6,3) × 0.01³ ≈ 0.002%. And ZFS doesn't immediately verify writes, so it won't try again.
If that's applied to 4KB blocks, that's a 0.002% failure rate per 16KB of data (four data blocks plus two parity). It takes about 36 thousand sets of blocks to reach 50% odds of losing data, which is only about half a gigabyte. With ZFS's larger default 128K records it's still under twenty gigabytes.
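Spelling that out in Python, under the same assumptions (independent failures, 1% per block, 6-wide raidz2):

```python
import math

# 6-wide raidz2 (4 data + 2 parity) with an independent 1% silent
# failure chance per written block. raidz2 survives any two bad blocks,
# so a stripe is silently lost when 3 or more of its 6 go bad.
n, p = 6, 0.01
p_stripe = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(3, n + 1))
print(f"per-stripe silent loss: {p_stripe:.4%}")  # ~0.002%

# Stripes until 50% odds of at least one loss:
# (1 - p_stripe)^s = 0.5  =>  s = ln 2 / -ln(1 - p_stripe)
stripes = math.log(2) / -math.log1p(-p_stripe)
print(f"stripes to a coin flip: {stripes:,.0f}")  # ~35k

for block_kib in (4, 128):  # 4K blocks vs ZFS's default 128K recordsize
    gib = stripes * 4 * block_kib / 1024**2  # 4 data blocks per stripe
    print(f"{block_kib}K blocks: 50% odds of data loss by ~{gib:.1f} GiB")
```

Either way you hit coin-flip odds of silent data loss within the first twenty gigabytes written.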
And that's without even adding the feature where writing one block will corrupt other blocks.