I understand that an analog computer can be simulated by a digital computer and vice versa, given enough error correction. That doesn't mean that the speed-up scaling will be the same.
This isn't something that can be answered purely mathematically (with a Turing machine argument); it depends on the specific physics and engineering realities. The trade-off between precision and error rate is determined by math, but the trade-off between error rate and speed is determined by physics.
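To make the first of those trade-offs concrete, here is a minimal Python sketch under an assumed toy model (additive Gaussian readout noise, averaging over repeated samples); the function names `readout_error` and `samples_needed` and the noise figure are hypothetical, not tied to any real analog hardware. It shows the purely mathematical part: how many samples you must average to reach a given bit precision at a given error rate. How long each sample takes, i.e. how that error rate trades against wall-clock speed, is exactly the part the model cannot tell you, because it is set by the device's physics.

```python
import math

def readout_error(bits, noise_sigma, n_samples, full_scale=1.0):
    """Probability that the average of n_samples Gaussian-noisy readings
    lands in the wrong one of 2**bits quantization bins (toy model)."""
    delta = full_scale / 2**bits                     # bin width for `bits` of precision
    sigma_eff = noise_sigma / math.sqrt(n_samples)   # averaging shrinks the noise
    # Gaussian tail probability of straying more than half a bin.
    return math.erfc(delta / (2 * math.sqrt(2) * sigma_eff))

def samples_needed(bits, noise_sigma, target_error=1e-9, full_scale=1.0):
    """Smallest power-of-two number of samples whose readout error
    falls below target_error (coarse search, illustrative only)."""
    n = 1
    while readout_error(bits, noise_sigma, n, full_scale) > target_error:
        n *= 2
    return n

if __name__ == "__main__":
    # Each extra bit of precision roughly quadruples the averaging required --
    # that scaling is pure math.  How many samples per second the hardware
    # can deliver (and hence the actual speed) is physics and engineering.
    for b in range(4, 13, 2):
        print(f"{b} bits -> {samples_needed(b, noise_sigma=0.01)} samples")
```

In this toy model the cost of precision grows exponentially in the number of bits, which is why the speed-up scaling of an analog machine can't be settled by a simulation-equivalence argument alone.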