
> ... the received signal strength was roughly 100 times weaker than the background noise.

Okay, this sniped my curiosity. Where can I read more about the math/physics that makes this possible?




Apart from the cool story about error correction, in general detecting a signal that's weaker than background noise is something we do in statistics all the time.

If you have big enough samples, you can tell a normal distribution with mean 0.01 apart from one with mean 0, even if both have a standard deviation of 1000.

Error correcting codes are in some sense a very cool and efficient way to get the equivalent of that bigger sample.

(You could also just send your signal many, many times. But simple repetition is a vastly inferior solution to the ingenious codes people came up with.)
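
A quick numerical sketch of that statistical point. The numbers are scaled down so it runs instantly: distinguishing mean 0.01 from mean 0 against std 1000 would need on the order of 10^12 samples, so this demo uses std 1 (needing ~10^6); the logic is identical, only the mean-to-std ratio matters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sigma, mu = 1.0, 0.01   # per-sample "SNR" is tiny: mean is 1% of the std

signal = rng.normal(mu, sigma, n)   # "signal present" channel
noise  = rng.normal(0.0, sigma, n)  # "signal absent" channel

# The sample mean has standard error sigma/sqrt(n), so averaging shrinks
# the noise while the (tiny) true mean stays put.
se = sigma / np.sqrt(n)
z_signal = signal.mean() / se   # about mu/se = 10 standard errors from 0
z_noise  = noise.mean() / se    # about 0, give or take 1

print(f"z with signal: {z_signal:.1f}, z without: {z_noise:.1f}")
```

With a million samples the "signal present" case sits around ten standard errors away from zero, while the pure-noise case hovers near zero.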


Error correcting codes are certainly a part of it (from simply looking up the definition of FT8), but what can also help generally is knowing very specifically the frequency range of your signal so that you can filter out the out-of-band noise. A lock-in amplifier is one example of this: https://en.wikipedia.org/wiki/Lock-in_amplifier
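
A minimal lock-in-style sketch of that idea (all the numbers here are invented for the demo): multiply the input by sine/cosine references at the precisely-known frequency, then average. The averaging acts as an extremely narrow low-pass filter, so noise at every other frequency cancels toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0 = 10_000, 137.0        # sample rate and the precisely-known signal frequency
n = 2_000_000                 # 200 seconds of samples
t = np.arange(n) / fs

amp = 0.01
x = amp * np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, 1.0, n)  # deeply buried

# Lock-in detection: mix with in-phase and quadrature references at f0,
# then average. Out-of-band noise averages toward zero; the signal doesn't.
i = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
q = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
print(f"recovered amplitude: {np.hypot(i, q):.4f}")  # close to amp = 0.01
```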


In college, we had a class called "Detection Theory", which handled this topic - and was required for people that studied signal processing / telecom / telemetry / remote sensing / etc.

Here's a quick wiki link for that topic:

https://en.wikipedia.org/wiki/Detection_theory


Search for error-correcting codes: turbo codes, convolutional codes (decoded with the Viterbi algorithm), Golay codes, Reed-Solomon codes, Gallager (LDPC) codes... the list goes on.

Basically, error correction can supply “coding gain”. So in the path-loss equation, you get to add gain for error correction.

In the above case, if the signal is 100 times weaker than the noise, you need a code with 20 dB of coding gain to reach break-even, if that makes sense.
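
The dB arithmetic, for anyone checking (assuming "100 times weaker" refers to power):

```python
import math

snr = 1 / 100                    # received signal power / noise power
snr_db = 10 * math.log10(snr)    # power ratio expressed in decibels
print(snr_db)                    # -20.0

coding_gain_db = 20.0            # gain needed just to reach break-even
print(snr_db + coding_gain_db)   # 0.0
```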


I think in addition to all the other replies, a key to how this happens on more of a physics level is how spread-spectrum communications work. In short, it’s a technique to ‘squish’ a high-amplitude, narrow-bandwidth signal down to a low-amplitude, wide-bandwidth signal. The resulting low-amplitude signal can be lower than the background noise. Look at Fig. 6 here: https://www.maximintegrated.com/en/design/technical-document...


The general field is signal processing. In particular, for pulling data out of noise there are error-correcting codes and algorithms like Viterbi decoding; putting them together lets you pull data out of noise, really signal out of interference. There's an entry for FT8 on Wikipedia that I haven't digested, but it has some other starting points.

As to the physics there's also the fact that the phenomenon that creates a good transmission line between participants is in flux and you have to get your data through while the transmission line passes enough signal that the receiver can decode it.


To add to the things that make this possible: take a look at DSSS, or signal spreading.

This is a method of spreading the power of your transmission over a much wider part of the spectrum. By itself, this doesn't help much if your total transmit power is limited. However, it does help if your transmit power per slice of bandwidth is limited.

This is used for a few purposes. The first is spectrum sharing (which requires orthogonal codes). The second is detection avoidance (lower power per unit of bandwidth makes the signal much harder to recognize unless you already know how to de-spread it). The third is resistance to 'point noise sources' blocking your signal: e.g. a 1 MHz Wi-Fi burst might drown out your signal if you use 1 MHz of bandwidth, but if you use 10 MHz, it only blocks one tenth of your signal.
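
A toy DSSS round-trip (spreading factor and amplitudes are arbitrary demo values): each data bit is multiplied by a pseudo-random ±1 chip sequence; de-spreading multiplies by the same code and sums, so the signal adds coherently (growing with the spreading factor) while the noise adds incoherently (growing only with its square root).

```python
import numpy as np

rng = np.random.default_rng(2)
sf = 4096                                    # spreading factor (chips per bit)
code = rng.choice([-1.0, 1.0], sf)           # pseudo-random chip sequence

bits = rng.choice([-1.0, 1.0], 50)           # data bits as +/-1
tx = np.repeat(bits, sf) * np.tile(code, bits.size)    # spread the bits

rx = 0.1 * tx + rng.normal(0.0, 1.0, tx.size)  # signal power 100x below noise

# De-spread: multiply by the same +/-1 code (its own inverse) and sum
# within each bit. Signal grows ~sf, noise only ~sqrt(sf).
despread = (rx.reshape(-1, sf) * code).sum(axis=1)
decoded = np.sign(despread)
print("bit errors:", int((decoded != bits).sum()))  # 0
```

With sf = 4096 the per-bit decision SNR after de-spreading is about 0.1·√4096 ≈ 6.4 standard deviations, so all 50 bits come back clean despite the raw signal sitting 20 dB below the noise.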


100 times weaker is not yet a limit, by far. GPS can be 10,000,000 times below the noise floor, and some advanced systems may take it even further.


I never understood why most GPS receivers seem to use only 1-bit ADCs. Surely you're just making an already hard problem harder by adding a massive amount of quantization noise?


GPS is a very old system. I think a lot of GPS IP has just been copied and pasted since, without much rethinking.


1-bit ADCs make the initial correlation processing easier and cheaper.
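
To see why 1-bit sampling is tolerable for GPS-style processing: the sign of the input keeps most of the correlation information (the classic figure for hard-limiting a noise-dominated input is roughly a 2 dB SNR loss). A rough sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
code = rng.choice([-1.0, 1.0], 1023)              # PRN-like +/-1 spreading code
rx = 0.1 * np.tile(code, 100) + rng.normal(0.0, 1.0, 1023 * 100)

one_bit = np.sign(rx)                             # the entire "ADC": just the sign

# Correlate the full-precision and 1-bit streams against the known code,
# and the 1-bit stream against an unrelated code for comparison.
full  = float((rx.reshape(100, 1023) * code).sum())
hard  = float((one_bit.reshape(100, 1023) * code).sum())
wrong = float((one_bit.reshape(100, 1023) * rng.choice([-1.0, 1.0], 1023)).sum())
print(full, hard, wrong)   # both real correlations spike; the wrong code doesn't
```

The 1-bit correlation peak comes out somewhat smaller than the full-precision one, but it is still unmistakable against the unrelated code, which is why the cheap comparator-style front end is good enough.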


The Deep Space Network is capable of communicating with the Voyager probes, 100+ AU from Earth, with so little signal it’s incredible.


The term you're looking for is Forward Error Correction: send extra bits so you can lose a few along the way.


That takes care of lost information, but how do you reliably separate signal from noise? Can a simple FIR filter work in this situation?


FT8 syncs everyone's clocks on 15-second intervals; you can also send well-known sequences at the start of a transmission and search for those.
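
Searching for a known sequence is just a sliding correlation (preamble, offset, and amplitude here are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(3)
preamble = rng.choice([-1.0, 1.0], 256)   # well-known sync sequence

stream = rng.normal(0.0, 1.0, 10_000)     # received stream: mostly noise
start = 4_321
stream[start:start + preamble.size] += 0.6 * preamble  # weak buried copy

# Slide the known preamble along the stream; the correlation spikes
# where the transmission actually begins.
corr = np.correlate(stream, preamble, mode="valid")
print("detected start:", int(np.argmax(corr)))  # 4321
```

Even though the embedded copy is well below the noise power, the 256 chips of the preamble add coherently at the right offset and the spike stands far above the background.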


FIR filter alone cannot do this, but combined with special coding like Direct Sequence Spread Spectrum (DSSS) does it pretty well. For example, CDMA and GNSS benefit from using DSSS.


More accurately, extra bits are added to the transmitted stream so that bit errors in the received signal can be corrected.
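
A concrete small example of that, using the classic Hamming(7,4) code (one of many FEC schemes, picked here only because it's tiny): 4 data bits become 7 transmitted bits, and any single flipped bit can be located and corrected.

```python
import numpy as np

# Hamming(7,4) generator and parity-check matrices in systematic form:
# G = [I4 | P], H = [P^T | I3], so H @ (any codeword) = 0 mod 2.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2                  # encode: 7 bits on the wire

received = codeword.copy()
received[2] ^= 1                         # channel flips one bit

syndrome = H @ received % 2              # nonzero syndrome => an error occurred
# The syndrome equals the column of H at the error position.
err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
corrected = received.copy()
corrected[err] ^= 1                      # flip it back
print((corrected[:4] == data).all())     # True: data recovered
```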


See also: information theory and channel capacity



