I think he is referencing several theorems about fault-tolerant computing. As I understand it, the reasoning goes: if we assume some level of correlated noise, then those theorems tell us the error-correcting overhead scales as O(e^n), where n is the number of qubits in the program you want to check for errors. I could be wrong though; see my comment above where I link to the paper.