
Interestingly, the same effect shows up in communications systems. The more efficient an error correction code (i.e. the closer it approaches the Shannon bound), the more catastrophically it fails once channel capacity is exceeded. A "perfect" code delivers error-free output right up to the Shannon bound, then nothing but meaningless garble (a 50% bit error rate) beyond it.

My point is that error correction codes have a precise mathematical definition and have been deeply studied. Maybe there is a general principle at work in the wider world, one amenable to precise proof and analysis? (My guess is that progress could be made by applying information theory, as used to analyse error correcting codes.)
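To make the cliff concrete, here is a rough sketch of my own (an idealised "perfect" code is assumed, not any real decoder) for a binary symmetric channel: capacity is C(p) = 1 - H2(p), and a capacity-achieving code of rate R decodes cleanly while R < C(p), then collapses to coin-flip output once R > C(p).

    # Sketch: the Shannon-bound cliff on a binary symmetric channel (BSC).
    # Idealised model only: a "perfect" rate-R code gives zero post-decoding
    # errors below capacity and ~50% bit error rate above it.
    import numpy as np

    def h2(p):
        """Binary entropy in bits, clipped so H2(0) = H2(1) = 0."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def bsc_capacity(p):
        """Capacity of a BSC with crossover probability p."""
        return 1.0 - h2(p)

    def ideal_code_ber(rate, p):
        """Post-decoding BER of the idealised capacity-achieving code."""
        return 0.0 if rate < bsc_capacity(p) else 0.5

    rate = 0.5  # code rate R = k/n
    for p in np.linspace(0.02, 0.20, 10):
        print(f"p={p:.3f}  C={bsc_capacity(p):.3f}  BER={ideal_code_ber(rate, p):.2f}")

For rate 0.5 the transition sits near p ≈ 0.11: the printed BER is 0 below that crossover probability and jumps straight to 0.5 above it, with no graceful degradation in between.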




An interesting idea, but I'd imagine you would have to operate within something like the "100-year flood" boundaries that insurance companies use in order to define a constrained domain analogous to the Shannon bound. I suspect you would also have to define the scope of this principle within the company and/or deal with the compounding effects of the system's multiple layers and its "effective inefficiency."



