What's wrong with using double floats as the only numerical type? (john.freml.in)
4 points by gnosis on Jan 27, 2011 | 1 comment



The author writes: "This is not true in floating point — also (a+b)+c may not be a+(b+c)."

But in the case he mentions, for integral numbers up to 2^53, double-precision floating point would also preserve the above identity. Doubles are not bad as long as you understand them.
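
A quick Python sketch (Python's float is an IEEE 754 double) illustrating both halves of that, using nothing beyond the standard interpreter:

    # Associativity can fail for general doubles:
    print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))   # 0.6

    # But for integers whose values and intermediate sums stay below 2**53,
    # every result is represented exactly, so the grouping doesn't matter:
    a, b, c = 2.0**52, 1.0, 3.0
    print((a + b) + c == a + (b + c))   # True

    # Past 2**53 the exact-integer guarantee is gone:
    print(2.0**53 == 2.0**53 + 1.0)    # True -- the +1 is absorbed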



