
I've had similar thoughts when it comes to recognizing the (potential) underlying simplicity of a phenomenon of interest.

For example, consider a toy experiment where you point dozens of high-speed sensors at a rig in order to study basic spring dynamics (i.e. Hooke's law).

You could apply "big data analytics" or ML methods to break apart the dynamics and predict future positions based on past positions.

But hopefully, somewhere along the way, you have some means of recognizing that it is a simple 1D phenomenon, and that most of the volume of data you collected is fairly pointless for that goal.
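As a rough illustration of what "recognizing it's 1D" could look like in practice, here's a minimal sketch (assuming NumPy and scikit-learn; the sensor model and all the numbers are made up): simulate many redundant sensor channels watching one spring, then check how much variance a single principal component explains.

    # Minimal sketch: many noisy sensors, one underlying 1D spring coordinate.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # True 1D dynamics: x(t) = A*cos(omega*t) for a Hooke's-law spring.
    t = np.linspace(0, 10, 2000)
    x = 1.5 * np.cos(2.0 * t)

    # Dozens of "sensors": each one is a fixed linear view of x plus noise.
    n_sensors = 48
    gains = rng.normal(size=(n_sensors, 1))            # per-sensor gain
    readings = x[:, None] @ gains.T                    # shape (2000, 48)
    readings += 0.01 * rng.normal(size=readings.shape)

    # PCA: nearly all variance lies along one direction, i.e. the data
    # is effectively one-dimensional despite the 48 channels.
    pca = PCA().fit(readings)
    print(pca.explained_variance_ratio_[:3])           # ~[0.999..., tiny, tiny]

Real sensor geometry is rarely this linear, so in practice you'd reach for something like a nonlinear dimensionality estimate, but the point stands: the intrinsic dimension is far smaller than the raw data suggests.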




Almost all deep learning progress is optimization on a scale going from 'incredibly inefficient use of space and time' to 'quite wasteful' to 'optimal'. You're jumping the gap from 'quite wasteful' to 'optimal' in one step because you understand the problem. If you could find a way to do that algorithmically, you'd likely have created an actual AI.



