Hacker News

Brooks' 'Intelligence Without Representation' (http://people.csail.mit.edu/brooks/papers/representation.pdf) opens with a pretty strong argument, imo, against the 'stick-together' AGI story you're describing.



I think Brooks' Cog initiative was an attempt to 'ground' the robot's perception of the physical world as the basis for a rich, scalable representational model. But it looks like that line of investigation ended ~2003 with Brooks' retirement. Too bad, given how well-suited deep nets seem for implementing it.

http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/o...


Thanks for the link to this interesting paper.

I think we're seeing some recapitulation of those arguments WRT 'ensembles of DL models' approaches.


I agree. Google has put out some papers that are, to put it harshly, just DL models glued together, followed by loads of training on their compute resources.
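For what 'gluing together' usually amounts to in practice, here's a minimal sketch of the simplest case: averaging the output distributions of several independently trained models. The toy model_a/model_b/model_c functions are hypothetical stand-ins for trained deep nets, not taken from any particular paper.

```python
# Toy "trained models": each returns a probability distribution over
# 3 classes for an input x. In a real ensemble these would be deep
# nets trained separately; here they are hypothetical placeholders.

def model_a(x):
    return [0.7, 0.2, 0.1]

def model_b(x):
    return [0.6, 0.3, 0.1]

def model_c(x):
    return [0.5, 0.1, 0.4]

def ensemble(models, x):
    """Glue models together by averaging their per-class probabilities."""
    outputs = [m(x) for m in models]
    n = len(outputs)
    return [sum(col) / n for col in zip(*outputs)]

probs = ensemble([model_a, model_b, model_c], x=None)
print([round(p, 3) for p in probs])  # → [0.6, 0.2, 0.2]
```

The architectural idea is that simple; the heavy lifting is in training the member models, which is where the compute resources come in.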


Not just Google. The FractalNet paper comes to mind.
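FractalNet (Larsson et al., 2016) is a good example because its whole architecture is one recursive composition rule: f_1(z) = layer(z), and f_{C+1}(z) = join(layer(z), f_C(f_C(z))). A toy sketch of that rule, with a scalar increment standing in for a conv block and an average standing in for the elementwise-mean join used in the paper:

```python
def make_fractal(layer, join, depth):
    """Build the fractal expansion f_depth as a plain function.

    f_1(z)     = layer(z)
    f_{C+1}(z) = join(layer(z), f_C(f_C(z)))
    """
    if depth == 1:
        return layer
    prev = make_fractal(layer, join, depth - 1)
    return lambda z: join(layer(z), prev(prev(z)))

layer = lambda z: z + 1          # toy "conv block": increment
join = lambda a, b: (a + b) / 2  # toy join: average the two branches

f3 = make_fractal(layer, join, 3)
print(f3(0.0))  # → 2.0
```

The longest path through f_C passes through 2^(C-1) layers, so depth grows geometrically from one rule; the shallow branch at every level is what makes the deep network trainable.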





