
I applaud your clever replacement of the now trite "paradigm shift" with "Kuhnian".

I haven't read the latest from Nagel so I can't speak to that, but Searle's arguments work against only the most simplistic approaches to symbolic AI (which is no longer a very common framework), and his continued willful misunderstanding of how people in the field tend to think about cognition betrays a lot of essentialism on the topic. Searle's attacks may stand against a naive form of functionalism, but no one cares that "Watson doesn't know it won".

Added: I think non-materialist thinking hasn't caught on in most fields of scientific endeavor because holding non-materialist views in no way advances those occupations. It isn't the funding; it's that no one has come up with a way in which a dualistic approach lets you generate better testable hypotheses! If someone did, the funding would come.

I'm not sure if you're trolling; I'm mostly responding because your comment is just on the edge of being reasonable.




> but no one cares that "Watson doesn't know it won".

I care, especially since Watson doesn't know when it has lost and doesn't learn from its mistakes, because it doesn't actually understand anything. Watson's fragile intelligence relies on programmers tweaking its algorithms for every particular subject. It's a glorified search engine, not a true advance in general AI.


I doubt the parent meant that literally. It was probably more a catchy way of saying "we don't care what happens inside the Chinese room". Of course IBM Watson, specifically, doesn't exhibit human-level intelligence.


Yes, the classic definition of "general AI" is "whatever computers can't do". That definition is thoroughly out of fashion now.



