
> Supervised learning in machine learning is nothing remotely like a human teaching anyone anything.

I disagree; I think it's exactly the same. As an example, consider a human teaching another human how to use an orbital sander to smooth out the rough grain of a piece of wood.

The teacher sees the student bearing down really hard with the sander and hears the sander's RPMs declining, as measured by the frequency of its sound.

The teacher would help the student improve by saying, "Decrease pressure so that you maximize the sander's RPMs. Let the velocity of the sander do the work, not the pressure from your hand."

That's a good application of supervised learning. Hiring the right candidate for your company is not.




But that's not at all how "supervised learning" works. You would do something like have a thousand sanded pieces of wood and columns of attributes of the sanding parameters that were used, and have a human label the wood pieces that meet the spec. Then you solve for the parameters that were likely to generate those acceptable results. ML is brute force compared with the heuristics that human learning can apply. And ML never* gives you results that can be generalized with simple rules.

* excepting some classes of expert systems
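To make that workflow concrete: a minimal sketch of "label the acceptable pieces, then solve for the parameters that likely produced them," using a logistic regression fit by hand. All the numbers, feature names, and the "meets spec" rule below are invented for illustration; a real setup would use measured sanding data.

```python
import math
import random

# Invented toy dataset: each row is (pressure, rpm), labeled 1 if the
# sanded piece met spec, 0 otherwise. The rule "high RPM -> smooth"
# is a stand-in for a human inspector's judgment.
random.seed(0)
data = []
for _ in range(1000):
    pressure = random.uniform(0, 30)                    # newtons (made up)
    rpm = 12000 - 300 * pressure + random.gauss(0, 500)
    label = 1 if rpm > 8000 else 0                      # inspector's label
    data.append((pressure, rpm, label))

# Logistic regression by batch gradient descent: solve for the weights
# most likely to have generated the "meets spec" labels.
w_p, w_r, b = 0.0, 0.0, 0.0
lr = 0.1
n = len(data)
for _ in range(2000):
    gp = gr = gb = 0.0
    for pressure, rpm, label in data:
        x_p, x_r = pressure / 30.0, rpm / 12000.0       # scale features
        z = w_p * x_p + w_r * x_r + b
        pred = 1.0 / (1.0 + math.exp(-z))
        err = pred - label
        gp += err * x_p
        gr += err * x_r
        gb += err
    w_p -= lr * gp / n
    w_r -= lr * gr / n
    b -= lr * gb / n

def predict(pressure, rpm):
    """Fitted probability that a piece sanded this way meets spec."""
    z = w_p * pressure / 30.0 + w_r * rpm / 12000.0 + b
    return 1.0 / (1.0 + math.exp(-z))

# Gentle pressure / high RPM should score higher than the reverse.
print(predict(5, 10500), predict(25, 4500))
```

Note that the fitted weights are just numbers; they don't come out as an articulable rule like "let the velocity do the work," which is the point about ML rarely yielding simple generalizations.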


One of the columns of sanding parameters is the sound of the sander.


Machine learning really has almost nothing in common with most types of human learning. The only type of learning with real similarities is associative learning (think of Pavlov's dog studies).

The human learning situation you describe works quite differently, though: the student either sees the device alone or watches the teacher use it to demonstrate its functionality. This is the moment when most of the actual learning happens: the student forms internal concepts of the device and its interactions with the surroundings. As a result, the student can immediately use the device more or less correctly. What's left is just some fine-tuning of parameters like movement vectors, movement speed, applied pressure, etc.

If the student worked like ML, she would hold the device in random ways: by the cord, the disc, the actual grip. After a bunch of right/wrong responses she would settle on mostly using the grip. Then (or in parallel) she would try out random surfaces to use the device on: her own hand (wrong), the teacher's face (wrong), the wall (wrong), the wood (right), the table (wrong), etc. After a bunch of retries she would settle on mostly using the device on the wood.
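That random-exploration loop is closer to reinforcement learning than to supervised learning; here is a minimal epsilon-greedy bandit sketch of it. The grip/surface options and the feedback function are made up to match the example, not taken from any real system.

```python
import random

random.seed(1)
grips = ["cord", "disc", "grip"]
surfaces = ["own hand", "teacher's face", "wall", "wood", "table"]

# Invented feedback: exactly one grip/surface combination is "right".
def feedback(grip, surface):
    return 1 if (grip == "grip" and surface == "wood") else 0

# Epsilon-greedy trial and error over all 15 grip/surface pairs:
# mostly exploit the best-known pair, sometimes try a random one.
counts = {(g, s): 0 for g in grips for s in surfaces}
values = {(g, s): 0.0 for g in grips for s in surfaces}
for _ in range(2000):
    if random.random() < 0.3:                      # explore
        choice = (random.choice(grips), random.choice(surfaces))
    else:                                          # exploit
        choice = max(values, key=values.get)
    reward = feedback(*choice)
    counts[choice] += 1
    # running average of observed rewards for this pair
    values[choice] += (reward - values[choice]) / counts[choice]

best = max(values, key=values.get)
print(best)
```

After enough right/wrong responses the agent "settles on" the grip and the wood, but only because it blundered through the cord, the face, and the wall first, which is exactly the contrast being drawn with human concept formation.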

It's easy to overlook the actual cognitive accomplishments of us humans in menial tasks like this one, because most of it happens unconsciously. It's not the "I" that creates the cognitive concepts.


That is such a horrible metaphor




