What kind of random projections? Reservoir computing can be seen as random projections: echo state networks, liquid state machines, extreme learning machines. Does the scheme involve extra constraints? Sparsity? Nonnegativity? And which random projection schemes do not behave similarly to humans?
It randomly projects to a lower-dimensional space (unlike the reservoir computing methods I mentioned). That's indeed one of the standard ways to do dimensionality reduction.
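For anyone unfamiliar, here's a minimal sketch of the standard Gaussian random projection. The sizes and names are purely illustrative, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    # 1,000 points in a 10,000-dimensional space (illustrative sizes).
    X = rng.standard_normal((1000, 10_000))

    # Random Gaussian projection matrix down to k = 200 dimensions,
    # scaled by 1/sqrt(k) so pairwise distances are roughly preserved
    # (Johnson-Lindenstrauss).
    k = 200
    R = rng.standard_normal((10_000, k)) / np.sqrt(k)
    X_low = X @ R  # shape (1000, 200)

    # Sanity check: the distance between two points barely changes.
    print(np.linalg.norm(X[0] - X[1]))          # high-dimensional distance
    print(np.linalg.norm(X_low[0] - X_low[1]))  # typically within a few percent

The striking part is that R is completely data-independent: no training, no structure, just noise, and the geometry still survives.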
What looks new to me is using random weights in a sliding window, combined with FAST features. It brings computer-vision structure into random projections that way. Interesting!
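I don't know exactly how the paper wires these together, but in spirit it might look something like this (OpenCV's FAST detector; the patch size, target dimension, and file name are all assumptions on my part):

    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # any test image

    # FAST corner detection (standard OpenCV API).
    fast = cv2.FastFeatureDetector_create(threshold=25)
    keypoints = fast.detect(img, None)

    w = 8   # half-width of the window around each corner (assumed)
    k = 16  # target dimension of the random projection (assumed)
    R = rng.standard_normal(((2 * w) ** 2, k)) / np.sqrt(k)  # fixed random weights

    descriptors = []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        patch = img[y - w:y + w, x - w:x + w]  # sliding window at the corner
        if patch.shape != (2 * w, 2 * w):      # skip windows off the border
            continue
        descriptors.append(patch.ravel().astype(np.float32) @ R)

    descriptors = np.array(descriptors)  # one k-dim random descriptor per corner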
I'm not sure this is really how humans do it. Actually, I'm not even sure humans recognise all types of images in the same way. It seems to me that humans use logic in some situations and lower-level features in others, or maybe both to some degree.
For instance, imagine a view from a car onto a dark country road ahead, lined with trees. The road bends so you can't see around the corner, but you see a light getting brighter. Humans would conclude that there must be an oncoming car and they would focus on where exactly the headlights appear relative to the middle of the road.
Nothing else in the picture would matter much. The one thing that does matter cannot be concluded from the picture itself. So humans can compress that image into two points (the headlights) and a line representing the middle of the road. Pretty good dimensionality reduction, I would say.
Machine learning systems can obviously be trained to do something very similar. But my point is that if you take away the situational context necessary for focus, what's left may not tell us much about how humans process images in most cases.
Also, how many examples of car crashes does a machine learning system need in order to recognise where the headlights should definitely not be? How many head-on car crashes do humans need in order to learn from their mistakes? But that's a wholly different subject.
Please don't take this as a criticism of the paper. I haven't (fully) read it and I know next to nothing about random projection. I'm just generally wondering how lower-level features and high-level reasoning are interconnected. It doesn't seem to be a one-way street (which doesn't necessarily mean they're on a collision course, though).
Yeah, I agree with a lot of this, especially the stuff about the importance of context.
The context of the article for me is that it's from a CS department, yet the researchers ran some cog-sci-like experiments with humans, and I thought that, as a methodology, is something interesting and valuable. A lot of CS people will say flat out that they've intentionally put blinders on regarding results about the human mind/brain, which makes sense on one level because it's a lot to think about. On the other hand, how does one even define intelligence without biology? To say "intelligence is what people do" means you need to know, quantifiably, what that is, even if you don't care how it's implemented.
Now, it's generally understood, and I think this is your critique if any, that we're a long way from answering such questions. One should take most descriptions of human behavior as rough, first-order approximations.
The point here is that dimensionality reduction does not need to be done by inferring higher-level concepts like headlights. It's common knowledge that there are scenarios in which humans have easy models at hand.
You can randomly throw stuff away! That's the new thing here.
Of course it's a very rough setup. Still, that the brain performs random projections is an interesting hypothesis, and they actually compared it against human subjects, which is a plus.
Compared to model-heavy or deterministic scenarios with "boring" dimensionality reduction techniques, random projections might also explain secondary effects, such as better resilience against overfitting.
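To make the "randomly throw stuff away" point concrete: even the crudest scheme, keeping a random subset of coordinates and discarding the rest, roughly preserves pairwise distances. A toy check (sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 5_000))              # 500 points, 5,000 dims

    keep = rng.choice(5_000, size=500, replace=False)  # keep a random 10%
    X_sub = X[:, keep] * np.sqrt(5_000 / 500)          # rescale to preserve norms

    # Compare a few pairwise distances before and after throwing stuff away.
    for i, j in [(0, 1), (2, 3), (4, 5)]:
        print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(X_sub[i] - X_sub[j]))

No headlights, no models, and the geometry mostly survives.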
> The point here is that dimensionality reduction does not need to be done by inferring higher-level concepts like headlights.
I get that, and it is a very useful result.
But what I was wondering is whether humans can ever "switch off" high-level reasoning when they perceive sensory information. Even when the image is supposedly abstract, humans may be making stuff up, and that might influence perception and dimensionality reduction, similar to what happens in a Rorschach test.
And that's why I don't believe that what the paper shows is actually "How the Brain Can Handle So Much Data" -- the title of the linked article.
Here's a good video explaining random projection. It's basically taking high dimensional data and reducing the dimensionality so that it's faster to process. I don't have any background in statistics but it was still intuitive to follow this talk.
I skimmed through the OP article; it wasn't immediately obvious how they deduced that the brain does something similar, but I would have thought a better understanding of actual neural processing at the biological level is required.
That's not to say neurons/brains don't reduce data complexity to process it fast; I'm sure they do. But exactly how that happens probably matters too. Nature has had a long time to work this out.
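On the "faster to process" point from the video: scikit-learn ships this directly, if anyone wants to play with it. A quick sketch (the sizes are arbitrary):

    import numpy as np
    from sklearn.random_projection import (
        GaussianRandomProjection,
        johnson_lindenstrauss_min_dim,
    )

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1_000, 20_000))  # high-dimensional data

    # Minimum target dimension preserving pairwise distances within 10%.
    k = johnson_lindenstrauss_min_dim(n_samples=1_000, eps=0.1)
    print(k)  # a few thousand, independent of the 20,000 input dimensions

    proj = GaussianRandomProjection(n_components=k, random_state=0)
    X_low = proj.fit_transform(X)  # cheaper for any downstream processing
    print(X.shape, "->", X_low.shape)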
"We extracted small patches from images, just like they do in neural networks research. Then we used neural networks and humans to identify the images patches were drawn from. Because humans can do this and neural networks can too humans must work like neural networks."
Very difficult to distill more info out of this!