> As far as we know the brain is just a "linear algebra blackbox"... Likely they use similar principles.
I'm not an expert, but my impression is that this isn't really a reasonable claim unless you're only considering very small, function-like subsystems of the brain (e.g. the visual cortex). Neural nets (of the nonrecurrent sort) are strict feed-forward function approximators. The brain, by contrast, appears to be a big mess of feedback loops that is capable of (sloppily, and with much grumbling) modeling any algorithm you could want, and, importantly, of adding small recursions/loops to the architecture as needed, rather than a) unrolling them all into nonrecursive operations (like a feedforward net) or b) building them all into one central singly-nested loop (like an RNN).
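To make the contrast concrete, here's a toy numpy sketch (my own illustration with made-up weights, not a claim about biology): the feed-forward net runs the same fixed-depth computation on every input, while the recurrent system feeds its state back into itself and loops for as long as the input demands.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
Wr = 0.5 * rng.normal(size=(3, 3))

def feedforward(x):
    # Fixed-depth computation: the same graph runs for every input,
    # with any "loops" already unrolled into static layers.
    h = np.tanh(W1 @ x)
    return W2 @ h

def recurrent(x, max_steps=100, tol=1e-6):
    # Data-dependent looping: the state is revisited until it settles;
    # the number of iterations is not fixed in advance, unlike a
    # feed-forward (or fully unrolled) architecture.
    h = np.zeros_like(x)
    for _ in range(max_steps):
        h_next = np.tanh(Wr @ h + x)
        if np.linalg.norm(h_next - h) < tol:
            break
        h = h_next
    return h

x = np.ones(3)
print(feedforward(x))  # always exactly two matmuls deep
print(recurrent(x))    # as many steps as convergence requires
```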
The brain definitely seems to use something backprop-like, in that it identifies pathways responsible for negative outcomes and penalizes them. But brains also seem to make efficiency improvements very aggressively (see: muscle memory, chunking, and other markers of proficiency), even in the absence of any external reward signal, which is something we don't really have a good analogue for in ANNs.
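For what it's worth, the "identify responsible pathways and penalize them" part is exactly what the chain rule gives you in an ANN. A minimal hand-rolled sketch (toy numbers of my own choosing, again no claim about biology):

```python
import numpy as np

rng = np.random.default_rng(1)
w1 = rng.normal(size=3)
w2 = rng.normal(size=3)
x = np.array([1.0, 0.5, -0.5])
target = 0.0

for _ in range(100):
    h = np.tanh(w1 * x)        # three independent "pathways"
    y = w2 @ h                 # combined output
    loss = (y - target) ** 2   # the "negative outcome"
    # Chain rule: blame for the loss flows backward, so each weight
    # is penalized in proportion to how much it contributed.
    dy = 2.0 * (y - target)
    dw2 = dy * h
    dw1 = dy * w2 * (1.0 - h ** 2) * x
    w2 -= 0.1 * dw2
    w1 -= 0.1 * dw1

print(f"final loss: {loss:.6f}")  # driven toward zero
```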
There are some parts of the brain we have no clue about, such as episodic memory or our higher-level ability to reason. But most of the brain is just doing low-level pattern matching, much like what NNs do.
The constraints you mention aren't deal-breakers. We can train RNNs without maintaining a global state and fully unrolling the loop; see synthetic gradients, for instance. And NNs can do unsupervised learning as well, through things like autoencoders (sketched below).
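As a minimal illustration of the unsupervised point: an autoencoder's training target is its own input, so no external label or reward signal is involved. A toy linear autoencoder in numpy (illustrative dimensions and learning rate, not a recommended setup):

```python
import numpy as np

rng = np.random.default_rng(2)
# Correlated 8-dimensional data with no labels attached.
X = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))

W_enc = 0.1 * rng.normal(size=(8, 3))  # compress 8 dims -> 3
W_dec = 0.1 * rng.normal(size=(3, 8))  # reconstruct 3 dims -> 8

for _ in range(2000):
    Z = X @ W_enc              # latent code
    X_hat = Z @ W_dec          # reconstruction
    err = X_hat - X
    loss = (err ** 2).mean()
    # Gradient of the mean squared reconstruction error; the training
    # signal is the input itself, not an external reward.
    g = 2.0 * err / err.size
    dW_dec = Z.T @ g
    dW_enc = X.T @ (g @ W_dec.T)
    W_dec -= 0.1 * dW_dec
    W_enc -= 0.1 * dW_enc

print(f"reconstruction MSE: {loss:.4f}")
```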