It isn't anthropomorphizing. There are undeniable architectural similarities between ANNs and biological neural networks. We don't understand either very well yet, but the parts we do understand have led to a lot of cross-pollination. I don't think computational intelligence will ever match biological networks detail by detail, given the different substrates and resource-usage tradeoffs, and it doesn't need to. Intelligence can develop in different ways, and we are learning about the universal aspects of it.
This is exactly my point - the danger of "anthropomorphization" lies in taking the brain analogy too far. That is, there shouldn't necessarily be a link between research in neuroscience and advances that make deep learning models more accurate. The tasks are completely different (human learning vs. minimizing a loss function), and it's important for researchers in both fields - neuroscience and AI - to keep that in mind.
However, there definitely are analogies! E.g. early work on convnets was inspired by the architecture of the cat visual cortex.
I think the fields have useful things to say to each other, but we're still getting over a (maybe justified) taboo against talking about machine learning methods as biologically inspired.
1) Hubel and Wiesel discovered simple and complex cells in cat V1 in the '60s. They came up with an ad hoc explanation that the complex cells somehow "pool" over many simple cells of the same orientation. To this day, no one knows how such pooling would be accomplished (i.e., how it would select exactly the simple cells of similar orientation but different phase, and not vice versa), or whether such pooling happens only in V1 or elsewhere in the cortex as well.
2) Fukushima expanded that ad hoc model into the neocognitron in the '80s, even though there is exactly zero evidence for similar "pooling" in higher cortical areas. In fact, higher cortical areas are essentially impossible to disentangle and characterize even today.
3) Yann LeCun took the neocognitron and made a convnet, which worked OK on MNIST-style digit recognition in the late '80s. Afterward the thing was forgotten for many years.
4) A few years ago, Hinton and some dude who could write good GPU code (Alex Krizhevsky) took the convnet and won ImageNet. That is when the current wave of "AI" started.
In summary, convnets are very loosely based on an ad hoc explanation of Hubel and Wiesel's findings in primary visual cortex, an explanation that neuroscience today regards as "incomplete" to say the least (more likely completely wrong). The stuff works to a degree, but the biological inspiration really is minimal.
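To make the loose analogy concrete: what survives of the Hubel and Wiesel story in a modern convnet is just convolution (oriented filters playing the role of simple cells) followed by max pooling (complex-cell-like invariance over position). A toy numpy sketch - the filter, shapes, and pooling size here are my own illustrative choices, not LeCun's actual architecture:

```python
import numpy as np

# "Simple cells": an oriented filter convolved across the image.
# "Complex cells": max pooling over a local neighborhood - the ad hoc
# "pooling among simple cells of the same orientation" made literal.
# Filter and shapes are illustrative only.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.rand(8, 8)
vertical_edge = np.array([[1., -1.], [1., -1.]])  # a crude "oriented" filter
simple = conv2d(image, vertical_edge)             # simple-cell-like response map
complex_ = max_pool(simple)                       # complex-cell-like invariance
print(simple.shape, complex_.shape)               # (7, 7) (3, 3)
```

That's more or less the entire biological inheritance; the rest of a modern convnet's machinery (backprop, ReLUs, normalization) has no counterpart in the Hubel and Wiesel story.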
For the analogy to hold, the question is whether ML algorithms actually operate in the same way the brain does. Right now, ML models are trained with algorithms from continuous optimization that require certain structure. Namely, we require a Hilbert space, so that we can define things like derivatives and gradients. This puts requirements on the kinds of functions we can minimize and the kinds of spaces we can work with, and those requirements are difficult to map onto precise analogies in biology. What does it mean to have an inner product in the brain? What does twice continuously differentiable mean in the context of a neuron? Even if there is a minimization principle, and I'm not sure whether there is or isn't, if ML uses algorithms that are fundamentally not realizable in biology, how can we say it replicates the brain?
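To make "certain structure" concrete: gradient descent is only defined when the loss is differentiable on a space with an inner product, and every step consumes a gradient. A toy sketch, where the loss, starting point, and step size are arbitrary choices of mine:

```python
import numpy as np

# Gradient descent presupposes a differentiable loss on a vector space
# with an inner product: the gradient is *defined* by <grad f(x), v> = Df(x)[v].
# The loss and step size below are arbitrary illustrations.

def loss(x):
    return np.dot(x, x) + np.sin(x[0])  # smooth, so gradients exist everywhere

def grad(x):
    g = 2.0 * x
    g[0] += np.cos(x[0])
    return g

x = np.array([2.0, -1.5])
for _ in range(200):
    x -= 0.1 * grad(x)                  # every step consumes a gradient
print(x, loss(x))
```

Replace the smooth loss with something like np.abs and the update is undefined at the kink - exactly the kind of structural requirement that has no obvious neural counterpart.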
Based on what goes on in every cell in our bodies when it comes to the information processing involved with DNA, I don't think there is any algorithm that is fundamentally not realizable in biology. I'll grant you, I don't think biological neurons are calculating derivatives across connection strengths, but there must be some analogous process that controls those strengths.
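For a sense of what such an analogous process could look like (my example, not a claim about actual neurobiology), here is a purely local Hebbian-style rule: each weight changes based only on the activity of the two neurons it connects, with no derivatives propagated across the network. Rates and sizes are illustrative:

```python
import numpy as np

# A local Hebbian update: the change in a weight depends only on the
# activity of its pre- and post-synaptic neurons, not on any global
# gradient. Rates and sizes are illustrative.

rng = np.random.default_rng(0)
pre = rng.random(5)            # presynaptic activity
w = rng.random(5) * 0.1        # connection strengths
lr = 0.01

for _ in range(100):
    post = np.dot(w, pre)      # postsynaptic activity
    w += lr * post * pre       # "fire together, wire together"
    w /= np.linalg.norm(w)     # normalization keeps weights bounded
print(w)
```

The normalization step is an Oja-style stabilizer; without it the weights grow without bound. The point is only that weight control doesn't have to look like differentiation.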
That may very well be, and I think it's a fantastic area to do research on. Namely, can we accurately model the body with an algorithmic process, and what does that process look like? However, unless ML directly mirrors the algorithms used by the body, the models used by the body, and the misfit function used by the body - which together already assumes that the body really does operate on a strict minimization principle - then I contend it's improper to anthropomorphize the algorithms. They're good algorithms, but a better name would be empirical modeling, since we're creating models from empirical data.
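"Empirical modeling" in that sense is easy to state concretely: choose a model family, choose a misfit function, and fit it to observed data. A minimal least-squares sketch, with synthetic data purely for illustration:

```python
import numpy as np

# Empirical modeling in the sense above: choose a model family (linear),
# choose a misfit (sum of squared residuals), and fit to observed data.
# The data here are synthetic, purely for illustration.

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 3.0 * x + 0.5 + 0.05 * rng.standard_normal(20)  # "empirical" observations

A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # minimizes the misfit
print(coef)                                         # ~ [3.0, 0.5]
```

No claim about brains anywhere in that pipeline - which is exactly the point of calling it empirical modeling rather than anything anthropomorphic.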