Hi there, I'm a member of the team at Google responsible for this dataset.
I don't see an artifact at exactly those coordinates, but is this nearby ___location what you are referring to:
tinyurl.com/4rpas73s
That appears to be a knife mark. This dataset was assembled from over 5000 tissue sections each cut at a thickness of just 30 nanometers using a device called an ultramicrotome. Tiny imperfections can develop in the diamond knife used by the ultramicrotome, which unfortunately can result in artifacts like that. Fortunately such artifacts are unlikely to affect the same ___location on multiple consecutive sections, and our segmentation algorithms are robust enough to segment through those locations.
By the way, one cool thing about the viewer, neuroglancer, is that you can just copy the URL to link directly to any particular view.
This dataset was collected using physical ~33nm-thick sections that were cut with a diamond knife and stored (and preserved) on a carbon tape before imaging with a multi-beam scanning electron microscope.
The latter. The tissue block is sliced first in the ATUM which collects the sections on tape. These are then mounted on silicon wafers, and imaged with the MultiSEM.
So you would need about a million of these datasets to fully image the brain, or a zettabyte. This is about as much information as traveled through the Internet in 2015 [1].
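Just to sanity-check that arithmetic (a rough sketch; the ~1.4 PB figure for this ~1 mm³ dataset and the ~1.2 million mm³ brain volume are my assumptions, not from the comment above):

```python
# Back-of-the-envelope scale check.
dataset_bytes = 1.4e15      # assumed ~1.4 petabytes for one cubic millimeter
brain_volume_mm3 = 1.2e6    # assumed ~1.2 liters of human brain tissue
total = dataset_bytes * brain_volume_mm3
print(f"{total / 1e21:.1f} zettabytes")  # on the order of a zettabyte
```

So "about a million of these datasets" really does land in zettabyte territory.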
> The dataset comprises imaging data that covers roughly one cubic millimeter of brain tissue, and includes tens of thousands of reconstructed neurons, millions of neuron fragments, 130 million annotated synapses, 104 proofread cells, and many additional subcellular annotations and structures.
On one hand, it would be interesting to know what percentage of those 130 million synapses changes on a second-by-second basis. On the other hand, some believe and argue that we can approximate human-level dynamic behavior using GPT-like models in about 10 years :)
Yeah, modern CPUs already have hundreds of millions of logic gates and can do calculations much faster than the human brain, yet we need a ridiculous number of code-hours to display some stuff on a screen in a way that barely works.
My 100% baseless opinion is that animal brains are fundamentally different from computers, although they might share some characteristics (like, we can do maths right?).
I think much of our understanding of brains is biased by beginning with studying neurons before fully understanding how single cells are able to make complex decisions. Single celled organisms can do incredible things and understanding how they do that using molecular machinery is probably the key to understanding how large networks of them can produce macro scale behaviour by communicating with each other.
Great point. There is ongoing research into whether memory is necessarily a network effect or if single neurons can hold memories. It's looking like they can.
There are some very sophisticated behaviors displayed by single-celled or low-cell-count micro-organisms. I find it very challenging to fit them into the accepted framework of how cognition works. I think there are some order-of-magnitude issues we are missing, probably due to the technology we have available.
Your baseless opinion is 100% correct. Biological brains are not Turing machines and do not have a Harvard or even a Von Neumann architecture.
If I understand it correctly, biological brains are essentially associative memory machines where memory and computation are intertwined into one physical structural system.
In short: To think is to remember. And to remember is to think.
> My 100% baseless opinion is that animal brains are fundamentally different from computers
You are correct in that opinion.
I’m no specialist in the field, but FWIW the key difference between a neural circuit and a processor is that the former is specialized and analog while the latter is programmable and digital.
Your brain is not running any software; instead it's hardwired to process series of electric signals ("spike trains"). Neurons can be used to model logic gates, but nature uses a different approach: there are "dumb" neurons that just transmit signals, and then there are "functional" neurons that emit a signal only if certain conditions have been met, be it the right frequency of electric signals in a spike train, the current chemical state of the neuron, or the number of signals received from other neurons at a given time.
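That "fire only when conditions are met" behavior is roughly what leaky integrate-and-fire models capture. A toy sketch (my own illustration, not a biophysical model; all names and parameters here are hypothetical):

```python
def lif_run(input_spikes, threshold=3.0, leak=0.7):
    """Toy leaky integrate-and-fire neuron.

    input_spikes: incoming spike counts per time step.
    Returns the time steps at which the neuron fires.
    """
    v = 0.0
    out = []
    for t, n_inputs in enumerate(input_spikes):
        v = v * leak + n_inputs  # old charge leaks away, new input accumulates
        if v >= threshold:       # "condition met": enough recent input
            out.append(t)
            v = 0.0              # reset membrane potential after firing
    return out

# A slow trickle of inputs leaks away; a burst crosses the threshold.
print(lif_run([1, 1, 1, 0, 0, 4, 0]))  # fires only at step 5
```

Note how the output depends on input *timing*, not just the total count, which is part of why the analog/digital comparison above is tricky.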
But their lifespan is way shorter and their numbers are way higher, which means way more feedback from Darwinian evolution.
To me that means their 'intelligent' behavior should be much more a function of being hardcoded in the neuron layout at the genetic level than of emerging from a generic intelligent meat processor.
I honestly don't know the answer to this, but how large would it be if we tried to do the same thing to a modern CPU? Map our way backwards from its structure instead of the underlying logic?
CPUs are much much more (understandably to us?) structured than the brain. We still don't know if there are repeated canonical cortical micro-circuits. Seems like there should be, but it's still an open research question.
I would assume so as well, but the question is whether, if we analyzed a CPU the same way, by "building a connectome" of it, the map would also end up extremely large, at least larger than the amount of data actually needed to understand or replicate it.
My core point is that the size of this dataset is amazing but possibly unrelated to how close or how far we are from understanding the brain's behavior.
AFAICT this is a static picture. It doesn't show how synapses change. But merely knowing what is connected to what, at this fine scale, should lead to new insights.
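Even a static picture is a huge directed multigraph you can ask questions of. A hypothetical sketch, treating a connectome as a plain list of (presynaptic, postsynaptic) neuron pairs (toy data; real releases ship billions of edges plus geometry):

```python
from collections import Counter

# Toy connectome: one entry per synapse, as (pre_neuron, post_neuron) pairs.
synapses = [("a", "b"), ("a", "b"), ("a", "c"), ("b", "c")]

# How many synapses does each ordered pair of neurons share?
pair_counts = Counter(synapses)
print(pair_counts[("a", "b")])  # 2 synapses from a onto b

# How many distinct downstream partners does each neuron have?
partners = {}
for pre, post in synapses:
    partners.setdefault(pre, set()).add(post)
print(len(partners["a"]))  # a contacts 2 distinct neurons
```

Multi-synapse connections like the a→b pair here are exactly the kind of structural detail that only shows up at this resolution.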
Really excited to see this. I've been following the Lichtman Lab's work for a while now.
We need this for every tissue! This should be some kind of X-prize. I want sub-cellular 3d models of all the tissues. If we look closely enough, we might discover a thing or two.
It's like they tried to make the awesome data you came to see hard to get to, or even to find out whether it really exists. There are 20+ links in that release with ZERO direction as to which one is the headline data I came to see.
You're 5 links deep before you even get to anything headline-relevant.
I immediately went back to Neal Stephenson's "Fall; or, Dodge in Hell" novel. As with so many of his novels, he has extended from the edge of research to ... well, something more, but I don't want to give away too much of the plot.
The Thousand Brains reference made me immediately think of Minsky's "Society of Mind."[1][2] Minsky's work was quite an insightful perspective on intelligence.
For example, at x, y, z = 234004, 246203, 2436 (you can manually enter these in the corresponding coordinate boxes on the top left).
Edit: should have added that the above coordinates are for the online dataset browser [0]
[0] https://h01-release-dot-neuroglancer-demo.appspot.com/#!gs:/...