The original title is "Updated imaging and phylogenetic comparative methods reassess relative temporal lobe size in anthropoids and modern humans", which, to fit HN's title limit, could maybe be condensed to "Reassessing temporal lobe size in humans and anthropoids with updated methods".
Submitters: "Please use the original title, unless it is misleading or linkbait; don't editorialize" (https://news.ycombinator.com/newsguidelines.html) If a title doesn't fit HN's 80 char limit, please shorten it in a way that preserves the meaning of the original title. If you do need to change it because the original title was misleading or linkbait (not the case with this submission), please do so using representative language from the article itself (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
(Submitted title was "Newer data once again shows: the human brain is just a scaled up primate brain".)
Provide an alternative title for "Updated imaging and phylogenetic comparative methods reassess relative temporal lobe size in anthropoids and modern humans" so that laypeople would understand.
"Reevaluating Brain Size in Humans and Primates: New Techniques Show Surprising Results"
Without clickbait
"How Modern Imaging and Comparative Analysis are Helping us Understand the Evolution of Brain Size in Humans and Primates"
What is wrong with laypeople learning the words "phylogenetic" and "anthropoid"?
I am not a biologist, and my knowledge of the field is limited to what I learned in high school plus later self-study out of curiosity. I did know what "phylogenetic" refers to, but I do not remember ever encountering the word "anthropoid". Yet I was perfectly able to infer what it means from the meaning of the base and suffix.
If I don't know a word I look it up. Have people stopped doing this?
There is nothing wrong with laypeople learning words. But if you use words that laypeople don't know, it's not a good layman's definition. You seem to be under the impression that "layman's terms" means laypeople are never capable of, or expected to be capable of, understanding such words.
Fair, but we were not discussing providing definitions in layman's terms. We were discussing word choice when reformulating titles of articles posted on HN. I expect people on HN to be able to deal with a title containing an unfamiliar word and to seek either a precise or a layman's-terms definition when needed.
> Provide an alternative title for "Updated imaging and phylogenetic comparative methods reassess relative temporal lobe size in anthropoids and modern humans" so that laypeople would understand.
>If I don't know a word I look it up. Have people stopped doing this?
You should feel lucky that you are surrounded by curious people, because it's not the norm for the world, though it's obviously a great thing. The process you describe is practiced by only a small minority of people.
Not wrong, in the sense that in practice most adults are not curious about 99% of the topics out there.
Wrong, in the sense that humans are naturally curious. I've seen it firsthand with kids.
It seems to me that what happens through childhood is that not only are neural pathways pruned for efficiency, but curiosity is actively stifled and arguably totally destroyed in most kids, by adults. Coupled with adulthood time/money pressures, I'm not surprised that a large number of adults become completely intellectually inert, or worse, fully anti-intellectual.
I would argue that it should not be a minority of people for a community (HN) which values curiosity as claimed by the guidelines:
"the primary use of the site should be for curiosity"
"On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity. "
I think on a higher level, the ability to fast might be the most important trait here (combined with the capability to innovate). At least, Robinson Crusoe-style economics leads me to believe that one of the important things for a civilization to advance is the ability to save and invest. So (simplified): a human can catch some fish, take some time off from the day-to-day grind of catching fish, and try to build some device to catch even more fish (basically investing the initial fish). The ability to fast lets this process be forced along even further (you can still gamble on building a better net even if you didn't catch enough fish in the first place). Rinse and repeat in other areas, and slowly "wealth" is built up. The longer you can go on "saving" without being forced to do something else, the more you can advance, and those are exponential gains.
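To make the compounding point concrete, here is a toy sketch in Python. All the numbers (HAND_CATCH, NET_BONUS, and so on) are invented purely for illustration; this is a crude linear model, not anything from the economics literature:

    # Toy Robinson Crusoe model: each of the first invest_days is spent
    # improving the net instead of fishing, which raises the daily catch
    # for all remaining days. Fasting is what lets you "afford" those
    # investment days up front.

    DAYS = 30            # length of the season
    HAND_CATCH = 2.0     # fish per day with bare hands
    NET_BONUS = 0.5      # extra fish per day, per day invested in the net

    def total_catch(invest_days: int) -> float:
        """Total fish caught over DAYS if the first invest_days
        are spent building a better net instead of fishing."""
        rate = HAND_CATCH + NET_BONUS * invest_days
        return rate * (DAYS - invest_days)

    for d in (0, 5, 10, 15):
        print(f"invest {d:2d} days -> {total_catch(d):6.1f} fish")

Even in this crude model the best strategy spends a big chunk of days not fishing at all, and the ability to go hungry through those days is exactly what makes the gamble possible.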
I'd be very curious whether there are any studies on time preference across species. Obviously it varies even among humans (cue the children-and-candy delayed-gratification experiment here). My hypothesis would be that the ability to delay gratification is quite vital for the development of a species. On a completely tangential note, I'd also be curious how it relates to procrastination. Anecdotally, I feel like people who need everything "right now" struggle a lot less with procrastination.
> Squirrels, ants and bees can build wealth as well.
Investing isn't just storing "wealth" (à la a store of food). Investing is using the time that such wealth affords you to invent new tools, to make more efficient methods/machines for creating wealth! Animals are, currently, missing this part.
Mammals? Some do. Most other animals I can think of right now? Nope: a snake just does its "pre-programmed" thing, the same way a spider creates its web. Jellyfish are barely alive, and fish are also not particularly intelligent.
People really did believe even just a few hundred years ago that animals were just “animatronics”, and let’s be honest, plenty of animals don’t disprove it too well.
So long as you mean "after several hours the horse will be tired and I'll still be going", probably yes. I'm not sure exactly what range a horse has in a day before exhaustion, but I did walk (the equivalent of) a marathon with absolutely no prior training and not even bothering to sort out optimal shoes.
Domesticated horses have humans pay their rent. My boss's wife does horse riding and has her own horse (rich people things). The cost of vet visits, rent and upkeep in a stable is pretty insane.
Surely the point of the exercise is to find a horse, and the biggest difficulty is what you do with it when you actually catch up.
And now consider this from the perspective of a wild horse: a strange being approaches, and your instinctual neophobia tells you to flee. At a safe distance, you stop, only for the being to arrive again and the cycle to continue. You sleep, only to be awoken by its approach. Finally, too exhausted to run any further, you wait for the inevitable… only for the being to feed you a carrot, brush your mane, and wander off.
In that race, humans won three times, and in several other races the margin was in the low minutes. So I wouldn't say "generally" is true. Overall, humans' advantage seems to be having more endurance than horses, so the longer the race, the more likely they are to win.
If we look at the numbers for humans doing a 24-hour run[1] and compare them with endurance riding[2], humans seem to be able to run far longer, at peak even significantly longer. Though, to be fair, it's possible that the numbers are skewed for the horses' safety, and riders are not going all out to the last possible mile.
Basically, we can. Not everyone, but you can imagine that someone who relied on it for their survival definitely could. In these races the horse gets to subtract hold time. Take a look at this interview, for example, where one of the racers explains this and beat a horse by an hour and 15 minutes: https://www.irunfar.com/catching-up-with-nick-coury
Almost any human trained from childhood would be able to do endurance hunting. Human endurance, thanks to sweating and other traits, is especially advantageous in hot climates: horses can't outrun humans indefinitely, as they overheat and can't cool down as effectively as humans, so eventually a trained human will catch up with a horse (and eat it).
It is actually remarkable how quickly we can go from "untrained, can barely walk down to the shop without a racing heart" to running being barely a problem. (Unfortunately for many, it really is hard to burn more energy than you normally eat, even with quite long runs; we are that energy-efficient.)
But the actual point is that people before the agricultural revolution were more than fit enough for that, and evolutionarily speaking that is no time at all. So, yes: even without being trained from childhood, just by deciding to train hard for it, most people could definitely do that.
People skilled in persistence hunting [0] track their prey even if it runs away from their sight. It still gets exhausted / overheated faster than the pursuing human hunters.
I'm not sure about wild horses, but I have spent some time observing moose, which seem like a similar design. They don't seem to have much of a sense of purpose. So once scared into running, I don't think they would run very far before stopping to browse on some berries or nice grass.
And hunter-gatherers even to this day are absolutely great at that, plus we could, and can, pass the knowledge down generations. We literally hunted many of the megafauna to extinction.
Most people's internal alarm for self-preservation starts yelling long before endurance is truly tested. Nobody sprints for 24 hours, but I am certain that even the average couch potato can keep walking for 24 hours. They'd be hurting after, of course. From learning to jog, I can attest that it's mostly about willpower and, I suppose, desperation at times.
In death marches, such as in WWII, even many starving prisoners, walking from dawn to dusk, with beatings, lasted for days on the trail.
The Americans and Filipinos on the Bataan Deathmarch are one example:
>The total distance marched from Mariveles to San Fernando and from the Capas Train Station to various camps was 65 miles long.
For the British there was The Burma Rail:
>Camp Nong Pladuk was initially used as a transit camp from where the prisoners were transported or had to walk to work camps along the Burma Railway.
And of course, the Jews and other victims of the Nazis were often force marched.
My great-great-grandmother returned to her Volga German village in Russia after the rise of the Soviets, was arrested, and was sent to Siberia, where she worked in a camp for 7 years until her death from malnutrition and other neglect. And she was a grandmother at the time.
> Most people's internal alarm for self-preservation starts yelling long before endurance is truly tested. Nobody sprints for 24 hours, but I am certain that even the average couch potato can keep walking for 24 hours. They'd be hurting after, of course. From learning to jog, I can attest that it's mostly about willpower and, I suppose, desperation at times.
I'm not disputing this, but the response was that humans can't generally outrun a horse. Which is true. The average human will not be able to outrun the average horse.
> How long does it take to train for marathon?
Most marathon training plans range from 12 to 20 weeks. Beginning marathoners should aim to build their weekly mileage up to 50 miles over the four months leading up to race day.
So even untrained couch potatoes could learn in 4 months to run a marathon. (And indeed many do, and test themselves that way.)
All these are normal and necessary adaptations when switching from a vegetarian diet to animal hunting.
All predators need the ability to fast.
All non-ambush predators, like wolves or hyenas or humans, need adaptations that ensure enough endurance to pursue their prey for hours, without overheating or becoming too tired.
Every time the hardware gets better we compensate with more bloated software. Fragile code, countless bugs, poor patches. We keep adding more and more features that spend more cognition as if it's an unlimited free resource.
From walking to a tree to grab some fruit, to a complex network of farmers, merchants, machines, oil drills, etc.
From fighting with fists over simple basic things to intercontinental ballistic missiles and autonomous drone fights over issues we don't even understand anymore.
The most complex mating dance on this side of the galaxy.
I'm happy we didn't lose our monkey sense of humor[1.4.42]
We are on track to potentially learn there's no deep mysterious secret to human cognition. If and when we crack that, this planet will experience the technological singularity.
I thought the singularity was a crackpot idea not much different from string theory in how it tries to sell itself. We were stuck in the post WWII, post globalization, smartphone incrementalism age. Now we're moving on to something exciting again.
What I'm getting at is that you'll not only see more articles like this, but that collectively as a species we're going to start feeling a whole lot less special.
> collectively as a species we're going to start feeling a whole lot less special.
If that's the case, expect a massive backlash from the spiritual types, which basically means every other person. The centrality of man is essential to the Abrahamic religions that dominate the planet, and other ones too will struggle to come to terms with seeing ghosts in the machine. I suspect organized religions will find ways to officially ostracize machine learning over the next decade. They'll form an alliance with people whose jobs are threatened, and pass laws to smother or ban research and deployment.
I am not a believer in an "AI revolution" or singularity, I think they're still one step up from parlor tricks, and anyway the world can evolve as fast as it can devolve (a nuke or two and we're in the ancient world again), but it feels inevitable that we're going to experience some significant neoluddite movement very, very soon.
> The centrality of man is essential to the Abrahamic religions that dominate the planet
Do they really dominate? They're big and all, but both economically (and to an even greater extent by population) China + Japan + India are also pretty substantial, and not predominantly Abrahamic. And even in the west, religiosity is in decline.
> inevitable that we're going to experience some significant neoluddite movement very, very soon.
Absolutely agree, the discourse from those artists who vehemently dislike AI art speaks to this.
How powerful they are, I do not know. But they are there, they don't like what they see, and they have already turned the counter-arguments into bingo cards.
> I am not a believer in an "AI revolution" or singularity
Check back in 12 months.
I bet my entire life -- not just the sum total of my earnings -- that this is the moment humans duplicate the spark that makes us who we are. And from there, God only knows what is possible.
The problem is that we don't even know what might be there to understand. Since it is pretty much unknown how humans think (from a brain processing perspective), we also don't really know if the things we see in LLMs are just dumber versions of human thought processes or if they are qualitatively different.
LLMs literally can't compute any complex algorithm: they can't simulate different moves in chess to see which line is best, nor execute some non-trivial algorithm.
They are nowhere near close to human intelligence.
We also have a good idea of how individual neurons work, but struggle a lot with understanding brains. Both LLMs and human brains have a lot of emergent behavior, which is why our understanding of the low-level primitives only provides very limited insight.
I have to admit I never got the “99% similar” and “octopi are so smart” thing.
May sound like an ignorant meme, but if they’re really so smart and similar, why can’t they build cities or create cultures or do anything better than just survive?
“For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.”
― Douglas Adams, The Hitchhiker's Guide to the Galaxy
This is what scares me so much about AI research: many assume that more "human-like intelligence" is a good thing. It has been hyper-destructive so far, even if unintentionally, so I can't see it being much different in the future. Useful, yes, but not "good" by default.
The hope I have is that we actually do create something that is "actually" really intelligent in the we like to think of ourselves and not the way we actually are.
> This is what scares me so much about AI research: many assume that more "human-like intelligence" is a good thing. It has been hyper-destructive so far, even if unintentionally, so I can't see it being much different in the future. Useful, yes, but not "good" by default.
It is our imperative for self-preservation that drives destruction. Everything else is just human psychology corollaries of the same thing (aggression, deceit, greed). Human drive for destruction doesn't originate in human intelligence, intelligence just amplifies the ability to destroy.
Arguably this is true for all reproducing mammals (whose intelligence is not comparable to humans). If you trap most mammals in a cage and threaten them, they will destroy whatever comes at them to the best of their ability.
So probably more relevant would be to make AGI without a sense of self, or at least without an imperative to self-preserve or reproduce, rather than just fearing the development of something that happens to match human intelligence.
Are you mad? I say this to grab your attention more firmly!
Do you think unchecked locusts would not eat the planet bare? Goats, with no predator, would do the same, destroying all vegetation.
Cats torture their prey for fun. Hippos attack with little provocation. Even male beavers, during mating season, are deadly and attack without cause.
Everything on this planet, absent predation, expands endlessly. Apex predators only die off due to starvation; see prey/predator population cycles for more info.
Humans are perhaps the most benevolent species, for we actually try to reduce our impact!
Heck, even vegetation cares not for anything but itself. Vegetation changed the entire atmosphere of the planet!
By violence, I don't just mean physical violence. Humanity's emotional violence is astounding. Also, a lot of those animals you mentioned do those things for survival. Humanity is discontented and will often reach for violence that is otherwise not necessary. Animals in the wild will often avoid conflict at all costs.
How do you square "humans try to reduce their impact" with the fact that, exactly like vegetation, we're changing the entire atmosphere of the planet? And getting plastic everywhere from Everest to Mariana Trench, to boot?
It might just be a threshold thing; that the brain just crossed a size threshold where it could actually make useful connections that led to where we are now.
Similar to how GPT finally crossed a size threshold where its responses are actually useful to us now rather than just random words. The models are just bigger.
It’s worth noting that a human living in caves and hunting 20k years ago would’ve been perfectly capable of being a modern-day software developer hanging out on the internet. So it’s not even the cities part that matters, it’s something more fundamental, something that must be impossible to achieve without that extra 1%.
I suddenly had this image of a CV by a hunter-gatherer looking for a dev job: "Dev looking for a job. Skills: able to work in small tribes; likes to hunt mammoths, bears, and small rodents for teammates. Basic arithmetic: can count to ten, more with help from others."
Octopi die at around 3 years old (1-5 years) from a mutation that closes their digestive tract upon sexual maturity; genetic ailments that occur after reproduction do not get weeded out.
A genetically modified octopus that lived far longer may well become something we would have to coexist and collaborate with. Right now we're just taking advantage of children, who may be far more intelligent than our own children.
This seems a different topic. It doesn't seem very difficult to imagine why creating civilisations is evidence of intelligence. Even within our species, we'd call subsisting tribes 5000 years ago "primitive". This is just applying that same reasoning to other species.
I think it is an important distinction to make in every similar discussion - certain animals just have some behavior “hard-coded”. Humans realized the need and the how of building cities/homes.
Technology does not imply intelligence and vice versa. Animals that live in the ocean are highly constrained by their environment. Animals that lack opposable thumbs are constrained by biomechanics. Plenty of animal species have culture.
Good point. It's still not clear whether our way of surviving as a species is more successful than the "old" selection method. For the individual this is also an arbitrary goal, even if some buy it as-is.
Wow these AI people are really shilling for their “scale is all you need” hypothesis eh? Now they are planting this stuff to gaslight us into thinking the same? :)
Looks like it. The linked article does NOT show what HN title does. Even the original title does not imply what the HN title does. The article is about one specific area of the brain, which is not the most obvious difference.
There were bigger models than ChatGPT 3.5 before it was released, and they didn't perform better. The hype isn't built on large parameter counts but on an interactive LLM architecture: ChatGPT and GPT do very different things, but it is ChatGPT that gets the hype despite having essentially the same parameter count.
The parameter count thing appeals to people believing in linear scaling per parameter.
Why am I seeing more of this lately? "just a", "just a meat bag", "just monkeys".
I think it's a degrading and disrespectful way to speak, not so much that I'm offended, but I think it's actually a non-intelligent way to view the world. Also probably not very healthy.
Feels like there is a little bit too much of this self-degrading language used in science and technology circles now.
That is not intentional in the case of the GP [0]. But this disrespectful perspective towards other animals is culturally rooted within us. Take some religions as an example: there is a heaven free of other animals, because they are not deemed worth it.
[0] Which was probably just a take against the nihilism that sometimes occurs in current AI discourse.
For more than two thousand years the lunatics have been running the asylum spewing all kinds of horsefeather metaphysics about spirits, souls, heavens, hells, gods, demons, turtles, triangles [1], and so forth. We will need to remind ourselves for the next 4,000 years at least that down there is just physics, chemistry, voltage-gated information processing with collective emergent properties, functionalities [2], and so forth, to clean and escape our selves, our languages, our behaviour, our conceptual frameworks from all the drivel. After those 4,000 years, yes, it will probably feel as demeaning to call the universe, the only one we've got, Everett be damned [3], just physics.
I recently watched, on the Applied Science channel, how to "identify chemicals with radio frequencies - Nuclear Quadrupole Resonance" [4]: the universe, just as it is, is more impressive than one could ever imagine.
The logos with which they apply the rationalism of the position that as biological creatures we do not hold a special place in the cosmos is not the offensive part. It's the ethos part that they really don't want you looking at.
Human history has always been tumultuous, and now we get to the present day, where we have extreme inequality, world-ending weaponry, a destabilizing global climate, contracting resources, and social upheaval as a result. There's no pretty solution to be found. Now we have the latest zeitgeist of AI, already threatening the economic wellbeing of the multitudes if you believe Altman & co.
The rights of humans throughout history has been predicated on the idea that humans are special somehow, deserving of dignity. Get enough people to really lean into the idea that they're just a meaty LLM and the foundation of human rights crumbles. You can get away with quite a lot of evil if you take care to dehumanize those you want gone.
Part and parcel of the desacralisation of the human being, unfortunately, which is so hot right now, and has been since the start of the twentieth century.
It is, absolutely. Even to the level of socialisation where chatbots start to replace in-person connections.
I truly feel things are about to get a lot worse for most people (i.e. those who don't control the systems), and it won't be pretty. Sadly, I don't think those who do this stuff care one bit. The whole 'did it because we could, without asking whether we should' type of problem.
It might be a reaction to centuries of the opposite position: that humans are divinely created and rule the world with the divine right handed down by God. So now, as the human is picked apart by science and we see we are just primates, monkeys, that tend to form groups and throw our crap at each other, 'just monkeys' seems apt.
IMHO, there could be a tipping point in scaling a "primate" brain. For a given task, once the brain can hold so much knowledge that the unknowns are fewer than the knowns, the advantage grows exponentially, because you can deduce outcomes correctly by eliminating the wrong answers without actually "understanding" the problem itself.
Idk. I think the real tipping point is being able to successfully use external memory, so that you don't have to keep everything in your working memory when reasoning or solving problems.
Yes, humans have been around for 300,000 years, and only in the last 10,000 did humans cross that tipping point to be able to think higher-order things.
Do you have any evidence for this claim? Just because people hadn't started agriculture doesn't mean they weren't as smart as people today. Hell, 300,000 years is nothing from an evolutionary perspective; they were, for all practical purposes, genetically identical.
It was just most probably a nurture thing — there was not as much accumulated knowledge to “bootstrap” a more advanced civilization (or perhaps, no will to do so. Why change when your system works just fine?). It’s not like Einstein could have achieved much back then without “standing on the shoulder of giants” before him.
That is basically what continuous learning is, and so far everything I have seen suggests it is discounted as unimportant, or as something you could easily work around. There is a difference between waiting for the next minor release, which updates the weights based on whatever OpenAI gathered from your chat history, and having the model update its weights as you query it.
This is exactly what happened with large language models: they are quite useless below a certain number of parameters, after which their abilities quickly become impressive (read more here: http://arxiv.org/abs/2001.08361)
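For reference, what that paper (Kaplan et al. 2020) actually fits is a smooth power law relating test loss to non-embedding parameter count. A minimal Python sketch of the formula, using the constants reported in the paper (quoted values, not a reproduction of their experiments):

    # Parameter scaling law from Kaplan et al. 2020 (arXiv:2001.08361):
    # L(N) = (N_c / N) ** alpha_N, a smooth power law in the number of
    # non-embedding parameters N. The constants are the paper's reported
    # fits; this snippet just evaluates the fitted formula.

    ALPHA_N = 0.076   # fitted exponent
    N_C = 8.8e13      # fitted constant, in non-embedding parameters

    def predicted_loss(n_params: float) -> float:
        """Predicted test loss (nats per token) at n_params parameters."""
        return (N_C / n_params) ** ALPHA_N

    for n in (1e6, 1e8, 1e10, 1e12):
        print(f"N = {n:.0e}: predicted loss ~ {predicted_loss(n):.2f}")

Note that the fitted loss curve itself is smooth; the "useless below a certain size" behavior refers to specific downstream abilities appearing abruptly, not to a kink in the loss curve.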
I was merely commenting on the apparent similarity between the phase transition in ability observed in the brain of primates and artificial neural networks, without necessarily equating the capabilities of the two.
On that note, maybe in the future we will discover some neural architectures that are able to "algorithmically think", who knows.
Nah, we've just spent a long fucking time not dying with increasing efficiency.
Distributed autoadversarial algorithms have honed our minds over millions of years.
You may be able to show that something is largely a scaled form, but you're leaving out the mass/volume/energy scaling optimisations that have one hell of an impact.
Feedback loops of minor advantages tend to cascade and then propagate across the entirety of breeding populations.
(Some advantages are more aggressively curbed, such as the sickle cell trait.)
Then it's missing things like FOXP2 - weird shit happens to language capabilities when you break it.
Not only that, but:
> Three amino acid substitutions distinguish the human FOXP2 protein from that found in mice, while two amino acid substitutions distinguish the human FOXP2 protein from that found in chimpanzees,[17] but only one of these changes is unique to humans.[10] Evidence from genetically manipulated mice[22] and human neuronal cell models[23] suggests that these changes affect the neural functions of FOXP2.
So there are major mutations that make differences at scales they aren't even looking at.
"HN-ifying the title" should be the opposite of rewriting it to be more clickbait - from https://news.ycombinator.com/newsguidelines.html: "Please use the original title, unless it is misleading or linkbait; don't editorialize."
The submitted title was "Newer data once again shows: the human brain is just a scaled up primate brain". I've HN-ified it now :)
The above is interesting because it highlights the fact that humans do not have truly unique cognitive abilities, and hence must differ from other animals not qualitatively, but rather in the combination and extent of abilities such as theory of mind, imitation and social cognition. Moreover, viewing the human brain as a linearly scaled-up primate brain in its cellular composition does not diminish the role that particular neuroanatomical arrangements may play in human cognition. Rather, such arrangements should contribute to brain function in combination with the large number of neurons in the human brain.
It's speech. It's basically a meme bootloader that allows for cognitive supercharging. Again, it's the software, not the hardware, making all the difference.
I propose a new law - the "Above Law". Similar to Poe's Law, it's sufficiently impossible on the internet to determine if a comment referencing "the above" is referring to a LLM prompt, or is a real human using an unusual way of referencing the content in question.