Reassessing relative temporal lobe size in anthropoids and modern humans (wiley.com)
111 points by doener on March 20, 2023 | 165 comments



Original title is "Updated imaging and phylogenetic comparative methods reassess relative temporal lobe size in anthropoids and modern humans" which to fit HN title submissions could maybe be condensed to "Reassessing temporal lobe size in humans and anthropoids with updated methods"


I've changed it now. Thanks!

Submitters: "Please use the original title, unless it is misleading or linkbait; don't editorialize" (https://news.ycombinator.com/newsguidelines.html) If a title doesn't fit HN's 80 char limit, please shorten it in a way that preserves the meaning of the original title. If you do need to change it because the original title was misleading or linkbait (not the case with this submission), please do so using representative language from the article itself (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).

(Submitted title was "Newer data once again shows: the human brain is just a scaled up primate brain".)


ChatGPT to the rescue:

Provide an alternative title for "Updated imaging and phylogenetic comparative methods reassess relative temporal lobe size in anthropoids and modern humans" so that layman people would understand.

    "Reevaluating Brain Size in Humans and Primates: New Techniques Show Surprising Results"
Without clickbait

    "How Modern Imaging and Comparative Analysis are Helping us Understand the Evolution of Brain Size in Humans and Primates"
:)


Unless we're already living in Idiocracy, the original title was perfectly understandable by a layman. Do not underestimate your fellow humans. :)


Laymen won't be defining anthropoids or phylogenetic...


Neither does ChatGPT, it seems.


Primates seems like a good sub in to me for laypeople.

Phylogenetic seems safe to remove.

Where do you disagree?


Just that it seemed to drop the "phylogenetic" part. The importance of the word depends on what's actually in the article, I suppose.


What is wrong with laypeople learning the words "phylogenetic" and "anthropoid"?

I am not a biologist, and my knowledge of the field is limited to what I learned in high school plus later self-study out of curiosity. I did know what "phylogenetic" refers to, but I do not remember ever encountering the word "anthropoid". Yet I was perfectly able to infer what it means from the meanings of its root and suffix.

If I don't know a word I look it up. Have people stopped doing this?


There is nothing wrong with laypeople learning words. But if you use words that laypeople don't know, it's not a good layman's definition. You seem to be under the impression that "layman's terms" means laypeople could never be expected to understand the words at all.

Ironically, you ought to look up the definition: https://www.merriam-webster.com/dictionary/layman%27s%20term...


Fair, but we were not discussing providing definitions in layman's terms. We were discussing which words to use when reformulating titles of articles posted on HN. I expect people on HN to be able to deal with a title containing an unfamiliar word and to seek either a precise or a layman's-terms definition when needed.


Yes we were:

> Provide an alternative title for "Updated imaging and phylogenetic comparative methods reassess relative temporal lobe size in anthropoids and modern humans" so that layman people would understand.


>If I don't know a word I look it up. Have people stopped doing this?

You should feel lucky that you are surrounded by curious people because it's not the norm for the world - though it's a great thing obviously. The process you describe is only done by an extreme minority of people.

I'd challenge whether most ever did it.


You're both wrong and not-wrong.

Not-wrong in the sense that in practice most adults are not curious about 99% of topics out there.

Wrong in the sense that humans are naturally curious. I've seen it now with kids.

It seems to me that what happens through childhood is that not only are neural pathways pruned for efficiency, but curiosity is actively stifled and arguably totally destroyed in most kids, by adults. Coupled with adulthood time/money pressures, I'm not surprised that a large number of adults become completely intellectually inert, or worse, fully anti-intellectual.


Totally agree here

Humans are by nature curious - but agree that the social systems we've built just ablate curiosity


I would argue that it should not be a minority of people for a community (HN) which values curiosity as claimed by the guidelines:

"the primary use of the site should be for curiosity"

"On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity. "


So are mice, lizards, and so on.

But there are huge differences:

* Primates cannot sweat to self-regulate body temperature through perspiration.

* Primates cannot fast; most humans can go 40 days without eating, while chimps would starve to death after a couple of days.

* Primates cannot run for several hours and outrun a horse!

Scaling up the brain is not a problem, but keeping it supplied with energy is.


I think on a higher level, the ability to fast might be the most important trait here (combined with the capability to innovate). At least, Robinson Crusoe-type economics has me believing that one of the important things for a civilization to advance is the ability to save and invest. So (simplified) the human can catch some fish, take some time off from the day-to-day grind of catching fish, and try to build some device to catch even more fish (basically investing the initial fish). With the ability to fast, this process can be forced along more (you can still gamble on building a better net even if you don't catch enough fish in the first place). Rinse and repeat in other areas, and slowly "wealth" is built up. The longer you can keep "saving" without being forced to do something else, the more you can advance, and that's exponential gains.

I'd be very curious whether there are any studies on time preference across species. Obviously, it varies even among humans (cue the children-with-candy experiment here). My hypothesis would be that the ability to delay gratification is quite vital for the development of a species. On a completely tangential note, I'd also be curious how it relates to "procrastination". Anecdotally, I feel like people who need everything "right now" struggle a lot less with procrastination.

Edit: spelling


Squirrels, ants and bees can build wealth as well.

Humans can build complex societies and use weapons in organized way.


> Squirrels, ants and bees can build wealth as well.

Investing isn't just storing "wealth" (i.e., a store of food). Investing is using the time that such wealth affords you to invent new tools, to make more efficient methods and machines for making more wealth! Animals are, currently, missing this part.


Animals do learn techniques and pass knowledge to their offspring. The only thing is, they don't make tools.


They do make tools (corvids, octopuses, apes).

Learning and passing knowledge is much rarer, I believe, and I can’t really think of any non-mammal example.


Don't they watch their parents hunting and things like that?


Mammals? Some do. Most other animals I can think of right now? Nope; a snake just does its "pre-programmed" thing, the same way a spider creates its web. Jellyfish are barely alive, and fish are also not particularly intelligent.

People really did believe even just a few hundred years ago that animals were just “animatronics”, and let’s be honest, plenty of animals don’t disprove it too well.


Primates, especially our close relatives, can sweat, just not as much as humans. https://www.kamilarlab.org/single-post/2018/02/22/sweaty-pri...

Sounds like humans' sweat capability is also scaled up.


>Primates can not run for several hours and outrun a horse

But can you?


So long as you mean "after several hours the horse will be tired and I'll still be going", probably yes. I'm not sure exactly what range a horse has in a day before exhaustion, but I did walk (the equivalent of) a marathon with absolutely no prior training and not even bothering to sort out optimal shoes.


Hunting by exhaustion. Pick one animal and follow it, night & day, until it finally gives up. Kill it 'n eat it.


I have to pay rent.


Most horses have it even worse.


Domesticated horses have humans pay their rent. My boss's wife does horse riding and has her own horse (rich people things). The cost of vet visits, rent and upkeep in a stable is pretty insane.


If they paid the horse instead of the stable, the horse might make rent.


If only the horse could learn JavaScript


Yeah, finding a horse for this stunt would be hard.


Surely the point of the exercise is to find a horse, and the biggest difficulty is what you do with it when you actually catch up.

And now consider this from the perspective of a wild horse: a strange being approaches, and your instinctual neophobia tells you to flee. At a safe distance, you stop, only for the being to arrive again and the cycle to continue. You sleep, only to be awoken by its approach. Finally, too exhausted to run any further, you wait for the inevitable… only for the being to feed you a carrot, brush your mane, and wander off.


I do not think the horse would just "wander off" after that. It would need veterinary treatment and a long time to recover.

Look up the details of endurance horse races: horses need several breaks and veterinary supervision. And that is just a few miles!


That was the human wandering off, not the horse.


We generally can't outrun a horse[1].

[1] https://en.wikipedia.org/wiki/Man_versus_Horse_Marathon


In that race, humans won three times, and in several other races the margin was in the low minutes. So I wouldn't say "generally" is true. Overall, it seems humans' advantage is having more endurance than horses, so the longer the race, the more likely they are to win.

If we look at the numbers for humans doing a 24-hour run[1] and compare them with endurance riding[2], humans seem to be able to run far longer, at peak even significantly longer. Though, to be fair, it's possible that the numbers are skewed for the horses' safety, and riders are not going all out to the last possible mile.

[1] https://en.wikipedia.org/wiki/24-hour_run [2] https://en.wikipedia.org/wiki/Endurance_riding


Basically, we can. Not everyone, but you can imagine someone who relied on it for their survival definitely could. In these races the horse gets to subtract hold time. Take a look at this interview, for example, where one of the racers explains this and beat a horse by an hour and 15 minutes: https://www.irunfar.com/catching-up-with-nick-coury .

It's also covered in this very nice episode of Radio Lab: https://radiolab.org/episodes/man-against-horse


Marathon is too short for this, try for 24 hours...


Most folks can’t run 5k, let alone run for 24 hours.


Almost any human trained from childhood would be able to do endurance hunting. Human endurance, thanks to sweating and other traits, is especially advantageous in hot climates: horses can't outrun humans indefinitely because they overheat and can't cool down as effectively as humans can, so eventually a trained human will catch up with a horse (and eat it).


The key there being a trained human.


Doing it all your life since childhood counts as training for the sake of this argument.


I don’t know where you live but kids in the UK certainly don’t do much exercise through their childhood


That’s just deliberately missing the point.

It is actually remarkable how quickly we can go from "untrained, can barely walk down to the shop without a racing heart" to running being barely a problem (unfortunately for many, it really is hard to burn more energy than you normally eat, even with quite long runs; we are that energy efficient).

But the actual point is that people before the agricultural revolution were more than fit enough for that, and evolutionarily speaking that is no time at all. So, yes: even without being trained from childhood, just by deciding to train that hard, most people could definitely do it.


They aren't expected to chase their food until it exhausts itself and can be easily killed and eaten.


If we take a person from a prehistorical hunter-gatherer society, you are either a trained and physically fit human, or a dead one.


If a horse runs away to the point where you lose track of it, then it does not matter that you can run for a week.


People skilled in persistence hunting [0] track their prey even if it runs away from their sight. It still gets exhausted / overheated faster than the pursuing human hunters.

[0]: https://en.wikipedia.org/wiki/Persistence_hunting


I'm not sure about wild horses, but I have spent some time observing moose, which seem like a similar design. They don't seem to have much of a sense of purpose. So once scared into running, I don't think they would run very far before stopping to browse on some berries or nice grass.


Unless you have also trained in tracking?


And hunter gatherers even to this day are absolutely great at that, plus we could/can pass the knowledge down generations. We literally hunted down many of the megafauna to extinction.


Most people's internal alarm for self-preservation starts yelling long before endurance is truly tested. Nobody sprints for 24 hours, but I am certain that even the average couch potato can keep walking for 24 hours. They'd be hurting afterwards, of course. From learning to jog, I can attest that it is mostly about willpower and, I suppose, desperation at times.

In death marches, such as in WWII, even many starving prisoners, walking from dawn to dusk, with beatings, lasted for days on the trail.

The Americans and Filipinos on the Bataan Deathmarch are one example:

>The total distance marched from Mariveles to San Fernando and from the Capas Train Station to various camps was 65 miles long.

For the British there was the Burma Railway:

>Camp Nong Pladuk was initially used as a transit camp from where the prisoners were transported or had to walk to work camps along the Burma Railway.

And of course, the Jews and other victims of the Nazis were often force marched.

My great-great-grandmother returned to her Volga German village in Russia after the rise of the Soviets, was arrested, and was sent to Siberia, where she worked in a camp for seven years until her death from malnutrition and other neglect. And she was a grandmother at the time.


> Most people's internal alarm for self preservation starts yelling long before endurance is truly tested. Nobody sprints for 24 hours, but I am certain that even the average couch potato can keep walking for 24 hours. They'd be hurting after, of course. From learning to jog, I can attest that it mostly about willpower, and I suppose, desperation at times.

I'm not disputing this, but the response was that humans can't generally outrun a horse. Which is true. The average human will not be able to outrun the average horse.


> The average human will not be able to outrun the average horse.

The average human can't fix a toilet.

Because we specialise so much, the average human can't do anything, but a trained human can do everything.

Whereas a horse is mostly always just a horse.


Google finds:

> How long does it take to train for marathon? Most marathon training plans range from 12 to 20 weeks. Beginning marathoners should aim to build their weekly mileage up to 50 miles over the four months leading up to race day.

So even untrained couch potatoes could learn in four months to run a marathon. (And indeed many do, and test themselves that way.)


Google is wrong.

Training plans do last that long but jumping into a marathon training block from nothing is a very quick path to being injured.


Not everyone is an obese American on the verge of diabetes!


You don't need to be obese to not be capable of running very far.


No, but we can chase them into traps we dug earlier and hunt them at our leisure.


All these are normal and necessary adaptations when switching from a vegetarian diet to animal hunting.

All predators need the ability to fast.

All non-ambush predators, like wolves or hyenas or humans, need adaptations that ensure enough endurance to pursue their prey for hours, without overheating or becoming too tired.


Just a small correction. Humans are primates.


Every time the hardware gets better we compensate with more bloated software. Fragile code, countless bugs, poor patches. We keep adding more and more features that spend more cognition as if it's an unlimited free resource.

From walking to a tree to grab some fruit, to a complex network of farmers, merchants, machines, oil drills, etc. From fighting with fists over simple, basic things, to intercontinental ballistic missiles and autonomous drone fights over issues we don't even understand anymore. The most complex mating dance on this side of the galaxy.

I'm happy we didn't lose our monkey sense of humor[1.4.42]

[1.4.42] - https://youtu.be/FIxYCDbRGJc


Big brain primate has big brain of primates!

What a blinder of a specious insight.

We don't even understand scaled-up LLMs, let alone brains, enough to make meaningful inferences here.

It's also a disingenuous title for the linked article.


> What a blinder of a specious insight.

We are on track to potentially learn there's no deep mysterious secret to human cognition. If and when we crack that, this planet will experience the technological singularity.

I thought the singularity was a crackpot idea, not much different from string theory in how it tries to sell itself. We were stuck in the post-WWII, post-globalization, smartphone-incrementalism age. Now we're moving on to something exciting again.

What I'm getting at is that you'll not only see more articles like this, but that collectively as a species we're going to start feeling a whole lot less special.


> collectively as a species we're going to start feeling a whole lot less special.

If that's the case, expect a massive backlash from the spiritual types, which basically means every other person. The centrality of man is essential to the Abrahamic religions that dominate the planet, and other ones, too, will struggle to come to terms with seeing ghosts in the machine. I suspect organized religions will find ways to officially ostracize machine learning over the next decade. They'll form an alliance with people whose jobs are threatened, and pass laws to smother or ban research and deployment.

I am not a believer in an "AI revolution" or singularity, I think they're still one step up from parlor tricks, and anyway the world can evolve as fast as it can devolve (a nuke or two and we're in the ancient world again), but it feels inevitable that we're going to experience some significant neoluddite movement very, very soon.


>If that's the case, expect a massive backlash from the spiritual types, which basically means every other person.

Which, regardless of whether god exists (which is irrelevant), sounds very smart. Basically a great evolutionary defense mechanism!

Hopefully the non-spiritual types won't doom themselves and everybody else to AI slavery or annihilation...


> The centrality of man is essential to the Abrahamic religions that dominate the planet

Do they really dominate? They're big and all, but both economically (and to an even greater extent by population) China + Japan + India are also pretty substantial, and not predominantly Abrahamic. And even in the west, religiosity is in decline.

> inevitable that we're going to experience some significant neoluddite movement very, very soon.

Absolutely agree, the discourse from those artists who vehemently dislike AI art speaks to this.

How powerful they are, I do not know. But they are there, they don't like what they see, and they have already turned the counter-arguments into bingo cards.


> I am not a believer in an "AI revolution" or singularity

Check back in 12 months.

I bet my entire life -- not just the sum total of my earnings -- that this is the moment humans duplicate the spark that makes us who we are. And from there, God only knows what is possible.

Every day it improves. And it's not stopping.


>And from there, God only knows what is possible.

A few different kinds of dystopia, mainly.


Will dystopian fiction get worse or better? These are the pressing questions of our time.


Too many doomers in this world. Tilt your head up and stop looking at the ground.


That happy-go-lucky attitude is how people end up in the ground...


Never stop an adversary when he's making a mistake, and all that.


> I bet my entire life

I would hedge that bet.


> We don't even understand scaled up LLM

What’s there to understand?


The problem is that we don't even know what might be there to understand. Since it is pretty much unknown how humans think (from a brain processing perspective), we also don't really know if the things we see in LLMs are just dumber versions of human thought processes or if they are qualitatively different.


LLMs literally can’t compute any complex algorithm: they can’t simulate different lines in chess to work out which one is best, nor execute any non-trivial algorithm.

They are nowhere near close to human intelligence.


We don't understand LLMs, but we certainly understand matrix multiplication.


We also have a good idea of how individual neurons work, but we struggle a lot with understanding brains. Both LLMs and human brains have a lot of emergent behavior, which is why our understanding of the low-level primitives provides only very limited insight.


We still don't have a good lower bound on the complexity of matrix multiplication.


Yes, what is research compared to a facile HN dismissal?


People really need to stop claiming that we don't know how LLMs work.


Last time I checked we shared something like ~99% of our DNA with chimpanzees. Finding differences must be way harder than finding similarities.


While you're correct, don't forget the distinction between phenotype and genotype.

I've found a lot of people use the similarity of the latter to imply the similarity of the former.

Some genes express themselves more than others (often dramatically).


Not really.

Watch: "Are We Really 99% Chimp?" https://youtu.be/IbY122CSC5w


I have to admit I never got the “99% similar” and “octopi are so smart” thing.

May sound like an ignorant meme, but if they’re really so smart and similar, why can’t they build cities or create cultures or do anything better than just survive?


“For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.”

― Douglas Adams, The Hitchhiker's Guide to the Galaxy


This is what scares me so much about AI researchers: many assume that more "human-like intelligence" is a good thing. It's been hyper-destructive so far, even if unintentionally, so I can't see it being much different in the future. Useful, yes; not by default always "good".

The hope I have is that we actually do create something that is "actually" really intelligent in the way we like to think of ourselves, not the way we actually are.


> This is what scares me so much about AI researches, many assume that more "human like intelligence" is a good thing, it's been hyper destructive so far, even if unintentional, so I can't see it being much different into the future. Useful yes, not by default always "good".

It is our imperative for self-preservation that drives destruction. Everything else (aggression, deceit, greed) is just a human-psychology corollary of the same thing. The human drive for destruction doesn't originate in human intelligence; intelligence just amplifies the ability to destroy.

Arguably this is true for all reproducing mammals (whose intelligence is not comparable to humans'). If you trap most mammals in a cage and threaten them, they will destroy whatever comes at them to the best of their ability.

So probably more relevant would be to make AGI without a sense of self, or at least without an imperative to self-preserve or reproduce, rather than just fearing the development of something that happens to match human intelligence.


Homo sapiens is one of the most violent, power-seeking, and control-seeking species ever. I have little hope that what we create will be anything but.


Are you mad? I say this to grab your attention more firmly!

Do you think unchecked locusts would not eat the planet bare? Goats, with no predators, would do the same, destroying all vegetation.

Cats torture their prey for fun. Hippos attack with little provocation. Even male beavers, during mating season, are deadly and attack without cause.

Everything on this planet expands endlessly absent predation. Apex predators only die off due to starvation; see prey/predator population cycles for more info.

Humans are perhaps the most benevolent species, for we actually try to reduce our impact!

Heck, even vegetation cares not for anything but itself. Vegetation changed the entire atmosphere of the planet!


By violence, I don't just mean physical violence. Humanity's emotional violence is astounding. Also, a lot of those animals you mentioned do those things for survival. Humanity is discontented and will often reach for violence that is otherwise not necessary. Animals in the wild will often avoid conflict at all costs.


How do you square "humans try to reduce their impact" with the fact that, exactly like vegetation, we're changing the entire atmosphere of the planet? And getting plastic everywhere from Everest to Mariana Trench, to boot?


Then again, this type of thinking caused dolphins to lose their autonomy to humans.

Humans have the ability to drive dolphins to extinction, while the reverse is not true.

If they were so smart, dolphins should have invested more in defense.


It might just be a threshold thing; that the brain just crossed a size threshold where it could actually make useful connections that led to where we are now.

Similar to how GPT finally crossed a size threshold where its responses are actually useful to us now rather than just random words. The models are just bigger.

It’s worth noting that a human living in caves and hunting 20k years ago would’ve been perfectly capable of being a modern-day software developer hanging out on the internet. So it’s not even the cities part that matters, it’s something more fundamental, something that must be impossible to achieve without that extra 1%.


I suddenly had this image of a CV by a hunter gatherer looking for dev job. “Dev looking for a job. Skills: Able to work in small tribes, likes to hunt Mammoths, Bears and small rodents for team mates. Basic arithmetic : can count to ten, more with help from others.”


“I’m a fast learner”


“Can sprint on demand and am agile. Willing to hunt bugs, when required.”


Perfect :)


Octopi die at around 3 years old (1 to 5 years) from a mutation that closes their digestive tract upon sexual maturity; genetic ailments that occur after reproduction do not get weeded out.

A genetically modified octopus that lived far longer may well become something we would have to coexist and collaborate with. Right now we're just taking advantage of children, who may be far more intelligent than our own children.


Because genes are more complex than "gene A causes trait B".

I stop listening most times people use the "99% similar" figure as the basis for an argument about phenotypes.


Why do you think "building cities/creating cultures" is a "smart thing"?


This seems a different topic. It doesn't seem very difficult to imagine why creating civilisations is evidence of intelligence. Even within our species, we'd call subsisting tribes 5000 years ago "primitive". This is just applying that same reasoning to other species.


Are ants intelligent?

I think it is an important distinction to make in every similar discussion - certain animals just have some behavior “hard-coded”. Humans realized the need and the how of building cities/homes.


Let’s be honest. Most humans just barely survive when taken out of an established society, and that’s with over a decade of public education


The most important survival abilities are friendship, family and cooperation. Those are the things our society sometimes lacks.


Over a decade of public education doesn't give you much to live *in* established society either.


Your cells would also die alone, just like a single ant/bee.

We are social animals to the greatest degree.


Technology does not imply intelligence and vice versa. Animals that live in the ocean are highly constrained by their environment. Animals that lack opposable thumbs are constrained by biomechanics. Plenty of animal species have culture.


Maybe thumbs are really just that awesome.


Maybe building cities isn't important for long term survival


Good point. It's still not clear whether our way of surviving as a species is more successful than the "old" selection method. For the individual this is also an arbitrary goal, even if some buy it as-is.


Lack of fire, perhaps.


You share the majority of your DNA with a banana as well; much of it is just "how to be a good eukaryotic cell".


That 1% is actually quite important.


A glass of water with a stick in is basically the same as a person.


Wow these AI people are really shilling for their “scale is all you need” hypothesis eh? Now they are planting this stuff to gaslight us into thinking the same? :)

https://lastweekin.ai/p/the-ai-scaling-hypothesis


Looks like it. The linked article does NOT show what the HN title claims. Even the original title does not imply what the HN title does. The article is about one specific area of the brain, which is not the most obvious difference.


There were bigger models than ChatGPT-3.5 before it was released, and they didn't perform better. In fact, the hype isn't built on a large parameter count but on an interactive LLM architecture. ChatGPT and GPT do very different things, but it is ChatGPT that gets the hype despite having essentially the same parameter count.

The parameter-count thing appeals to people who believe in linear scaling per parameter.


What do you mean by interactive LLM architecture?


I assume they mean trained on human feedback to understand human intent and answer questions, and then glued to a front end that lets us chat with it.


Why am I seeing more of this lately? "just a", "just a meat bag", "just monkeys".

I think it's a degrading and disrespectful way to speak, not so much that I'm offended, but I think it's actually a non-intelligent way to view the world. Also probably not very healthy.

Feels like there is a little bit too much of this self-degrading language used in science and technology circles now.


Why is finding that we are not that different from other mammals degrading and disrespectful?


That is not intentional in the case of the GP [0]. But this disrespectful perspective towards other animals is culturally rooted within us. Take some religions as an example: their heaven is free of other animals, because animals are not worth it.

[0] Which was probably just a take against the nihilism that sometimes occurs in current AI discourse.


Your interpretation of my comment contains the opposite message from the one you've perceived.

I'm proud to have the brain I have, even if it's like an ape brain. It's not "just something".


Ok, so, given that comment I've just got to ask even if only for the sake of my own curiosity: that username ironic or (modulo ordering) literal?


For more than two thousand years the lunatics have been running the asylum spewing all kinds of horsefeather metaphysics about spirits, souls, heavens, hells, gods, demons, turtles, triangles [1], and so forth. We will need to remind ourselves for the next 4,000 years at least that down there is just physics, chemistry, voltage-gated information processing with collective emergent properties, functionalities [2], and so forth, to clean and escape our selves, our languages, our behaviour, our conceptual frameworks from all the drivel. After those 4,000 years, yes, it will probably feel as demeaning to call the universe, the only one we've got, Everett be damned [3], just physics.

Watched recently on the Applied Science channel how to "identify chemicals with radio frequencies - Nuclear Quadrupole Resonance" [4]: just the universe as it is is more impressive than one could ever imagine.

[1] https://en.wikipedia.org/wiki/Timaeus_(dialogue)

[2] Michael Levin, The collective intelligence of cells during morphogenesis as a model for cognition beyond the brain, https://www.youtube.com/watch?v=SSNasSsiTlY

[3] Even this expression, "to be damned", is a remnant, https://en.wikipedia.org/wiki/Many-worlds_interpretation

[4] https://www.youtube.com/watch?v=JO_EHceV9sk


The logos with which they apply the rationalism of the position that as biological creatures we do not hold a special place in the cosmos is not the offensive part. It's the ethos part that they really don't want you looking at.

Human history has always been tumultuous, and now we get to the present day, where we have extreme inequality, world-ending weaponry, and destabilizing global climate, contracting resources, and social upheaval as a result. There's no pretty solution to be found. Now we have the latest zeitgeist of AI, already threatening the economic wellbeing of the multitudes if you believe Altman & co.

The rights of humans throughout history have been predicated on the idea that humans are special somehow, deserving of dignity. Get enough people to really lean into the idea that they're just a meaty LLM and the foundation of human rights crumbles. You can get away with quite a lot of evil if you take care to dehumanize those you want gone.


Part and parcel of the desacralisation of the human being, unfortunately, which is so hot right now, and has been since the start of the twentieth century.


Fairly dangerous view. I feel it's definitely going to ramp up now that we can apparently be replaced with LLMs.


It is, absolutely. Even to the level of socialisation where chatbots start to replace in-person connections.

I truly feel things are about to get a lot worse for most people (i.e. those who don't control the systems) and it won't be pretty. Sadly I don't think those who build this stuff care one bit. The whole 'did it because we could, without asking whether we should' type of problem.


It might be a reaction to centuries of the opposite position: that humans are divinely created and rule the world with a divine right handed down by God. So now, as the human is picked apart by science and we see we are just primates, monkeys, and we tend to form groups and throw our crap at each other, the 'just monkeys' framing seems apt.


IMHO, there could be a tipping point in scaling a "primate" brain. For a given task, once the brain can hold so much knowledge that the unknowns are fewer than the knowns, the advantage grows exponentially, because you can deduce correct outcomes by eliminating wrong answers without actually "understanding" the problem itself.


Idk. I think the real tipping point is being able to successfully use external memory, so that you don't have to keep everything in your working memory when doing reasoning or solving problems.


Yes, humans have been around for 300,000 years, and only in the last 10,000 years did humans cross that tipping point to be able to think higher-order things.


Do you have any evidence for this claim? Just because people didn't start agriculture doesn't mean they weren't as smart as people today. Hell, 300,000 years is nothing from an evolutionary perspective; they were for all practical purposes genetically identical to us.

It was most probably a nurture thing — there was not as much accumulated knowledge to "bootstrap" a more advanced civilization (or perhaps no will to do so; why change when your system works just fine?). It's not like Einstein could have achieved much back then without "standing on the shoulders of giants" before him.


That is basically what continual learning is, and so far everything I have seen suggests it is discounted as unimportant or something you could easily work around. There is a difference between waiting for the next minor release, which updates the weights based on whatever OpenAI gathered from your chat history, and having the model update its weights as you query it.


This is exactly what happened with large language models: they are quite useless until a certain number of parameters, after which their abilities quickly become impressive (read more here: http://arxiv.org/abs/2001.08361)
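(For the curious, a minimal sketch of what that paper actually fits: loss falls as a smooth power law in parameter count, L(N) = (N_c / N)^α_N. The constants below are the paper's reported fits; the example model sizes are arbitrary.)

```python
# Illustrative sketch of the parameter-count scaling law from
# Kaplan et al. 2020 (arXiv:2001.08361).
N_C = 8.8e13      # critical parameter count N_c from the paper's fit
ALPHA_N = 0.076   # fitted exponent alpha_N for parameter scaling

def loss(n_params: float) -> float:
    """Predicted cross-entropy loss (nats/token) for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e6, 1e9, 1e12):
    print(f"{n:.0e} params -> predicted loss {loss(n):.2f}")
```

Note the fitted curve itself is smooth; the "quite useless until a certain number of parameters" effect is about downstream task abilities appearing abruptly, not about the loss curve having a kink.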


Except they are fundamentally limited and can't think algorithmically, no matter the scale?


I was merely commenting on the apparent similarity between the phase transition in ability observed in the brain of primates and artificial neural networks, without necessarily equating the capabilities of the two.

On that note, maybe in the future we will discover some neural architectures that are able to "algorithmically think", who knows.


Do you have any literature on this?


Just my own thoughts. Add an `IMHO` prefix.


The surprise would be if it wasn't.


Actually, it's just a primate brain.


That, minus the "just".


So intelligence is determined by genes?


Yes of course. My daughter will grow up to be a human, not a chicken. Humans are genetically all very similar though.


Can there be different versions? scaled up Porpoise brains, scaled up Corvid brains


“Just a scaled up primate brain”? And so a nuclear ICBM is “just” a scaled up bow and arrow!


"Rods from God", the most advanced form of the bow and arrow: https://en.m.wikipedia.org/wiki/Kinetic_bombardment


Yeah, but how the subvolumes are scaled is highly fucking evolved.

Then there're some nasty little contradictions like FOXP2.


So what you're saying is... /it's aliens/ ... right?

(:


Nah, we've just spent a long fucking time not dying with increasing efficiency.

Distributed autoadversarial algorithms have honed our minds over millions of years.

You may be able to show that something is largely a scaled form, but you're leaving out the mass/volume/energy scaling optimisations that have one hell of an impact.

Feedback loops of minor advantages tend to cascade and then propagate across the entirety of breeding populations.

(Some advantages are more aggressively curbed, such as the sickle cell trait.)

Then it's missing things like FOXP2 - weird shit happens to language capabilities when you break it.

Not only that, but:

    Three amino acid substitutions distinguish the human FOXP2 protein from that found in mice, while two amino acid substitutions distinguish the human FOXP2 protein from that found in chimpanzees,[17] but only one of these changes is unique to humans.[10] Evidence from genetically manipulated mice[22] and human neuronal cell models[23] suggests that these changes affect the neural functions of FOXP2.
So there are major mutations that make differences on scales they aren't even looking at.

Topological theory has its limits.


PaLM vs GPT


Somehow HN-ifying the title made it more click-baity.

Humans are primates. And those scientists know it.

https://www.britannica.com/animal/primate-mammal


"HN-ifying the title" should be the opposite of rewriting it to be more clickbait - from https://news.ycombinator.com/newsguidelines.html: "Please use the original title, unless it is misleading or linkbait; don't editorialize."

The submitted title was "Newer data once again shows: the human brain is just a scaled up primate brain". I've HN-ified it now :)


[flagged]


And it's sickening.


The above is interesting because it highlights the fact that humans do not have truly unique cognitive abilities, and hence must differ from other animals not qualitatively, but rather in the combination and extent of abilities such as theory of mind, imitation and social cognition. Moreover, viewing the human brain as a linearly scaled-up primate brain in its cellular composition does not diminish the role that particular neuroanatomical arrangements may play in human cognition. Rather, such arrangements should contribute to brain function in combination with the large number of neurons in the human brain.


It's speech. It's basically a meme bootloader that allows for cognitive supercharging. Again, it's the software, not the hardware, making all the difference.

https://en.wikipedia.org/wiki/Language_deprivation_experimen...


Indeed, simply saying that it's just "more of the same" misses the point that quantity has a quality all its own.


I think the comment is AI generated.


I propose a new law — the "Above Law". Similar to Poe's Law: on the internet it is effectively impossible to determine whether a comment referencing "the above" is referring to an LLM prompt or is a real human using an unusual way of referencing the content in question.



