Hacker News: My techno-optimism (vitalik.eth.limo)
101 points by yarapavan on Nov 28, 2023 | 123 comments



While I can appreciate reasoning about AGI as a purely abstract thing (I loved Nick Bostrom's book, Superintelligence), it seems out-of-touch to believe it's at all likely to happen. Do these people even use GPT-4? Do they connect the dots of its capital to the dots of its performance, and see how we're already hitting a huge wall? All while losing MSFT cash and routinely failing to do even basic tasks?

I can't help but feel like I'm being marketed to. I've grown to completely distrust anything in this vein, since the reality of the system I see before me is so drastically inferior to how people appear to be reasoning about it. It's not like how a Model-T car differs from a modern one in safety, power steering, and so on -- it feels like an error of categories.


> Do these people even use GPT-4? Do they connect the dots of its capital to the dots of its performance, and see how we're already hitting a huge wall? All while losing MSFT cash and routinely failing to do even basic tasks?

Honestly it's a complete skill issue if you can't get it to do "basic tasks". People are out there literally spinning up entire websites and apps in minutes using GPT. There is also no indication at all that we're "hitting a wall". I suppose next you're going to say it's glorified auto-complete or a "stochastic parrot"?

Is this guy completely blind to progress? Is GPT-4 no better than GPT-2? Will GPT-5 or 6 not have even better capabilities?


And I say in turn, it's a skill issue if your "website" or "app" could be made by a GPT in a few minutes. Is what you're doing actually valuable, then? It consistently fails to solve my problems.


Yeah, because only complex websites and apps deliver any value, lol. I think you're really just admitting to having very poor creativity. I use LLMs every day in systems that replace manual business processes. They're delivering clear value at a multi-billion-dollar company. So again, skill issue.


You've got to extrapolate a bit. I mean, 20 years ago AI was fairly rubbish. Now it can beat humans at all board games, and things like GPT-4 can beat most humans at things like law exams. At the current rate of progress it'll be quite good in a decade or two.


I like Vitalik but like a lot of tech leaders, his opinions on things outside of his expertise are not particularly insightful. The idea of mind uploading, for example, is taken as a serious possibility here, when in reality it's not much more than a sci-fi trope.

> The 21st century may well be the pivotal century for humanity...

Every generation probably thinks this. And honestly, if I had to pick a major inflection point in recent history, it would be World War 2. The current world would be a lot different had the Germans or Japanese won. It seems to me that the current trajectory of world history is still just one extending from the 1950s.


This is a solid take.

The motifs that he evokes in this essay are far too complex for me to digest with confidence from a "techno-optimist" "tech leader" who is still so relatively young. I'd be more interested in reading this perspective (and its counter-perspectives) from someone else.


He says mind uploading may happen in the future. Do you have any confidence that it won't happen in the next few centuries?


The issue is that there is no “mind uploading.” The human mind is dependent on the human body and any so-called uploads are really just clever copies, elaborate imitations. This is a problem with positivism, not with a particular technology.

I’m sure these copies will happen in the future, but I don’t think this concept of identity will withstand real scrutiny. What’s more likely in my opinion is a reapplied focus on the body / extended cognition and a discarding of the mind-body dualism model that enables thought experiments like mind uploading to be conceivable. Or, as Vitalik mentioned, more of an enhanced human model that builds upon the body instead of discarding it.


That very much depends on your philosophy. Many believe that there is no continuous "self" in the first place.

It doesn't matter if the upload is a copy, if the "original" is constantly being shredded and re-printed every moment anyway.


But the shredding and reprinting machine is a human body, biological in nature. That isn’t equivalent to a digital “body” made of software. There is a difference between the body-self of yesterday’s cells and a self made of software.

Again, I’m not denying that elaborate imitations of the human brain will be possible. But that these won’t be copies, they’ll be a new thing. Hence “uploading” is an incorrect model.


> There is a difference between the body-self of yesterday’s cells and a self made of software.

There doesn't seem to be a strong reason to presume this. What is your basis for stating this?

> But that these won’t be copies, they’ll be a new thing. Hence “uploading” is an incorrect model.

Sure, that's a relevant but pedantic point. Does it matter? Presume Star Trek transporters were real; effectively they could be said to be duplicating and murdering every time someone is beamed up. But the subjective experience is still one of continuity. It seems likely that would be the case for an "uploaded duplicate": they would "feel" like the original, in mind (barring sensory differences). Whether the "upload" process itself is destructive is another matter.


I'm curious how this plays out in reality. Already you can make very crude copies/imitation of people like the Robot Dad mentioned on HN yesterday https://news.ycombinator.com/item?id=38433330

As time goes on these kind of things will get better and maybe you can have a copy version of you to chat to people you don't really want to talk to or do chores or the like.

As you get older and maybe go gaga, the AI version of you may be closer to the old you than your physical self, and it could live on if you pass away.

Not sure if they'd do things while no one talked to them or just respond like GPT or have some physical presence or in VR or if you could leave money and power to the virtual you or not. Guess we'll have to see how that plays out.

Getting data from scanning a brain is probably a long way off, but could be added in a few decades if they get there. Sadly, real-world teleporters probably aren't happening. Though maybe in VR.

There's actually a YC startup for brain scanning but I still think that'll be a while https://www.technologyreview.com/2018/03/13/144721/a-startup...


If we're talking inflection points, my vote would be for the industrial revolution.


The industrial revolution was not an inflection event.

It was a continuous process that lasted centuries, arriving at different times in different parts of the world.


This "merge with the AI" thing never makes any sense. What possible improvement would a human provide a super-intelligent AI in some merge operation?


The way I imagine it, the superintelligent AI might not know we exist. Not at first.

I mean, have you ever tried to have a conversation with one of your cells? Maybe they talk back and we've just been listening wrong.

Once you learned that they do talk back, would you try to rid yourself of them? I wouldn't.


It is an emotional response to the fear of being "left behind".

And for some folks, there are pseudo-religious elements. There's been a long line of thought (search Extropians) obsessed with a sort of godhead-merger, or becoming godlike, and related immortality dreams. And just from a symbology perspective, there are a lot of religious overtones. The similarities between Vinge's singularity and biblical rapture are hard to miss, for instance. And just listen to some of the "AI" cheerleaders right now...


In my opinion, calling something similar to religious thinking is not an actual argument against it, since many smart people who developed important concepts in the past were religious: see plant genetics, the "big bang" theory, and outcome matrices. It's the inverse of the meme that goes "Jesus said this (in some out-of-context statement), therefore you should take in all the refugees; no, I don't believe in any of that stuff." If you're telling an atheist "since this sounds like something an (intelligent) Christian said in the past, you should disagree with it," you should check whether the original argument made sense with its assumed priors (i.e. "God exists") before throwing it away for any set of priors. That said, merging with AIs is not a good idea; it sounds like a very good way to bring yourself and the species to ruin, especially if ASIs are involved, even if they aren't the ones doing the merging.


You seem to think I'm arguing against something there. I'm not, it is strictly descriptive.


Any merger attempt is unlikely to go well, but there have been many intelligent Christian and non-Christian Russian cosmists (or just cosmists, the "C" in TESCREAL) so I don't think "sounds religious" is a good argument for or against. I do agree that merger is cope, and it's probably reasonable to write off all "AI controls the world, de facto or otherwise" upsides, but criticisms should be against the actual thing, not its aesthetics.


Must be all those stimulants they pop finally driving to psychosis.


A really cheap and efficient way to get around and multiply.

We are cheap, disposable tools for machines.

If machines figure out how to farm humans, they're set...it's way more practical than mining resources, refining them, producing materials and using the materials to build a mechanism to travel around in.


Wait, isn't this the plot of a popular 90s movie?


Whoa.

It's from the original script at least, not the batteries explanation in the final product.


If a "super intelligent AI" is something in the vicinity of an LLM, which does nothing but respond to prompts, then humans provide a necessary element. Without a prompt, it'll just sit there doing nothing forever.


Maybe that's what makes it AGI. Currently there is no AGI. There are many, many, many things people can do that AI can't. At an abstract level it makes sense that adding a little human to AI could be what makes AGI.


Yes, computers are already so perfect.

Or is it that your emotions and thoughts are where all the value is, and a machine executing its function is no different to a hammer?


Culture and probably successful fitness function for some time before it can simulate universes.


I'm a fan of Vitalik and his blog. He's the kind of next-level thinker who will accidentally re-discover or invent some key finding in economics on the pages of his blog.[1] I got to meet him and run a site that hosted some of his blog posts with Glen Weyl a few years ago. I was a frontend amateur, and the SEO for the site was totally crap, making it impossible to Google for the posts, but he was kind and forgiving with his feedback in person.

My two cents on the post: First of all, in my mind, the Andreessen "Techno-Optimist Manifesto" is such self-serving half-baked neoreactionary junk, that the only appropriate response is just to ignore it altogether. It seems not even worth commenting on. Or at least, if you're going to comment on it, you should just summarily dismiss it, like "yeah, of course VCs are excited about accelerating techno-capitalism because it serves their material interests, and let's not take their philosophical self-justification seriously."

Regarding the substance of the post:

> If we want a future that is both superintelligent and "human", one where human beings are not just pets, but actually retain meaningful agency over the world, then it feels like something like this is the most natural option.

This is on the one hand, totally common sense, and on the other hand, something that (I'd argue) is directly in conflict with the material interests of companies that create and use generative AI. They absolutely want you to outsource your thinking to the machines, to give up your agency. What will the organized forces of resistance to this look like?

[1] https://conversationswithtyler.com/episodes/vitalik-buterin/


To be fair, I think there's a significant difference between giving up agency to AI and hoping to become their pets like many e/acc's seem to do.


I'm not so sure about that!

I see only a difference in degree. It starts with trusting AI to write your emails, then organizing your calendar, then organizing your finances, then establishing financial goals, then making important life decisions. And then you're a pet.

There will be companies actively trying to build this kind of trust. And they will succeed -- at least, they'll be able to deliver this kind of advice for almost no cost, and much of the advice will be considered helpful, much of the time.

But who's going to be on the other side, arguing that listening to the AI advice is a net bad?


That sounds more like a Parent-Child relationship than Owner-Pet to me. I guess I take inspiration from the Culture series where AIs indeed have basically all the agency and "run" things but humans are not relegated to the status of pets.

>But who's going to be on the other side, arguing that listening to the AI advice is a net bad?

If an AI reached the point where it is objectively better, across all intellectual metrics, than the human it is advising, would taking its advice be a net bad?


Parent-Child relationship implies future emancipation. Why would the megacorp laundering control through AI give up power?


No analogy is perfect and I feel like this one is already stretched enough.


Doesn't matter how good the tech gets, you'll eventually run out of resources to build it.

Between topsoil depletion, fresh water scarcity, usable sand and precious metal scarcity, and ocean acidification leading to an eventual possible extinction of all edible seafood, it's looking like we aren't going to be able to just keep building new tech to keep supporting our current population, especially at the rampant level of consumption companies have been training them on (nevermind a perpetually increasing population).

And likely we're going to start running out of these things (and others) in certain locations, and eventually everywhere, if we don't find the will to completely reverse course (and it's clear there's been no real will to do so by those in charge). Conversion to electric cars and solar panels just improves the CO2 problem, it doesn't address the rest.

I like Vitalik and own Ethereum and I love technology, but I still don't see anything that's likely to resolve the fundamental problem that we're just using resources way, way faster than they can be replaced on this finite planet, and eventually it's going to catch up to us, short of some form of major human cataclysm (like a super pandemic or something) that happens before resources start running out.


The idea that we're at some fundamental limit where we can't keep building technology without destroying the environment seems wrong. Besides solar power, there's plenty of room for alternate, environmentally friendly technologies to address the problems you mention.

I can't help but think the actual motivation behind your attitude is not environmentalism but a pessimism about technology that amounts to an extreme form of conservatism. At worst it's a kind of misanthropy.

Technology isn't always good but we should keep building it. I'm not sure how much pressure we should be applying to the brake. Definitely some, but not as much as you're advocating.


Again I love technology. I probably have more of it in my house than most people. I'm literally surrounded by two laptops and a desktop right now (with two other laptops in the room). Hell, I just bought a force feedback steering wheel yesterday, and it's super fun (I shouldn't have bought it considering what I'm saying here, but I did).

I would love if there was a technological solution to these things, and the human will to do so. I don't even want people to stop searching for technological solutions, it just doesn't seem like it will ever be enough considering human nature.

I also suspect that throwing more technology at the problem might just add more problems, like taking a shot to lose weight that ends up giving you pancreatic cancer a few years later (based on an actual warning label on an Ozempic-like sample my doctor gave me once).

Edit: Part of the reason I have so many laptops is my job requires it, and one of those extra laptops is 8 years old and I haven't recycled it yet. Whenever I change jobs I'll have to return 2 of those laptops.


As people get richer, as we stop starving and freezing, we prioritize the environment more and more. People in rich countries like green spaces and wild animals, are concerned about climate change, and invest massive amounts of money in environmentally friendly tech.

There's every reason to believe that this pattern will continue. As the world gets richer, we'll continue to develop the will to deal with various environmental problems. In the meantime, a certain amount of damage will be done, but it won't be irrecoverable.


So far, people in rich countries just export their dirty business to poorer places. They don't give up those products or processes.

Why is there every reason to believe that we can do those things without environmental destruction? Seems like wishful thinking. If people in rich countries actually cared, they would demand things were made in their country with environmental regulations, not imported from somewhere that doesn't care.


As you make more and more money it is easy to just keep purchasing, and then yelling from soap boxes how no one has the will to save the environment.


Yep, and I admit I'm part of the problem. I'm not really sure how to break the addiction myself, I haven't even worried about any of this until the past few years, and I've been a fan of technology my whole life, buying video games and consoles and computers and other tech since I was 8 years old.


you believe the solution is on the consumption side, not the production side? interesting


How is this an either/or question? Does consumption not influence production and vice versa?


They are related, yes. But if it happens that the vast majority of influence is on the production side, then the consumption side only matters insofar as it's able to change production. It may be a distraction to focus on individual green purchasing if that's not actually a great way of influencing production behavior. It ends up as a "personal garden" approach to ecology: keeping us occupied with what we can do in our personal lifestyles and with evaluating and influencing the neighbors in our proximity. So then what force drives the necessary change on the production side?


> The idea that we're at some fundamental limit where we can't keep building technology without destroying the environment seems wrong.

It is wrong. Technology fundamentally requires the destruction of the environment: to make a spear you must knap flint and harvest a hard, straight stick. That’s resource destruction on a very small scale.

We have scaled up to billions of people and near-ubiquitous hi-tech. So the scale is what’s eating us. The earth is big, but 8 billion people is a heck of a lot when every one of them requires highly refined goods.


Rich people consume more and pollute more than poorer people. So yes, numbers do play a role, but so does how much each individual consumes.

People in rich societies could improve their quality of life quite considerably by cutting back on certain consumption, for instance by building more walkable and sustainable environments.

We probably still have to make certain sacrifices, such as eating less meat.


If you want to get down to it, forget technology. Living requires "destruction of the environment". The question is whether it's sustainable in the short, medium, and long term.

I don't think that 8 billion people is too many. As we adopt more environmentally friendly technology, I think we'll realize that the earth can easily sustain 8 billion.


It doesn't really require destruction of the environment. Change to the environment, maybe, but you can do things like greening deserts that probably improve it.

The environment changes naturally anyway, technology or not.


On what timeline though?

Why is our bubble in time specifically owed the resources to make barely updated iPhones each year?

Why these crappy synthetic gadgets? Why not compute in chemistry or biology? Business machines of the 1900s are not the only viable substrate for computing.


All energy production is bad for the environment; methods labeled green or environmentally friendly are just less bad than the alternatives. Our society has shown a consistent lack of ability or will to address those problems. I've been hearing that we'll technology our way out of problems my whole life but our current way of life is as unsustainable as ever. Some areas have gotten better, but more have gotten worse.

I don't know if it's our society that cares little about future generations or simply human nature, but we're almost certain to keep living unsustainably until a major disaster happens.


> I can't help but think the actual motivation behind your attitude is not environmentalism but a pessimism about technology that amounts to an extreme form of conservatism.

Your critique trades on intuitions about the moral conservatism that frustrates political debate, but is actually pointing to an existential conservatism that looks back at a successful half-million year effort at human continuity and wonders what all the rush is about.


> I can't help but think the actual motivation behind your attitude is not environmentalism but a pessimism about technology that amounts to an extreme form of conservatism.

Care to elaborate on this speculation? Sounds more like a harbinger of collectivist austerity to me.


You are conflating absolute resource scarcity with planetary boundaries.

There is no absolute resource scarcity on any meaningful timescale; there is only a related question about how much of each resource will become economically recoverable over succeeding centuries. Reserves-to-production (R-P) ratios are often used to suggest that certain resources will run out within a number of decades, but that is just a form of corporate bookkeeping: new exploration, new technologies, and rising prices can and do push the search for new reserves. The R-P ratios at the heart of the Limits to Growth argument have actually improved since the 1970s despite rising consumption, for exactly that reason.
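The bookkeeping point can be made concrete with a toy model. All the figures here (reserves, production, discovery rates) are illustrative assumptions, not real commodity data:

```python
# Toy model: why a reserves-to-production (R-P) ratio is bookkeeping,
# not a countdown clock. All figures below are illustrative assumptions.
reserves = 100.0   # units booked as "proven reserves" today
production = 2.0   # units extracted per year

rp_ratio = reserves / production
print(f"Naive reading: resource 'runs out' in {rp_ratio:.0f} years")

# In practice, rising prices and new technology keep upgrading
# sub-economic deposits into proven reserves, so the ratio can hold
# steady or improve even as cumulative extraction climbs.
for year in range(50):
    reserves -= production   # extraction
    reserves += 2.6          # assumed newly booked reserves per year

print(f"After 50 years of extraction, the R-P ratio is {reserves / production:.0f} years")
```

In this sketch the naive ratio says 50 years, yet after 50 years of continuous production the ratio has risen to 65 years, mirroring how the ratios behind the Limits to Growth argument improved despite rising consumption.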

You are right, however, that there are clear tensions between compound growth and planetary boundaries like ocean acidification and topsoil depletion. That is really the crux of the issue - how many times can we multiply the size of the economy without, through sheer weight of exploitation of natural resource sources and sinks, and of material power and artifice, destroying the planet.


I don't think I actually disagree with anything you're saying here.

I'm not really arguing exactly whatever is in the Limits to Growth book (I've heard of it but haven't read it), but I do think we are overutilizing resources faster than they can recover, and we can't just create a "make some new fossil fuels" or "make the sand that we need" tech at the scale that we would need to replace them.

Fossil fuels, for example, take millions of years for the planet to produce naturally, but we've burned through a good chunk of what is cheap to extract and process in about a hundred years.

I might be including two things that are a bit different from each other, but they all contribute to our ability to thrive as a species, especially at our (average) level of comfort and consumption currently.


We are literally getting more energy from the sun than we know what to do with at this moment. The problem is that we have oriented our society around the use of certain technologies and resources that are destructive to the planet's ecological and biological systems.

> I might be including two things that are a bit different from each other, but they all contribute to our ability to thrive as a species, especially at our (average) level of comfort and consumption currently.

It's possible that overuse of certain technologies is actually causing us discomfort.

Cars are energy intensive mode of transportation and manufacture. We could all walk more and be subjected to less pollution and experience a greater quality of life.


> Cars are energy intensive mode of transportation and manufacture. We could all walk more and be subjected to less pollution and experience a greater quality of life.

Yeah I agree. I already try to use my car (a hybrid, tried to get electric but there were none in the area at the time) as little as I can.

Working from home the past six years has cut down how much I drive to like 20% of what I used to. Unfortunately I do live in a suburb in the US, so I don't have much choice but to drive to places unless I just want to be stuck in my house forever. Only place I could probably feasibly walk to near me is a 7-11 or a Papa Johns, and that would still take 25 minutes each way.


The depletion of fossil fuels is not the problem. In fact, the opposite is true: we need to leave a very large proportion of present-day fossil fuel reserves in the ground if we're to avert catastrophic increases in global radiative forcing, and leading fossil fuel states don't particularly want to do that. Renewables, and potentially nuclear, can deliver all the energy we need if we are intelligent about it. Some things will be more expensive in the medium-term (producing extreme high heat, flying), but that's far from a doomsday scenario. Other things (average electricity prices, road and rail transport) will be cheaper.


This is just doomer material not really based on any fundamental limits.

If someone took the technology of the early 1900s and looked forward at the population growth, the conclusion would have been as bleak as yours, if not worse.

Topsoil depletion isn’t a problem for people who know how to farm. Market forces will quickly incentivize the people who don’t understand it when there is starvation at stake. Right now we have such an overabundance of affordable food that it’s just not a global threat.

Freshwater access is likewise a complete non-issue at a global scale. The absolute worst case is that people pay for it, as in Israel with desalination; yet even in the driest parts of the US, we’re still arguing about letting farmers grow the dumbest crops possible for free.


> Freshwater access is likewise a complete non-issue at a global scale.

"Water scarcity affects roughly 40% of the world's population and, according to predictions by the United Nations and the World Bank, drought could put up to 700 million people at risk of displacement by 2030. People like van der Heijden are concerned about what that could lead to.

"If there is no water, politicians are going to try and get their hands on it and they might start to fight over it," she says."[1]

But no, a complete non-issue. Just build desalination plants, poor countries!

[1] https://www.bbc.com/future/article/20210816-how-water-shorta...


Sure, if we have an ever increasing population, we're going to run into resource limits, but the vast majority of experts believe the population is going to stop increasing, they just disagree about when and at what level.

Secondly, the resources we use per dollar of GDP are going down rapidly. Not as fast as GDP is growing, but they are going down. Given that a good portion of our growing GDP is due to growing population, once the "growing population" part of it goes away, it's not unreasonable to expect that resources/$GDP will shrink at the same rate that $GDP grows.

It's not unreasonable -- if we have access to lots of clean energy and we recycle efficiently, our footprint will go down substantially. For example, the iPhone 25 will be better than the iPhone 24 yet likely use fewer resources. And the upgrade cycle for iPhones is slowing down -- an iPhone 18 will still likely be "good enough".

The biggie is food. Food uses a lot of resources. If we stabilize the population that'll take a lot of strain off, but I believe that we can significantly reduce our resource requirements for food. Right now we grow our food the way we do not because it requires so many resources, but because that's the cheapest way to do so. For example, if we used a lot more greenhouses we'd use a lot less land for the same amount of food, but we don't because greenhouses are more expensive than just using a lot of land. Also, I'm very optimistic about "precision fermentation".
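The decoupling scenario in the first two paragraphs can be sketched numerically. The growth and efficiency rates below are assumptions chosen for illustration, not empirical estimates:

```python
# Toy "relative decoupling" model: total resource use = GDP x intensity.
# All rates are illustrative assumptions, not empirical figures.
gdp = 100.0               # arbitrary starting GDP
intensity = 1.0           # resource units consumed per unit of GDP
gdp_growth = 0.03         # assumed 3% GDP growth per year
intensity_decline = 0.03  # assumed 3% efficiency gain per year

use_start = gdp * intensity
for _ in range(30):
    gdp *= 1 + gdp_growth
    intensity *= 1 - intensity_decline
use_end = gdp * intensity

print(f"After 30 years: GDP grew {gdp / 100:.2f}x, "
      f"resource use changed {use_end / use_start:.3f}x")
```

When the intensity decline matches GDP growth, total resource use stays roughly flat (here it actually shrinks slightly, since 1.03 x 0.97 < 1), which is the stabilized-population scenario the comment describes.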


Would the industrial pipeline to build sufficient greenhouses not just offset the resources saved in food production?

Invisible resource consumption pipelines offset resource savings elsewhere. Drink carts on airplanes have their own resource supply chain while most just think of the fuel costs.

Reducing life for the masses to “make money” has infantilized people. Not so sure why we’re afraid of pushing back against a generation that can barely hold its head up and the tacit ageism they put upon us to prop them up at our expense.


No it wouldn't. Glass and steel are easily and highly profitably recyclable when done in volume.


With better technology, you can:

* reduce ocean acidification by reducing CO2 — which is the primary cause of ocean acidification — emitted per unit of energy generated. For instance, solar panels are becoming more effective and affordable, and this progress is happening at a rapid rate. Wind turbines are advancing nearly as fast. There's also progress in carbon capture and storage technology, which can directly reduce the amount of CO2 entering the atmosphere. Nuclear energy also has enormous potential.

* create new topsoil and increase the amount of food produced per unit of topsoil. For example, the use of GPS and IoT devices in agriculture allows farms to optimize planting and harvesting, increasing output while reducing topsoil degradation. The same applies to the rapidly progressing fields of hydroponics and vertical farming. The steady improvement of compost system manufacturing and distribution meanwhile makes topsoil generation more accessible and affordable.

* recycle more precious metals, and access more precious metals in the Earth's crust and beyond it. This also applies to material classes other than precious metals, including critical minerals like lithium, where companies are developing methods to more efficiently recycle lithium from batteries.

* generate more fresh water, through technologies like desalination and more efficient water filtration systems, which makes previously unusable water drinkable.

Many of these are energy intensive, and the potential to expand clean energy output is absolutely enormous, from sources including solar, nuclear fission reactors and nuclear fusion reactors.


Just to start, let's ban the practice of building (almost) irreplaceable batteries into phones, laptops, and other electronic devices. The amount of electronic waste generated by forcing consumers to upgrade because the batteries are failing is staggering, especially if one counts all the waste produced building those devices. When Apple talks about its ESG strategy, I laugh through my tears.


I sympathize with your concern but I don't personally share it. Just to share my perspective (which is not the same as trying to argue that you're wrong...)

First, this concern is as old as the world[1] - and I experienced a version of this in the mid-90s when people were freaking out about peak oil[2] using much the same language/tone/concepts as you're voicing in your message. My point isn't that past performance guarantees future returns, but that there's something about this type of idea/anxiety that is inherently "sticky" in people's minds, separately from how it has actually panned out.

Second, I think you're missing a logical step between "we aren't going to be able to just keep building new tech" and "rampant level of consumption companies have been training them on." Companies have been able to "train" consumers to the current level of consumption precisely because resource availability has made it possible. If a resource became scarce, the price of goods based on that item would go up and thus consumption would go down. For example, see the graph of oil consumption in the US that naturally dropped by quite a bit around the supply shock of the Iranian revolution. [3]

It is obvious that humans can shift our consumption patterns, either by consuming less of something or by shifting our consumption to a different type of thing. We naturally do this all the time as resource availability (and thus price) determines what is and isn't economical to consume.

[1] https://en.wikipedia.org/wiki/Malthusianism [2] https://en.wikipedia.org/wiki/Peak_oil [3] https://www.ourworldofenergy.com/vignettes.php?type=introduc...


Our biological resources and ecological systems, sure.

Earth's outer core is literally a molten ball of iron and nickel. Earth's crust contains plenty of minerals, including elements like silicon and oxygen.

Now, we could go whole hog on population growth, assuming no ecological issues. What we might run into is the problem of heat dissipation. Humans are basically 100-watt space heaters, never mind all the infrastructure needed to support a human population. How much can we produce before there's too much heat? Producing steel is a heat-intensive process.
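A rough back-of-envelope sketch of this point (all figures here are approximate public estimates, not precise measurements):

```python
# Compare direct human waste heat against other energy flows on Earth.
human_metabolic_watts = 100        # ~100 W of heat per resting human
population = 8e9                   # roughly the current world population

body_heat_tw = human_metabolic_watts * population / 1e12   # in terawatts
global_energy_use_tw = 19          # ~19 TW total human primary energy use
solar_input_tw = 174_000           # ~174 PW of sunlight reaching Earth

print(f"Human body heat:   {body_heat_tw:.1f} TW")
print(f"Energy use:        {global_energy_use_tw} TW")
print(f"Incoming sunlight: {solar_input_tw} TW")
# Direct waste heat (bodies plus industry) is a tiny fraction of the
# solar flux; today's warming comes from greenhouse gases trapping that
# solar flux, though waste heat could matter at vastly larger scales.
```

At today's scale the ~0.8 TW of body heat is negligible next to incoming sunlight, so the heat-dissipation ceiling only bites after enormous further growth in population and industry.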


I’m under the impression that we’ve experienced only about 1% of the detrimental effects of climate change, and real change won’t occur until mass deaths start to actually happen. Either that or we’ll reach a point where the effects start to actually affect the rich (billionaires) in a way that causes them to start lobbying in a way that actually benefits the climate rather than doing so only for show.

But yeah, there just doesn’t seem to be enough incentive yet unfortunately. But if, say, there were mass protests world-wide there could be action from governments and companies. The problem is people just don’t care that much, but when millions are dead we might start caring.


The problem with waiting is that there are potential tipping points where the Earth itself begins releasing carbon. It's entirely possible (perhaps even likely; my impression is that the scientists don't really know, but are terrified by the possibilities already) that by the time we see what you consider real consequences, the process will have run away far beyond humanity's ability to control.


That is the Malthusian take. The Club of Rome famously announced in "Limits to Growth" that we were going to run out of resources, and it caused great anxiety https://en.wikipedia.org/wiki/Club_of_Rome

They were totally wrong on every point. All of their predictions failed. It's because technology constantly adapts. If some resource is low, alternatives are found. Productivity increases. Price signals tell entrepreneurs what to work on and produce more of. We will never run out of anything.

Regarding CO2, it pains me greatly that people actually think a few degrees change in temperature will destroy the earth. The earth has been much warmer and much colder. Mesopotamia used to be paradise, now it's a desert. Regions change. Humans adapt. This is no reason to do something extreme, or live with existential grief.

For people that live in constant anxiety about all the things politicians and the news tell you about, I recommend this book, what can it hurt giving it a read https://en.wikipedia.org/wiki/The_Vision_of_the_Anointed


May as well just pack up the human race and give up right now if that's the attitude. Earth won't be around forever, and currently our only ticket off is increasing our technology and power consumption.


This all falls apart as soon as you look into the night sky and see that, no, we are actually not running out of resources any time soon.


The cost of extracting and re-creating in space what planet Earth creates through biology, which benefits and is essential for humanity, isn't achievable. You can get rare metals and materials, but the insects and organic materials essential for life aren't out there in any reasonable way .. unless you know of a planet that has those ecosystem services.


Plus you've got to deal with the biological offensive you're going to be exposing yourself to. Imagine a whole planet of microorganisms against which we have no immunity or medical experience. Do we even have an antibiotic that would be effective against the new lifeforms we'd encounter? The Spanish colonizers of the Americas died in droves, and likewise the indigenous population; all this from bacteria in lineages which our bodies were familiar with! Now imagine what you're going to experience on a fictional planet with an ecosystem that could support our bodies. We'd be living in hazmat suits for generations, or would have to burn the planet to ash and start fresh.


Are you sure humanity is going to get there, and survive, before they run out of resources to build what's needed for such a thing here on Earth?

Also even if we do that, there's only a tiny fraction of the people on Earth that will be able to go on that journey. Most of the people born here are definitely stuck here.

There are predictions that some of these things could start happening as early as 2050 (specifically the extinction of edible seafood one), and most of what I mentioned maybe happening by 2100. Is the majority of people here still alive at that time going to be off this planet then?


Any resource limitation problem we have on Earth will be unfathomably harder on an exoplanet, short of hitting the moonshot jackpot of an exact Earth-equivalent with matching biome. Not agreeing with OP on the direness of resource exhaustion today, but planning to resolve e.g. climate change by moving off-planet is like planning to pay for your heart surgery by winning a lotto scratcher.


This argument appears to circularly assume techno-optimism in order to justify it.


It’s very hard and expensive to get out of our gravity well though. And almost everything you see in the night sky is an unfathomable distance away.


... uhh, sure, if you can get to the planet a resource is on, harvest it, and bring it back before we run out of it here.


I suspect we might need a little bit of the pesky «logistics» thing between us and the stars.


Sounds like an even better reason to start working on space travel!

The Universe is really, really, really, really, really, really, and even more really than you can probably count big!

I guess this is what the bible meant when it said:

"the meek will inherit the earth"


It seems like Vitalik is wishing for future humans to get more credit for making progress than AI, which is also created by humans. Who's giving the credit? This isn't what humans want, I would argue, but to just be happy. All the other stuff is a means to that end. Wanting credit for, or a say in, the progress is filled with ego. AI accelerates all that.


This post is all over the place. It's about technology but dwells on the lesser concerns of it, IMO.

The greatest immediate risk is concentration/centralization of power (in human hands).


The assumption that human consciousness can be uploaded to a suitably fast machine is a big leap. Daniel Wegner's The Illusion of Conscious Will and other physical determinism is theoretical and full of holes. Sheldrake is a good antidote to this Wegneresque mind-prison that seems to trap people like Andreessen and Buterin.

I'm not advocating for luddites, and we should continue to learn about consciousness and the brain, but be open to the possibilities that there are forces at work that we do not currently perceive or understand.

AI should replace human governance, but the biggest danger with this transition is that those in power today have a wildly different vision for how the future should look. Their vision is clearly techno-communism and digital serfdom. I'm not sure how we steer around this.


Yes, these guys don't seem to understand what positivism is and why it isn't a perfect way of thinking about the world, especially when it comes to "transferring" consciousness to a piece of software.


Paramecium caudatum is smarter than any LLM, and it has precisely zero neurons.

Yes, our scientific models of intelligence and information processing are severely limited.


See Close the Gates to an Inhuman Future: How and why we should choose to not develop superhuman general-purpose artificial intelligence by Anthony Aguirre for a somewhat similar (but to me somewhat more appealing) paper. If everyone is going against The Culture now, does that mean it's okay to be a human supremacist?


"Over the last ten years, there has been a quiet shift from downloadable applications to in-browser applications. This has been largely enabled by WebAssembly (WASM)."

What? WASM came out in 2017. The shift is definitely from faster client machines and more browser APIs (that often test the sandboxing assumptions he's talking about)


It seems misinformed; I'd argue it's more down to nodejs, electron, and whatever allows you to wrap a webapp in a mobile app these days; runtimes that move web apps outside of the browser, the "write once, run anywhere" that was once pushed by Java's marketing department.


Yeah, if anything the impetus has been away from in-browser VMs (like Java applets and Flash).


Those VMs are where exactly? Been gone from browsers for over a decade.

Browser VMs today are V8 and the Dart one Flutter uses in its web implementation. And I almost forgot, React Native has its own VM too.


> Those VMs are where exactly

Well that's my point.


Techno optimism, and optimism in general, is alive and well in America. I'm always struck by how optimistic Americans are compared to Europeans. Chinese are also very optimistic. I think it largely comes down to the trajectory of the economy.


https://www.lemonde.fr/en/opinion/article/2023/09/04/the-gdp...

> The GDP gap between Europe and the United States is now 80%

Lots to worry about in Europe.

Since dang is rate limiting me, I'm going to add my response to this comment:

Cutting edge treatments for cancer usually arrive in the US first. Also, I believe the US had access to mRNA vaccines for Covid before Europe.

Also, the US has more access to mental health treatments. Is ketamine for TRD available anywhere in the EU? If so, it’s a minority.

A lot of drugs used in the US for ADHD and depression aren’t available in Europe.


Personally, I wouldn't worry about living a longer and less depressed life.


Although at purchasing power parity things are much closer.


I keep hearing this, but I don’t really understand what it means when you put it on the ground. Like people think tech will solve their problems or people actually use tech to improve their QOL?

If it’s the former, then I guess. If it’s the latter, in the past 10 years it’s hard for me to imagine what tech made improvements to people’s lives in NA, meanwhile Europe and Asia have been chugging along. From simpler things like never needing cash when you’re in the country (I know, HN hates it), to building massive infrastructure in the cities, which requires new tech.

Disclosure: I live in Canada, so can be completely wrong. I have family both in US and Europe, so I visit both very often. You can see gradual change in both, but it’s different.


True.


The first thing that comes to my mind for d/acc is disintegration acceleration [1]. Anyway, the big problem that’s never addressed is whether humans have political control over the process. There’s a strong reason to think humans don’t: the neoliberal political system is dominant, and the emerging alternatives appear to be even more technocapitalist. Meanwhile the attempts from both the right (fascism) and the left (communism) to rein in technocapitalism failed.

[1] https://on.soundcloud.com/uxRPR


Overall I like this take, and I feel like it aligns closely with my own beliefs about the future of technological progress. With that said, I feel like this and most discussions about technology are far too limited in what they consider to be "technological".

To paraphrase Ursula Franklin: technology is systems, not just the artifacts that we typically consider "technology". As an example, consider "the car". There is the artifact of an automobile, for sure, but when we say "the technology of cars" we are referring to something which is a superset of the collection of physical automobiles: we are also tacitly including all of the human protocols which have been put in place to enable the automobile artifacts to "function properly" (move a human body from place A to place B safely and quickly). Road design and maintenance, drivers ed, traffic engineering, urban planning etc. are all components in the larger technological system that defines the capabilities of "the car". I'm not saying that it's wrong to refer to subsystems of this technology as "technologies" of their own, just that it's a more difficult taxonomical problem than many are willing to acknowledge. There are no artifacts functioning in a vacuum; all of our tools are embedded in and partially defined by the human systems around them.

I say this because I think it changes what it means to be a "techno-optimist". To be optimistic about the capabilities of technology to produce beneficial outcomes is more than just being optimistic about the capabilities of the artifacts ("we will develop more effective vaccines and more efficient batteries"), but also being optimistic about human systems. I do think that Vitalik has this somewhat in mind in this post; his discussion of differential development, as well as the role of governance systems gets towards this. However I wish he was more explicit about it because "technological progress" takes on a radically different flavor from this perspective. To some extent it becomes tautological: progress in human techno-systems is progress in human society more generally. However it also lays out additional axes to consider development along. For example: moving towards his solarpunk utopia north star could conceivably occur with no or little change in the current capabilities of our tooling, and instead result from a radical reorganization of human society. Extrapolating further you might conclude that a true d/acc techno-optimist should spend less time developing cryptographic consensus mechanisms, and more time radicalizing the folks at your community garden to become self-sustaining and mutually supportive. This is extreme, and probably the best way forward is "both/and", but I think that it gets underplayed in any of these discussions about "technology", resulting in a somewhat limited scope of action items.

When it comes to AGI I think this perspective is even more salient, because it unbundles the single "AGI" entity into a collection of subsystems and allows us to more effectively game out what the probable AGI futures are. Indeed, my own somewhat contrarian take is that AGIs already exist in the form of the human organizations (states, corporations, etc). These are decision-making entities composed of humans, but I think it's not unfair to consider them as a type of hive-mind entity which deserves status as "intelligent agent". The popular AGI anxiety, that we will create a novel type of agent which will "function" free of human systems entirely, seems unlikely to me: given how little we know about what a "mind" is, it seems like a stretch to assert that one could be engineered in silicon so easily. Rather I think the more probable outcome is Vitalik's AI-supercharged totalitarian state: instead of carving itself free of our systems I think it's likely that an old version of an AGI (the state or corporate apparatus) upgrades itself using the new computing paradigms available to it. This has happened many times before: the invention of statistics and the big data revolution(s) show us what happens when computing and decision making infrastructure embeds itself in the governing technology. Looking back to these previous "AGI apocalypses" we can more effectively extrapolate and plan for the future. Rather than being an entirely new problem space (Vitalik compares it to the origin of life; to me this is an enormous stretch), it becomes one with which we are exceedingly familiar.


He accounted for life expectancy as a positive but didn't account for excess natality as a negative. By all measures (except some emotional ones), 8 billion is a ridiculous number of humans and a direct cause of speeding up climate change, among other damages to our own quality of life.


Fertility rates are below replacement everywhere but sub-Saharan Africa, and dropping there too. The population growth curve is flattening out.


Replacement level is still too high; we are way past any reasonable number.


I feel like I'm taking crazy pills when people talk of any slowdown in human population growth in fearful terms.


The problem is that the people with the strongest and least flexible religious views are precisely those who deny that they are religious.


No, we're not.


Yes we are, and it's a bit concerning how much people in this forum like overpopulation; our species literally already outnumbers rats.


And outnumbering rats is bad because...?


Because we consider them a plague just by their sheer number (i.e. how easily they reproduce).

If your species outnumbers what your species considers a plague, there is a high chance that your species is the bigger plague.


I mean, we have at times considered rats to be part of a literal plague, or in certain contexts where there are vast numbers of them, they've been called a plague metaphorically. But rats in general are not a plague in the context of the whole world, so for a number of reasons this doesn't follow.


That's a reason why we should be happy that we outnumber them, not a reason for why we should want there to be fewer of us.

Nobody cares about the absolute number of rats. We care about the inconveniences they cause us.


The inconveniences that our own high population count causes ourselves include things like global warming, which we all agree is a huge inconvenience, so even by your own definition we are a plague.


I have not provided any definition of "a plague", so no, we're not.


Oh, you are that kind of discusser. A good-faith attempt at discussing would have included a definition of "plague" in your comment, if the heavily implied one, that it must be damaging, was not the right one (or just not answering at all if you don't want to bother keeping the discussion going).


"If you were arguing in good faith, you would have defined the term that I brought up and didn't define and which you didn't use, and regarding which I then falsely attributed a definition to you"

Mmmmmmh...


Personally, I dislike rats enough that I'm very happy they don't outnumber us.


Who should we stop from reproducing?


10 races we should sterilize, number 6 will shock you, click here to learn more


That your mind immediately goes to "races" when talking about overpopulation says a lot about the emotional weight and biases you have, and that you are not making a good-faith attempt to discuss.


Not defending OP. But whenever we talk about overpopulation, the conversation automatically shifts to limiting some races' ability to have offspring. (Even you commented on incentivizing poor communities not to have children. Guess who is going to be disproportionately affected by this.)

I don't think that overpopulation is a problem today. Population collapse due to concentration of capital and power is.


...where do you think the highest suicide rates are? In the rich neighborhoods? Which neighborhoods do you think the accumulation of capital and power is hurting the most and will continue to hurt? You are not damaging the communities by paying them to have fewer children, unless you value life itself but not the absence of suffering in such lives.

Our carbon footprint would be an eighth of what it is if there were only 1 billion people in the world; the current state of climate change would still be decades away, giving us a bit more time to gradually change course, and the concentration of wealth and power would be less dramatic as the workforce would be more limited (supply and demand for labor itself).


Even something one might think would be an equal-opportunity population-reducer like COVID is confounded by socio-economic factors, so it ends up harming those with less access to healthcare more.

Another perspective is whether not doing anything (and by "doing" I mean simple public health measures like free condom distribution and family planning, education about the impact of human population, or, more succinctly, "no Hitler-y stuff") would ultimately yield more harm to minorities as they'd be the first to feel the impacts of population stress.


They always try to frame the question in a way that makes you sound like Hitler when you speak against overpopulation; it's a bit funny at this point. Who should we stop? Nobody. We should just pay some fixed amount every decade or so to people who don't have children (by their own decision, of course), probably focused on low-income neighborhoods, as they are less likely to provide a suffering-free childhood and need the money the most. This is just an idea; the main gist is that it's a matter of incentives and disincentives.



