
At this point, with so many of them disagreeing and with so many varying details, one will choose the expert insight which most closely matches their current beliefs.

I hadn’t encountered Pascal’s mugging (https://en.wikipedia.org/wiki/Pascal%27s_mugging) before, and the premise is indeed pretty apt. I think I’m on the side that it’s not one, assuming the idea is that it’s a Very Low Chance of a Very Bad Thing -- the “muggee” hands over their wallet on the off chance of the VBT because of the sheer magnitude of its effect. It seems like there’s a rather high chance, though, if (proverbially) the AI-cat is let out of the bag.
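For concreteness, the expected-value arithmetic the mugging turns on looks something like this toy sketch (every number here is invented for illustration, not anyone’s actual estimate):

    # Toy sketch of the Pascal's mugging structure -- all numbers made up.
    # A naive expected-value calculation lets a tiny probability of an enormous
    # loss dominate the decision, which is why the "muggee" hands over the wallet.
    p_very_bad_thing = 1e-9        # the Very Low Chance (hypothetical)
    loss_very_bad_thing = 1e15     # the Very Bad Thing, in arbitrary disutility units
    cost_of_wallet = 100           # what the mugger demands up front

    expected_loss_if_refuse = p_very_bad_thing * loss_very_bad_thing  # = 1e6
    # 1e6 >> 100, so naive expected value says "pay up": the magnitude of the
    # outcome swamps how improbable it is.
    print(expected_loss_if_refuse > cost_of_wallet)  # True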

But maybe some Mass Effect nonsense will happen if we develop AGI and we’ll be approached by The Intergalactic Community and have our technology advanced millennia overnight. (Sorry, that’s tongue-in-cheek but it does kinda read like Pascal’s mugging in the opposite direction; however, that’s not really what most researchers are arguing.)




>one will choose the expert insight which most closely matches their current beliefs

The value of looking at AI safety as a Pascal's mugging, as posited by the video, is that it shows these philosophers' arguments are too malleable to be strictly useful. As you note, just find an "expert" who agrees.

The most useful frame for examination is the evidence (which to me means benchmarks); we'll be hard pressed to derive anything authoritative from the philosophical approach. And I say this as someone who does his best to examine the evidence for and against the capabilities of these things... from Phi-1 to Llama to Orca to Gemini to Bard...

To my understanding, we struggle to strictly define intelligence and consciousness in humans at all, let alone in other "species" (granted, I'm no David Chalmers). Benchmarks seem inadequate for any number of reasons, and philosophical arguments seem too flexible; I don't know how one can speak definitively about these LLMs other than to tout benchmarks and capabilities/shortcomings.

>It seems like there’s a rather high chance if (proverbially) the AI-cat is let out of the bag.

Agree, and I tend towards it not exactly being a Pascal's mugging either, but I loved that video and it's always stuck with me. I've been watching that guy since GPT-2 and OpenAI's initial trepidation about releasing it for fear of misuse. Following him has given me a lot of credibility in my small political circles: I've been touting these things as coming for years, ever since seeing the graphs never plateau in capabilities vs. parameter count/training time.

AI has also made me reevaluate my thoughts on open-sourcing things. Do we really think it wise to have GPT-6 or GPT-7 in the hands of every 4channer?

Re: Mass Effect, that's so awesome. I have to play those games. That sounds like such a dope premise. I like the idea of turning the mugging around like that.


> Re: Mass Effect

It's a slightly different premise than what I described. Rather than AGI, it's faster-than-light travel (which actually makes sense for The Intergalactic Community). Otherwise, more or less the same.


Moreover, the most important point for people who deny the risk is the following:

It doesn't matter at all if experts disagree. Even a 30% chance that we all die is enough to treat it as 100%. We should not care at all if 51% think it's a non-issue.


This is such a ridiculous take. Make up a hand-waving doomsday scenario, assign an arbitrarily large probability to it happening, and demand that people take it seriously because we're talking about human extinction, after all. If it looks like a cult and quacks like a cult, it's probably a cult.

If nothing else, it's a great distraction from the very real societal issues that AI is going to create in the medium to long term, for example inscrutable black box decision-making and displacement of jobs.


Low-probability events do happen sometimes, though, and a heuristic that says they never happen can let you down, especially when the outcome is very bad.

Most of the time a new virus is not a pandemic, but sometimes it is.

Nothing in our (human) history has caused an extinction-level event for us, but these events do happen and have happened on Earth a handful of times.

The arguments about superintelligent AGI and alignment risk are not that complex: if we can make an AGI, the other bits follow, and an extinction-level event from an unaligned superintelligent AGI looks like the most likely default outcome.

I’d love to read a persuasive argument about why that’s not the case, but frankly the dismissals of this have been really bad and don’t hold up to 30 seconds of scrutiny.

People are also very bad at predicting when something like this will arrive. Right before the first nuclear detonation, those closest to the problem thought it was decades away; similarly for flight.

What we’re seeing right now doesn’t look like failure to me; it looks like something you might predict to see right before AGI is developed. That isn’t good when alignment is unsolved.


What are you on about? The technology we are talking about is created by three labs, and all three assign a large probability to this risk. With what credentials or science can you refute that?


Unfortunately for you, that's not how the whole "science" thing works. The burden of proof lies with the people who are dreaming up these doomsday scenarios.

So far we haven't seen any proof or even a coherent hypothesis, just garden-variety paranoia mixed with opportunistic calls for regulation that just so happen to align with OpenAI's commercial interests.



