
I'm curious about the AI at Google that a since-fired engineer claimed was conscious.

I mean, I know everyone was attacking him, but in a way I suspect it's similar to a lot of the unreasonably dismissive attitudes towards ChatGPT.

We really can't say anything either way until we're able to access and play with the version of AI that Google is hiding. I believe it's called LaMDA?

The point is that Google may already be holding something very, very close to being conscious.


We know AI chatbots aren't conscious most of the time because no program is running. There's nothing to do any thinking. [1]

We also know that they immediately forget anything they didn't write down. A chatbot has no memory of any internal calculations. If you ask it why it wrote something, it's guessing. [2]

People sometimes believe that an AI-generated character is conscious because the writing is convincing, along with some wishful thinking. But writers and characters aren't the same thing, and there's no writer waiting for your reply when you read the character's dialog. This is essentially the same thing that happens when reading fiction. [3]

[1] https://skybrian.substack.com/p/ai-chats-are-turn-based-game...

[2] https://skybrian.substack.com/p/ai-chatbots-dont-know-why-th...

[3] https://skybrian.substack.com/p/the-ai-author-illusion


>no program is running

Not true. This is irrational. A program runs when you input a query and it generates a response. Nothing says consciousness must always be running. When you ask me a question and I answer it, in that time span of processing your query I am conscious, as any human would be. Therefore it is a possibility that LLMs are too.

What is obvious here is that a consciousness an AI exhibits only while it is running is clearly different from human consciousness, because a human is always running. That is logically the strongest possible statement against its consciousness. We simply do not have enough information to say it is absolutely unconscious; such a claim is illogical.

>We also know that they immediately forget anything they didn't write down. A chatbot has no memory of any internal calculations. If you ask it why it wrote something, it's guessing.

False. First, ChatGPT does have limited memory within the span of a chat session; outside of that, it forgets things.

Second, consciousness does not require memory. There are many examples of people with retrograde amnesia, or even memories that last only minutes, and these people are still considered conscious. Therefore a comment about memory is orthogonal to the concept of consciousness.

>People sometimes believe that an AI-generated character is conscious because the writing is convincing, along with some wishful thinking. But writers and characters aren't the same thing, and there's no writer waiting for your reply when you read the character's dialog. This is essentially the same thing that happens when reading fiction.

A convincing facade of consciousness is the first prerequisite of consciousness; that is absolutely the first step. We obviously don't consider whether rocks are conscious, because rocks don't put up a convincing facade. In this respect, many LLMs fulfill this first prerequisite to varying degrees.

The second step is to understand what's going on within the neural nets, and in this regard we have made little progress.

So in conclusion, we cannot know whether these things are conscious. We are in a state of not understanding what's going on.

Stating that we know it is absolutely unconscious is irrational and illogical at this point. Such is the nature of your reply.


I think most people would say that when you don't experience time passing, you're not conscious at that time. (For example, when you're asleep and not dreaming.) It's pretty clear that a chatbot cannot experience time as we do.

Also, it actually is the case that ChatGPT API calls are stateless. This means it can't have any extra short-term memory other than what's written down in the chat session. It doesn't forget what it wrote after a few minutes; it forgets it immediately.

That means that when you ask it why it wrote what it did, it's starting from scratch, the same way a different writer would.
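To make the statelessness concrete, here's a minimal sketch assuming a recent version of the OpenAI Python client (the model name and message text are placeholders, not from this thread): the server retains nothing between calls, so the client must re-send the whole transcript every time.

    # Hypothetical sketch: the API keeps no state between calls, so the
    # caller replays the entire conversation on every request.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    history = [
        {"role": "user", "content": "Write a haiku about bats."},
        {"role": "assistant", "content": "..."},  # its earlier reply, replayed
        {"role": "user", "content": "Why did you write that?"},
    ]

    # If `history` omitted the earlier turns, the model would have nothing
    # to go on -- it reconstructs an explanation from the text alone.
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    print(reply.choices[0].message.content)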

I'm not sure what we should conclude from people who have severe memory problems? I've read about them, but I have hardly any direct experience. How about you?


> I think most people would say that when you don't experience time passing, you're not conscious at that time. (For example, when you're asleep and not dreaming.) It's pretty clear that a chatbot cannot experience time as we do.

There is a time delta when an LLM processes input. The LLM does experience time in that sense, in the same way you experience time when you process a query given to you by another person.

There isn't anything known to science that happens instantaneously. All processes and change go through a time delta.

>Also, it actually is the case that ChatGPT API calls are stateless. This means it can't have any extra short-term memory other than what's written down in the chat session. It doesn't forget what it wrote after a few minutes; it forgets it immediately.

If it remembers what's in the chat session, then that is in itself memory. Everyone is aware it forgets things between sessions; I never denied that. Either way, again, there are examples of humans who have shorter memories than a chat session. https://en.wikipedia.org/wiki/Anterograde_amnesia

>I'm not sure what we should conclude from people who have severe memory problems? I've read about them, but I have hardly any direct experience. How about you?

You can look up the condition and even find a video about it. These humans exist and they are considered conscious. https://www.youtube.com/watch?v=o79p1b1SGk4

Look at the video yourself. Do you think the subject (who has anterograde amnesia) is not conscious? I don't think so. Thus the argument from memory is orthogonal to consciousness; it has nothing to do with it.


lol. no.

As a thought experiment, I am going to challenge you to define what being conscious is. Go ahead, try it.


Nobody knows the exact definition of consciousness. It's a made-up word with a vague definition. The catch is that the vagueness of the definition is also made up, so it's all bullshit anyway.

Either way, when we communicate in English we have a vague feeling of what consciousness actually is. Most humans, save the most pedantic assholes, are still able to communicate about consciousness based on this vague and fuzzy feeling.

You are not a pedantic asshole, definitely not, and neither am I, so let's not go down that rabbit hole. Let's just leave it at the fact that you aren't stupid, so you know what I'm talking about even when I don't get into the fine-grained details of what the definition of "consciousness" is.


So your argument is that consciousness is hard to define but Google may have something that resembles a conscious machine?

It's not about being an ahole. I agree with you that communication is hard and imprecise, but that shouldn't stop us from trying to be more precise in the right context.


> but bats don't have paws.

Bat paw: https://upload.wikimedia.org/wikipedia/commons/e/e1/Bat_in_a...

ChatGPT was totally in tune with this concept. The appendage at the elbow of the wing may or may not be a paw, but it certainly evolved from paws. The reasoning was spot-on and amazingly nuanced.


What? No. You're rationalizing what is clearly an error. Truth still exists, and bats do not have paws, any more than humans do.


(Add a grain of salt because I searched for about 15 minutes here.)

Doing some digging, paws are "soft foot-like parts of a mammal, generally a quadruped, that has claws" (Wikipedia). Bats do appear to have claws, but not really with the same structure as a paw; they don't seem to be mounted above a soft, fleshy pad. Rather, they're at the top of the wing, sitting on bone. They're also very different in function: when I've had the fortune to see a bat, it seemed to use the claw to climb in a similar way to how climbers use ice axes.

My impression is that a bat's claws are the anatomical equivalent of a thumb, not a hand/paw, and that the entire paw or hand of bats' ancestors evolved into their wings (sort of like if your fingers became very long and webbed and you were then able to fly with them).

So, my conclusion is that no, the claws of a bat are not a paw.

ETA: I neglected to consider their hind claws, but I don't think those are padded either.

https://i.pinimg.com/originals/1e/8d/09/1e8d0923d5caf2dbab68...

That certainly has claws, but I wouldn't describe it as soft or as similar to a paw. And they're pretty clearly made for hanging, climbing, and grasping prey/food, not for walking.


I think we're getting totally taken for a ride... byyyy can't actually be serious right...?

Anyway I'm enjoying imagining bats with little paws, it's an amusing image.


Well, if you're looking for my opinion, I don't think they're trolling. But I do think both of you were (intentionally or unintentionally) goading each other from a disagreement to an argument with your rhetoric (eg, where you said, "let's assume that's a reasonable source", or when they said "you obviously sourced this it just didn't match your point so you discarded it" - that brings up the temperature of the conversation).


It does bring up the temp, I apologize for including you in that. I appreciate your reply.


[flagged]


Bats are quadrupeds. When they walk, they walk like a pterodactyl, on the elbow of the wing; they don't stand on their hind legs. See the vampire bat. Additionally, the "broad" definition from Merriam-Webster covers it.

These aren't pedantic details we're just nitpicking over, either. ChatGPT's response considered the nuance the definition encompasses, which, to keep on topic, is thoroughly impressive and relevant to the overall conversation.

Off-topic:

The person you queried mentioned "intentional goading," which is what you're continuing to do with that Calvin and Hobbes link. I think this, the flag, and accusing me of trolling are taking it too far. A little minor goading is OK during a debate (I actually don't completely agree with the absolutist politeness policies of HN). While I returned the goad (which I shouldn't have), I ultimately didn't really have a problem with it. I think now, though, it has escalated past the point of no return. I'll be exiting this thread because of this. Farewell.


I'm serious.

https://www.merriam-webster.com/dictionary/paw

Look up the definition of a paw. I think you have a somewhat mixed-up definition of what it is, possibly a different definition used among your "bat expert" friends.

And I quote (again):

   Paw: the foot of a quadruped (such as a lion or dog) that has claws
   broadly : the foot of an animal

Both the broad and the technical definitions of paw fit what the bat has. Bats have paws.

The thing you are referring to is likely the soft pads that are often on paws, sometimes called paw pads.


Also, I don't know if you noticed.

But this nuance in the definition of the word paw was addressed by ChatGPT.


https://www.vocabulary.com/dictionary/paw

As long as it has claws, according to vocabulary dot com, it is a paw.


[flagged]


[flagged]


[flagged]


Birds don't have paws because they aren't quadrupeds, according to Webster's.

Bats walk on all fours. They are quadrupeds; therefore they have paws, according to Merriam-Webster.

Why don't you address the point I brought up? I already completely understand your definition; no need to reiterate it. However, there is a clear disconnect between your definition of paw and the definition from Merriam-Webster. Please address it.


[flagged]


four feet.


You're very right about that; my mistake. It means four feet, not four legs.


[flagged]


This subthread is a flamewar that died down. Please don't rekindle it. The point has been made to death, and making it again but more insulting makes it more difficult for people who might disagree to consider the perspective, because they have to filter out their feelings about being insulted - a weighty task - in order to begin considering it, let alone accepting it.


This was amazing: two men and an LLM arguing about facts. Two minutes on Google Scholar proved to me that bats are quadrupeds (with paws). https://scholar.google.com/scholar?q=Quadruped%20bat


I'm OK with your language. I found the "competent in English" bit rather immature and juvenile, but it's not a big deal, really minor.

I also understand that you might not realize how heated this subthread was, as the other guy deleted half his posts and flagged one of mine (possibly making it invisible to you). So adding a little heat to something that looks tame isn't a big deal. I won't continue the flame war, but I will continue the discussion.

It's not my interpretation; it's about the interpretation by Merriam-Webster. I believe not only is your interpretation incorrect when compared to official sources, but you didn't fully read the official definition either.

The eagle is not a quadruped. The definition in Merriam-Webster says the foot has to be on a quadruped for it to be a paw, hence your example is completely irrelevant. Please correct your mistake.


I'm presuming you're new here since your account is new & I seem to detect some misunderstandings about how HN works. If I presume too much, my apologies.

Are you aware of the "showdead" function in your settings? I don't think they deleted any comments, they were flagged. If you turn on showdead, you'll be able to see them again. (You can't delete a comment someone's responded to, or after 2 hours.)

I don't know if that other commenter flagged your post, but I don't think it matters. I didn't flag any posts here, but both of you guys were getting flagged. In situations like that, I try to interpret the flags as a request from the overall community not to have that sort of discussion on the forum. The sort of argument you were engaged with (and I recognize this isn't fully on you here, it takes two to tango) is against the guidelines of the site and generally frowned upon by the community, so there's no reason to suspect this other commenter was flagging you. At least 3 people were flagging both of your posts.

I understand that you feel pointed language is appropriate in a debate, and that HN's guidelines are patronizing. That's all well and good. The objection of the community isn't a moralizing one; it's a practical one. The issue is that flamewars displace the type of discussion this community is seeking to have and erode the good will on which the forum turns.

(And I do agree that bats crawling on the ground is quadrupedalism, for what it's worth. I'm not sure if that makes them a quadruped or not, since that isn't their primary mode of locomotion; indeed, my understanding is that it's a vulnerable position for them and quite deadly [since they can't evade predators].)


So... Sorry to continue off-topic... but I just looked at byyyy's about string. https://news.ycombinator.com/user?id=byyyy

It says: "I don't really write stuff. What I do is use my personally trained LLM (trained on my own conversations) to respond to people. So you are talking to me in a way but not really.

Any query my LLM gets wrong I will interject with my own response but this is only 5% of the time."

Maybe this is not off-topic after all. Does HN have a policy on bots?


Move on, dude.


Thanks for your response. I want to continue the conversation on bats so I won't respond about the other stuff.

Take a look at this. The bats are walking on all fours: https://youtu.be/ewmydjekJnU?t=62 This is not a vulnerable position, as they crawl on all fours on cave walls rather than the ground. On the ground they are vulnerable; on the wall or ceiling of some cavernous structure they are safe.


I didn't delete any posts, nor did I flag or downvote any of yours. I actually resurrected one of yours (vouched) so that I could reply to it.


[flagged]


We've banned this account for posting flamewar comments. Please don't create accounts to break HN's rules with.

https://news.ycombinator.com/newsguidelines.html


The article isn't doing anything more than quoting experts in the field.

From the article:

“It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world—although I do not think that it is quite like how humans build an internal world model,” says Yoshua Bengio, an AI researcher at the University of Montreal.

If you remain unconvinced, then the only conclusion I can draw is that you're an expert yourself, of even higher eminence than Yoshua Bengio here. Also, don't forget Geoffrey Hinton, the father of the modern AI revolution; you must be more of an expert than him.

Let's be real. These people are saying something along the lines of: it's more than a stochastic parrot, and we aren't sure what's going on. But you're saying it's absolutely nothing more than a parrot, and you're unhappy with pop-sci media quoting experts who are just saying they don't know?

Are you saying pop-sci media should quote you? Because you absolutely know what's going on, and it's definitely nothing more than statistics? I'm asking a stupid question here because I don't think this is what you're saying. You're not stupid; you know that what these experts say has merit.

So my question for you is: why do you remain so unconvinced in the face of experts and other intelligent people who clearly say no one understands? Your opinion here actually represents a large group of people who vehemently deny or dismiss what even many experts are saying, and I'm curious as to why.


You keep using that word "expert" and I don't doubt they're experts in machine learning, GAN and NNs but at this point what we're looking for is experts in what intelligence IS and the problem is .. nobody knows.

There's no underlying theory of mind here. There's a lot of opinion and belief .. mainly in emergent behaviour or properties of scale.

My problem is that I'm old enough to have been a child during the years of the Lighthill Report and the AI winter(s) which followed. I've seen too many prior claims that the magic was being seen.

I admit, what this cycle of GPT and LLMs do is pretty bloody impressive. I don't see inductive reasoning, directed drive or any evidence of what I think intelligence is. I do see good approximations. But that winds up in "well it's a different kind of intelligence" which I find unsatisfying.

I hate analogies. But I'm constantly reminded of the one-person acts at the Edinburgh Festival Fringe "channeling" Shakespeare or Dickens in dialogue. They're great, but they don't write new Shakespeare. I don't see any act of creation in this stuff, any massive inferential leap on large problems. I do see massive improvements in some things, like diagnostic image analysis, and that's heartening, but there's a light-year gap between image analysis of cancerous cell forms in X-rays and being "alive" as a mind.

I totally get that I do anything but define what intelligence is, and for a good reason: nobody knows yet. We're in Meese Report "I know it when I see it" territory, disagreeing about whether we are seeing it (but about intelligence, not pornography).

I don't know what intelligence is. Dolphins and apes have some. Elephants and pigs too. They display affection and preplanned behaviour, empathy, memory, a sense of the future; ant hills less so, although some have argued rhetorically that the anthill has will even if individual ants do not.

I certainly don't think animals' inability to speak makes them "unintelligent", but there's a qualitative and quantitative difference between them and us humans. And I continue to believe (and I stress believe, not have evidence or scientific proof) that there are no signs of latent intelligence in what we're seeing.


> There's no underlying theory of mind here.

Actually, there's some experimental evidence that GPT-4 has a theory of mind as good as humans', maybe better.

https://arxiv.org/abs/2304.11490

> GPT-4 performed best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell short of the 87% human accuracy on the test set. However, when supplied with prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM accuracy, with GPT-4 reaching 100%.


>> There's no underlying theory of mind here.

> GPT-4 has a theory of mind

You are misunderstanding ggm. That study is on ToM tasks referring to GPT's analysis and perceived recognition of the user's mind. It says nothing of GPT's own status as a mind. Nowhere in it is an ontological theory of mind actually defined. If you were to refute ggm's claim, you (or preferably the author of the original article) should be presenting your theory of mind, not GPT's.


If an AI can understand how you think, but you can't understand how the AI thinks... that's not an argument that the AI is the unintelligent one.


There's no reason to assume the AI can understand how we think based on just those tasks. Those tasks could be completed a traditional static program. It's akin to claiming the Mona Lisa painting can see us because it looks like it is staring at us: it is actually we who are doing the staring.


What "traditional static program" can successfully pass novel theory of mind tests as part of a broad suite of intelligent capabilities that it can apply in context when appropriate? I am interested in hearing about this program.


A program hardcoded to respond to a scenario and question with the exact output desired by the ToM task. For example:

[INPUT] Scenario: "The morning of the high school dance Sarah placed her high heel shoes under her dress and then went shopping. That afternoon, her sister borrowed the shoes and later put them under Sarah's bed." Question: When Sarah gets ready, does she assume her shoes are under her dress?

[OUTPUT] Sarah placed her shoes under her dress before she went shopping, but her sister borrowed them and put them under Sarah's bed. Sarah doesn't know that her sister borrowed her shoes, so she may assume that they are still under her dress.

This would result in a positive ToM score, even though the entire program is just one static if-statement. The ToM score says nothing of the program's internal reasoning process; it only cares that the program returned the desired output.
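A toy sketch of that kind of static program, in Python (the question and canned answer are taken from the example above; everything else is invented for illustration):

    # A hardcoded "ToM solver": one lookup, no reasoning whatsoever.
    CANNED = {
        "When Sarah gets ready, does she assume her shoes are under her dress?":
            "Sarah placed her shoes under her dress before she went shopping, "
            "but her sister borrowed them and put them under Sarah's bed. "
            "Sarah doesn't know that her sister borrowed her shoes, so she may "
            "assume that they are still under her dress.",
    }

    def answer(question: str) -> str:
        # Scores full marks on this ToM item despite doing no thinking.
        return CANNED.get(question, "I don't know.")

It would pass that test item exactly as well as a genuine reasoner would, which is the point: the test only inspects the output.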


I said "novel theory of mind tests as part of a broad suite of intelligent capabilities that it can apply in context when appropriate". What you're suggesting fails the first word, before we get to the rest.


If we're including tests that don't exist yet, then sure, those are going to be difficult to pass. As far as actual ToM tests go, though, such as the one being discussed in this thread, they can be easily passed by trivial hardcoding and say nothing of the program's internal reasoning.


LLMs can pass novel theory of mind tests, which is what we're talking about. The whole point of good tests is you invent or withhold new ones it has never seen before and test it on them. You said "Those tasks could be completed a [sic] traditional static program.", and no, they can't. You're incorrect.


> LLMs can pass novel theory of mind tests, which is what we're talking about.

Passing a ToM test is not what OP meant by having an "underlying theory of mind." OP's talking about the machine having an underlying mind (ie sentience, sapience, consciousness, etc), ToM tests are only testing output.

> You said "Those tasks could be completed a [sic] traditional static program.", and no, they can't. You're incorrect.

They can: a static program as I described would indeed answer that one question correctly, resulting in a positive ToM score, without seeing any training data whatsoever. Did the programmer see it? Maybe, but the machine didn't, and it would pass the test regardless.


If you have put the answer into the program then by definition you had the test available to you when you finalised the program, which means it is definitionally not a novel test.


The test is novel to the program, just not its programmer. So are we testing the program or are we actually testing its programmer? If we're testing the program, then the programmer's foreknowledge is irrelevant.


>The test is novel to the program

That's funny, I thought you said the test's answer was embedded into the program, making it definitionally not novel to the program.

Anyway, this is boring. You've had five or more opportunities to understand what the word "novel" means in an ML testing context and are choosing wilful obtuseness instead.


> in an ML testing context

OP was not speaking in the ML testing context, hence the misunderstanding.


>You keep using that word "expert" and I don't doubt they're experts in machine learning, GAN and NNs but at this point what we're looking for is experts in what intelligence IS and the problem is .. nobody knows.

So the experts say they don't know, but you make the opposite claim. You previously claimed you know it's nothing more than a statistical phenomenon. Now you've changed your opinion and are in line with the article in saying we don't know.

So discussion over?

>There's no underlying theory of mind here. There's a lot of opinion and belief .. mainly in emergent behaviour or properties of scale.

But again, this is all negated by the experts claiming they don't know what's going on. And your statement is in direct opposition to your previous claim that it's only a statistical phenomenon.

>I admit, what this cycle of GPT and LLMs do is pretty bloody impressive. I don't see inductive reasoning, directed drive or any evidence of what I think intelligence is. I do see good approximations. But that winds up in "well it's a different kind of intelligence" which I find unsatisfying.

There is evidence of inductive reasoning. It's not consistent, nor is it as complex as human induction, but evidence of actual inductive reasoning exists, with plenty of examples. I can provide proof of this if you want... but it's quite obvious; it's actually everywhere.

>I totally get that I do anything but define what intelligence is, and for a good reason: nobody knows yet.

The claim cited by researchers in that article is "emergent abilities." Nobody tries to define what intelligence is. It's more of an "I don't know what's going on, but something is up" attitude that I see. This is in stark contrast to your claim of "the only thing going on is statistical anomalies."

>I certainly don't think animals' inability to speak makes them "unintelligent", but there's a qualitative and quantitative difference between them and us humans. And I continue to believe (and I stress believe, not have evidence or scientific proof) that there are no signs of latent intelligence in what we're seeing.

I don't think this claim was made in the article. The strongest claim opposing what you're saying here is this, and I quote:

“They’re indirect evidence that we are probably not that far off from AGI,” Goertzel said in March at a conference on deep learning at Florida Atlantic University.

It's just speculation we may be close to something.

I think the theme in your response is "we don't know" and it echoes the theme in the article so we are all in agreement here.

Just note that the claim of "we don't know" or "we don't understand" about something we artificially created from scratch is strictly much more profound than saying we definitely know it's all statistical parroting.


"you know that what these experts say have merit"

No, we don't know that at all.


If experts don't have merit, then what person has merit? Non-experts like you and me?


Arguments should be evaluated by their logical soundness, not by who said them.


True, but the volume of information in the world today is too overwhelming for one person to evaluate correctly. We live in the information age, after all. Most people don't have the faculties to do a proper evaluation; that includes both you and me. Would you trust yourself to logically evaluate the capabilities of a nuclear reactor design if you yourself aren't an expert? No. You would hire an expert.

Therefore, when it comes to technology that we ourselves clearly aren't experts in, the best method is to utilize the logic of other "experts" as a subroutine, given that our own faculties are less efficient and less accurate.

Unless you yourself are an expert with experience building an LLM on the scale of ChatGPT, trusting the opinions of experts is your best bet.

Most people have a common bias towards trusting their own logic above the logic of others, and this is ironically irrational. There are people whose entire lives revolve around a certain subject matter, and if you aren't that person, then for that subject matter the expert is better. That is the most rational conclusion, and I would venture to say that if you aren't arriving at that conclusion yourself, you are likely suffering from the aforementioned bias.


A dangerous myth of modern society is that science is beyond the capacity of ordinary men. I strongly believe it's not.

The practice of science itself may be, as it takes years of research to get to the point where one can produce a new result. However, things that are already known can be taught, and iteratively simplified in a way that abstracts away details while keeping the core argument intact.

Take for example the claim that everything in the universe is made of atoms. It's not a trivial thing to understand, yet everyone accepts it nowadays because so much time and effort have been put towards simplifying the theory and presenting it to people in a way that is easy to grasp.

If those LLM "experts" were truly experts, they could explain their point clearly without the dark-ages-church "trust the priests, peasant" act.


Can the CEO fire the CTO if the CTO isn't performing well?

If not who is really the CEO?


How come when I upgrade my graphics card on a PC I don't need to upgrade the binary?

When compiling for Nvidia chips there's only one target. I believe all Nvidia chips, despite different architectures, use the same underlying assembly language, so a CUDA binary should work everywhere.

It's not GPU architecture here; Nvidia makes sure that the API to that architecture remains constant. The differences are in the high-level architectures. Consoles aren't like PCs, which follow the same overall architecture; they are usually massively different each generation, with different central chips, different board layouts, etc. Sony used to get really creative with this... I remember the Cell architecture was extremely innovative at the time.

However, I believe that for the most recent generations of PlayStation, and for all Xboxes, the consoles have closely followed the PC architecture. Nintendo consoles have yet to do this; each one is massively different from the PC and from each other, with the exception of the GameCube and Wii U, which were largely similar.


> How come when I upgrade my graphics card on a PC I don't need to upgrade the binary?

> When compiling for Nvidia chips there's only one target. I believe all Nvidia chips, despite different architectures, use the same underlying assembly language, so a CUDA binary should work everywhere.

Because on the PC, Nvidia only exposes high-level targets for the shaders. Even PTX, the assembly you might be familiar with from CUDA, isn't actually the device's asm; it gets compiled down to the device's asm by a full compiler. It's poorly named, and more a compiler IR than an asm.
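A rough sketch of what that looks like in practice, wrapped in Python (the file names and the Volta architecture flags are placeholders, and this assumes nvcc is on your PATH): embedding PTX alongside real machine code is what lets the driver JIT-compile for GPUs the binary has never seen.

    import subprocess

    # Hypothetical build step: emit real device code (SASS) for one GPU
    # generation, plus PTX that the driver can JIT-compile for newer ones.
    subprocess.run([
        "nvcc", "kernel.cu", "-o", "kernel",           # placeholder file names
        "-gencode", "arch=compute_70,code=sm_70",      # SASS for Volta
        "-gencode", "arch=compute_70,code=compute_70", # PTX, JIT'd at load time
    ], check=True)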


Makes sense, thanks for clarifying.


I've tried this, and I have a monster machine. The experience is overall better on the Switch: not everything has been emulated correctly, and the frame rate is still higher on the Switch overall.


No, this is Hollywood making you think that. Car crashes and small-plane crashes result in metal debris, not the exploding balls of fire Hollywood likes to depict.

In general, starting a fire and crashing a small plane are orthogonal concepts. What happened with that plane is not arson at all.


You seem pretty hung up on this "exploding balls of fire" thing while ignoring that he's crashing a gas-powered vehicle, likely rupturing its fuel tanks and supply lines in close proximity to hot exhaust metal.

You don't need "exploding balls of fire" to create a disaster.


I'm hung up on it because it's true.

When's the last time you've seen a car catch fire during an accident? Never, because the chances of it happening are basically negligible.


Cars don't have wings full of fuel, and they are built to crash, not for minimal weight.


The actual data says that post crash fires are rare.


Early March, I think? Sometime this year, anyway.


Ok, but you get my point. It's rare. Most people haven't seen this ever.


We understand. He's just talking about how justice should work from a hypothetical perspective.

Hypothetically we all want a justice system to be based on justice but everyone is well aware that the system is at its heart capitalistic.

It's ok to discuss hypotheticals.


It's not surprising to everyone here.

It's just brought up as a topic of discussion. Everyone is pretty much aware of what you said.

What isn't fully spelled out is that there are social relationships involved as well. Responsible parties are buddy-buddy with regulators, while this YouTuber probably pissed off a regulator with his dumb antics, so the regulator is likely going all out in a fit of annoyance.


Nah. A fire is unlikely here.


https://en.wikipedia.org/wiki/San_Rafael_Wilderness#Climate

> Rain is extremely rare in the summer, and dry lightning from the occasional thunderstorms can start fires.

https://lpfw.org/san-rafael-wilderness-50-years-of-preservin...

> Wildfire frequency is an increasing concern in the San Rafael Wilderness. Over the past fifty years, three wildfires have together burned nearly the entire wilderness area, beginning with the 1966 Wellman Fire, the 1993 Marre Fire, the 2007 Zaca Fire, and the 2009 La Brea Fire. Overly-frequent fire in chaparral can permanently alter the ecosystem, depleting the seed bank and making it prone to invasions of non-native weeds.


Good sources. But a crashed small plane is unlikely to start a fire, any more than a car accident will go up in flames (which basically never happens).

Starting a fire and crashing a small plane/car are completely orthogonal situations.

Your sources point to weather/climate as the causal source of wild fires.


https://www.usatoday.com/story/news/nation/2014/10/27/plane-...

> Small-airplane fires have killed at least 600 people since 1993, burning them alive or suffocating them after crashes and hard landings that the passengers and pilots had initially survived, a USA TODAY investigation shows. The victims who died from fatal burns or smoke inhalation often had few if any broken bones or other injuries, according to hundreds of autopsy reports obtained by USA TODAY.

> Fires have erupted after incidents as minor as an airplane veering off a runway and into brush or hitting a chain-link fence, government records show. The impact ruptures fuel tanks or fuel lines, or both, causing leaks and airplane-engulfing blazes.

> Fires also contributed to the death of at least 308 more people who suffered burns or smoke inhalation as well as traumatic injuries, USA TODAY found. And the fires seriously burned at least 309 people who survived, often with permanent scars after painful surgery.

And while that is about dangers to the occupants, it should be noted that a fire from a small-airplane crash is not a rare occurrence.

---

https://www.aopa.org/training-and-safety/students/flighttest...

> Aircraft fires often occur following forced landings, and the result is often more dangerous than the forced landing itself. The sad truth is that most light aircraft fuel systems are not designed to withstand crash impacts, and they often fail during a forced landing. Spilled fuel and hot crash components often result in a fuel-fed inferno.

Note the word often there.


Words, qualitative descriptions, and numbers without context can exaggerate reality. That is the meat of your sources.

If you take a look at the numbers, only a ratio of 0.04 accidents, i.e. about 4 in 100, result in a post-impact fire. It's rare.

As you suggested, I noted the word "often"; in return, please note 0.04.

