- "blocked a San Francisco ambulance from getting a pedestrian hit by a vehicle"
I'm taking no "position" on either "side" of the debate, but I feel it's noteworthy that a *human-driven* car killed a pedestrian (seriously?) here, and that fact is utterly banal and completely unremarkable to the authors—and to us. AI cars are still highly flawed and kill people. Human-driven cars are highly flawed, probably more flawed, and kill way more people. This is sort of a trolley problem, and we suck at solving those.
Because: incompetent killer humans are a diffuse force of nature; incompetent killer AIs are a centralized, attributable malefactor. We humans love identifiable malefactors, evildoers; we love identifying and spotlighting and ostracizing them. It's a honed evolutionary trait. We've no atavistic instinct for the opposite, utilitarianism—for minimizing harm through "rational" (?) number-crunching, even where it conflicts with "obvious" (?) morality. Should we? Or is the other side of the trolley switch the correct one?
GP is criticizing the narrative focus: a robotaxi causing a new, less serious problem gets the attention, while a commonplace, very serious problem caused by conventional cars does not.
Humans have always lived around danger; and now, for as long as anyone can remember, cars have been possibly the most significant source of danger while also being integral to modern life. I would argue reacting to new problems is the rational thing to do.
Sure, cars are dangerous, but they enable a social structure that is far safer than what we had (hospitals, specialized surgeons, first responders, advanced pharmaceuticals, etc.), to say nothing of productivity.
> Sure, cars are dangerous, but they enable a social structure that is far safer than what we had (hospitals, specialized surgeons, first responders, advanced pharmaceuticals, etc.), to say nothing of productivity.
The next time I go to Amsterdam, I'll be sure to see how many fewer hospitals they have.
But the issues with human-driven cars are talked about a lot. This is a situation where we have to evaluate whether self-driving cars should be allowed to test in the public commons. All other discussions aren't relevant here (but are still important to have).
No I think OP has a valid point. There is almost no news coverage about the absolutely massive problem of human drivers killing people constantly. It's become an actual crisis in America that's spiraling out of control.
Meanwhile the media only focuses on weird stuff like scooter accidents and self-driving cars, which are statistically irrelevant.
There are lots and lots and lots of human drivers, and very few self driving cars.
I think it's legitimate to find it unremarkable, as anecdata, that one of very many human drivers hit a pedestrian, and yet remarkable that one of very few autonomous cars blocked an ambulance - something so obviously bad that even a drunken driver could avoid it.
Kind of hard to assign liability to some matrix algebra that unfortunately did some computations that were suboptimal given the conditions. If these costs add up, it turns into a tragedy of the commons type of problem. A more modern form of the "corporations can pollute but don't pay the costs".
This is why automated driving is untenable, just like why LLMs are untenable with copyright. Capitalism is so awkward that it requires these contradictory legal frameworks to give an impression of rationality or naturalistic logic. The contradictions come to the foreground more with new technology, but they have always been there.
This isn't even a "trolley problem" it sounds like this is just "fake news":
> "Throughout the entire duration the AV is stopped, traffic remains unblocked and flowing to the right of the AV," a Cruise spokesperson said in a statement. "The ambulance behind the AV had a clear path to pass the AV as other vehicles, including another ambulance, proceeded to do."
> The video captured by Cruise showed that the ambulance parked behind the Cruise and did not attempt to pass the robotaxi in the rightmost unblocked lane. Instead, responders moved a firetruck to allow the ambulance to pass on the left. The video, which Cruise declined to share publicly, indicates that 90 seconds elapsed between the patient being put on the stretcher and the ambulance leaving the scene.
This is also the easiest "trolley problem" in the world since it was the vehicle driven by the human that struck the pedestrian and gave them a life threatening injury. I know humans have a metric fuckton of perceptual biases, but I have no idea why that isn't the intellectual equivalent of stabbing-you-in-the-face-obvious.
If the story were centered on the driver of the vehicle, and if they were some rich narcissist, we wouldn't have any difficulty understanding that blaming the vehicle that blocked the ambulance for 90 seconds was blame-shifting.
I think the fact that we as a society allow for cars to be driven at all, given the number of deaths that inevitably result, proves that there is at least some utilitarian instinct. Because of course someone could always argue that cars themselves are a "centralized, attributable malefactor" (similar to, say, cigarettes). For instance we could imagine a world in which the only cars allowed on the road were emergency vehicles. In that world, your point might be stronger.
And also, AI cars are on the roads, aren't they? Sure, not all of humanity is instantly going to be ok with them, but we'll probably get there (assuming the technology gets there).
This is an excellent point. There are two independent causes to this person's death.
1. A human driver grievously injured a pedestrian
2. An AI driver delayed medical care for the injured pedestrian
That person is potentially still alive if 2 doesn't happen. They are certainly still alive if 1 doesn't happen. Therefore, 1 deserves more of the blame here.
Considering these are two independent causes, we can still be upset that 2 happened and work to stop it from happening again. But we also shouldn't get so concerned with 2 that it causes us to increase the occurrences of 1 (e.g. by overzealously legislating away a future in which AI drivers are drastically safer than humans).
I disagree that 1 deserves more of the blame here—a blocked ambulance is blocked from attending to all medical emergencies, not just medical emergencies caused by (human-powered) car accidents.
If the robotaxi had blocked the ambulance from reaching someone who'd just had a stroke, blame would rest 100% on the robotaxi. And there's no telling that this won't happen again.
>If the robotaxi had blocked the ambulance from reaching someone who'd just had a stroke, blame would rest 100% on the robotaxi.
Yes, but that isn't what happened here. You are no longer assigning blame for a real incident. You are theorizing about the potential damage that AI drivers can cause in similar situations. But once you do that, you also have to consider the tradeoffs between human and AI drivers and all the deaths that human drivers cause. Otherwise, you aren't actually prioritizing the overall safety of our communities.
The incident being discussed here is the blocking of an ambulance. The root cause for the ambulance being dispatched is immaterial to the argument.
The position you have outlined creates a second argument, and it's being called out as a distraction. It's akin to "Children eat less fruit when the cupboard is full of sugar" and then someone countering "I don't like pasta, why aren't we discussing that?".
They seem related because they are both about food preferences, but they are not, because the original study was discussing the bearing of sugary foods on children's health, not your personal preferences as an adult. In this case the discussion is about an autonomous vehicle failing to move aside for an ambulance; it is not about why that ambulance was attending a job, or what that job was. Changing the topic does a disservice to the issue.
> You are no longer assigning blame for a real incident. You are theorizing about the potential damage that AI drivers can cause in similar situations.
I am! Because I think that's a useful thing to do.
> But once you do that, you also have to consider the tradeoffs between human and AI drivers and all the deaths that human drivers cause.
Also yes—and one factor right now is that we have a robust EMS system designed to respond to medical emergencies, including road accidents. If the near-future tradeoff is that robotaxis (1) coexist alongside human-piloted cars rather than replacing them outright, which means road accidents will continue to happen, and (2) hinder ambulances from reaching humans in need, including (but not limited to) humans struck by cars piloted by other humans, then I think that's a poor tradeoff.
If it were possible to jump past the messy middle part and immediately get to the "all cars are autopiloted, except ambulances, which aren't hindered by autopiloted vehicles in any way," then of course that sounds great! But it's precisely the messy middle part that worries me.
For better or worse, human drivers aren't going away anytime soon. And you rightfully point out that human drivers are prone to accidents. Which is all the more reason that EMS services are critical in the near-term.
(But also, counterfactually speaking, it is absolutely the robotaxi's fault that the person died. Had the ambulance been unhindered, they would have had a chance at life after receiving proper medical care; since the ambulance was hindered, they didn't get that care and died as a result.)
If you are talking about these in general terms about a hypothetical future, I don't think one example of medical care being delayed is very strong data compared to the 42,939 motor vehicle deaths the IIHS reported in 2021.
>(But also, counterfactually speaking, it is absolutely the robotaxi's fault that the person died. Had the ambulance been unhindered, they would have had a chance at life after receiving proper medical care; since the ambulance was hindered, they didn't get that care and died as a result.)
I'll take the certainty of life for that pedestrian if they weren't hit by a human driver over the "chance at life" that would have come from quicker medical care.
But they were hit by a human driver. 42,939 people were killed by human drivers in 2021. So in these present circumstances, delaying ambulance care (which is the immediate, not-hypothetical question here) is inexcusable.
The ambulance could have been blocked by normal well behaving traffic too. Seems like the optimal solution would be for most roads to have a separate bus/bike lane which normal traffic can not ever use, then in emergencies it can be used as a fast lane around traffic.
I don't think OP is talking about assigning blame in this particular case: The point is that a huge and arguably bigger problem (to the tune of ~40K deaths per year in the US) is buried in one sentence in the middle of the article.
The USA has kind of thrown up its collective hands and declared the danger of car accidents to be unsolvable. We treat it like an inevitable fact of life, now. The most the USA does, regulation-wise, are mild safety regulations on car manufacturers with tons of loopholes. There's relatively little penalty for drivers who accidentally injure or kill with their cars, and almost no liability on the car manufacturer. How does the saying go? If you want to kill someone and minimize your punishment, your best bet is to "accidentally" hit them with your car.
>I don't think OP is talking about assigning blame in this particular case
To be clear, I'm only talking about this specific case in these terms because the coincidence of it being a human driver which caused the medical emergency makes this a great example to highlight all the points you and OP are talking about. We are in agreement on all those points.
>The difference is you can sentence a person to court. A robot, not such.
People generally don't end up facing much legal trouble when killing a pedestrian in an accident. It usually requires negligence. I don't know enough about this accident (or the Cruise vehicle's behavior) to say that anyone was negligent.
On a tangent of trolley like moral dilemmas, people tend to feel there's a moral difference between throwing a bomb onto a person versus throwing a person onto a bomb.
AI driven cars are not highly flawed and very, very rarely kill people. Humans driving cars are often reckless or intoxicated and kill many, many people every day.
This is the thing I keep trying to remind people about when people complain about how much energy Bitcoin uses. Just think about how much energy is used by the U.S. government to maintain fiat.
It's wild that people want to focus on the problems of a completely unproven new technology whose adherents are betting on it taking over.
I think SFFD is rightfully worried about robotaxis but this doesn’t seem to be the case. When cameras are recording everything, you can’t just make up facts.
“The video captured by Cruise showed that the ambulance parked behind the Cruise and did not attempt to pass the robotaxi in the rightmost unblocked lane. Instead, responders moved a firetruck to allow the ambulance to pass on the left. The video, which Cruise declined to share publicly, indicates that 90 seconds elapsed between the patient being put on the stretcher and the ambulance leaving the scene.”
I think the safe default assumption is that if the ambulance driver didn't use the right lane, there was probably a good reason. Clearly the firetruck driver agreed with them. Maybe traffic was moving so quickly that they'd have to accelerate hard and risk hurting the patient, or they were concerned about getting rear-ended. Regardless, Cruise's vehicle clearly was at fault for not getting out of the way.
Drives me mad when there’s armchair bickering about “well you could have just done X.”
Doesn’t matter. If there’s a law about getting out of the way, offering maximum space and choices to emergency vehicles, and you didn’t do that, you failed and you broke the law.
One can definitely bicker about the details, but it doesn’t change anything about the fact that the vehicle failed to obey the law.
> Doesn’t matter. If there’s a law about getting out of the way, offering maximum space and choices to emergency vehicles, and you didn’t do that, you failed and you broke the law.
Then we can talk about whether the vehicle operator failed to obey the law, but I think we're talking about something far more serious, which is whether Cruise indirectly killed a man.
In determining whether Cruise has moral responsibility over this death, it is then natural for citizens to look over the video footage and ask what's going on.
Assuming the Ambulance driver was competent, the good reason was likely the patient not being secured and thus they wouldn’t have left for a significant fraction of those 90 seconds in either case.
The Cruise could still have slowed their exit by a few seconds even without being a significant obstacle. The issue is that ambulances encounter a lot of cars on the way to and from an accident, so a few seconds per car can add up to a significant delay overall.
If there was a reason that the ambulance didn't move right, then wouldn't that same reason exist preventing the taxi from moving right? Even moreso since they don't have lights and sirens like the ambulance?
Ok so the guys and gals who we pay to save your life are saying they're getting in the way. But MLRoboCorporation(TM)(c)(r) says fuck those guys, they can go around?
I think I'll side with the humans on this one, let them take control of the vehicle.
I don't care how much money Kyle Vogt loses to car theft, that's not my problem. Clearly this industry needs to get regulated hard.
Oh no, don't speak ill of Y Combinator's startup bros. You will get downvoted!
Loved Justin.TV and how they helped shake the media industry up ... the start-up bro ethos followed there successfully helped kill old media business models and bring new ones (all the Frasier, Seinfeld, etc. 24/7 channels on Justin.TV). Awesome!
Applying that same "killing it" theory here (Cruise trying to rapidly expand in many, many cities while Waymo, with its WAY better track record as of late August, is expanding to only one city, Austin) is just downright deadly, as we saw with Travis Kalanick's version of trying to kill it, which in turn killed a pedestrian. Who cares if you kill businesses and in turn new businesses come from that - that's awesome - but killing people to win this race? Irresponsible and disgusting. And again, they're expanding to five to ten more cities (Nashville, DC, Austin, etc.) while causing chaos in San Francisco (Waymo is not, and they have been working on this tech since 2007).
> “The video captured by Cruise showed that the ambulance parked behind the Cruise and did not attempt to pass the robotaxi in the rightmost unblocked lane. Instead, responders moved a firetruck to allow the ambulance to pass on the left. The video, which Cruise declined to share publicly, indicates that 90 seconds elapsed between the patient being put on the stretcher and the ambulance leaving the scene.”
Why isn't the video shared if it proves they're innocent? Smells strange.
> A Cruise spokesperson said the company offered to share video footage with San Francisco officials. As of Saturday morning, Cruise said, city officials had not reviewed the footage. It was unclear why.
They offered to share it with the city officials, apparently. There’s no strange smell in the fact that they didn’t decide to share it with the general public.
I mean, for all we know it might show the emergency services making an error in the heat of the moment and they might be holding it back to preserve their relationship with the city. (I’m not saying this is likely, just saying there are plenty of reasons that holding the video might be more tactful than nefarious).
If you have video of somebody being put on a stretcher before they died, it's best not to share that too widely. For legal reasons as well as just general humanity.
They don't need to publish it on a website. They could release it to the local news orgs that handle this type of video daily. It's not hard to blur things that would be offensive.
I'm not sure how many orgs have seen it. I've heard of two and we don't have any way of knowing if they saw an edited or full copy and from all available angles. They apparently haven't been allowed to make a copy of it and broadcast it so we can make up our own minds.
> Video and other surveillance data gathered by Cruise and reviewed by The Standard
The SF Standard might have a tech bias due to its funding, but giving them access and offering the city access seems reasonable enough to not suspect anything fishy.
90 seconds is fast. It's often longer than that. Besides safety straps, there are leads, transfer from scoop/backboard to the full stretcher, bags, goo, more secondary vitals to take...
This is such a weird framing. The person (a pedestrian) died because a car driven by a human hit them. There was also a human-driven police car that was also blocking the ambulance.
I don't think the Cruise made this situation better, but it was a human-driven car that caused the death. And as far as I could find, Cruise has never hit a human.
This comment is very selfawarewolves. I agree, people in this thread are overly concerned with blame attribution, but maybe it's not the group you think.
SFFD is highlighting to people who live/work/visit SF that so called "AV" taxis are getting in their way. That they were able to move another vehicle to get the ambulance on its way is immaterial. They are saying that in a different set of circumstances where they can't, this car blocks the ambulance. That's the problem, that's the fault.
Your comment sounds like someone throwing a ladder on the freeway and saying you vigilantes better not stop traffic to clear it - no one has hit it. Yet.
Nah, the reason someone needed an ambulance is totally immaterial here. A person had life-threatening injuries, and Cruise prevented that person from receiving critical treatment. Counterfactually, that person would most likely be alive if the Cruise taxi hadn't been on the street that day.
What makes you believe this? According to the videos, there were multiple ambulances on the scene and the injured person was on their way to the hospital within 90 seconds.
It’s very likely both are true: Cruise made things more difficult, but didn’t cause enough of an issue to materially change the outcome.
Unbelievable that you buy that. If Cruise wants to clear this up, they can post the video. I'm going to believe the first responders here, who do this for a living, over a corporate PR mouthpiece.
That journalist could have signed a non-disparagement contract to see the video, or just been paid off. Cruise has the video, there is no body. Post it. Anything else is bullshit.
> Counterfactually, that person would most likely be alive if the Cruise taxi hadn't been on the street that day.
That's a big statement. It sounds like it delayed the ambulance up to 90 seconds. 90 seconds can make a difference, but the patient died "later," and from pretty severe injuries.
Title seems misleading and Cruise doesn't seem to be responsible for the death, but I have to think that Cruise needs a better system for responding to emergencies like this. Do they have remote operators who can assume control of vehicles when the vehicles are confused or become immobile? What is their maximum response time? Is there a mechanism for people near the vehicle to signal Cruise for assistance, like an external button or even a QR code/phone number to report issues?
> Title seems misleading and Cruise doesn't seem to be responsible for the death
I hate click-bait titles just as much as the next guy, but in this case it is warranted. The article said the injured person was in critical condition. In general, getting this person to proper hospital care _faster_ could have had a better outcome.
Regardless, I don't think Cruise is roadworthy until they are able to clear the way for EMS. Maybe the cars don't need to be manually overridden in such case, at least initially, but even if they can maneuver to the side of the road that would be enough.
While it’s bad that the ambulance was blocked for 90 seconds, the title makes it seem like the person died because the ambulance was blocked- which is uncertain.
Note that the ambulance was not impeded for 90 seconds; 90 seconds elapsed from the patient being put on the stretcher to the ambulance leaving the scene.
That feels like trying to get by on a technicality. The casual reader would reasonably interpret "after" to imply some causation the way this headline is written.
I’m not a linguist but this use of after seems to at least imply a link. Strictly speaking it’s only the temporal relation of two facts but I’d reckon that a lot of people will read this after in the sense of because.
Most journalism tip-toes around this line by declining to assert direct cause. They use terms like “after” instead of “because” as it specifically does not imply a link, only the order in which two events (possibly related) occurred. Others you might see, “experts say” and “possible link”.
I’m not commenting on the ethical implications of this usage, only the specific mechanisms they use to avoid assigning cause in the event that they end up wrong.
Someone died. Stop and think about that for a bit and tone down the stupid "but logic!" BS. Recognize that we don't know exactly what happened and it's worth digging into. This is a great time to practice empathy instead of stroking your ego.
We can argue all day about whether Cruise was in the wrong here, but the fact is, with autonomous cars becoming more common, this kind of situation is going to happen - an autonomous car blocking a vehicle in an emergency - and protocols/regulations need to be in place for handling these situations.
How about instead of thinking about it from a statistical perspective we think of it from a liability one. A human driver has direct liability for the motor vehicle they operate. Meanwhile Cruise is a social construct literally designed to avoid liability.
If a human kills someone accidentally they are almost certain to apologize and be obviously distraught. Cruise will deny everything, never apologize. While that is technically valueless by your metric, in the real world that has a lot of intangible value for those involved.
In what way are they not liable, or designed to avoid liability?
>human kills someone accidentally they are almost certain to apologize and be obviously distraught
You start out wanting to use reason, then quickly jump to an emotional argument. I really don't care if they feel distraught or not; I really do not care if they apologize or not.
I care if they are likely to be a danger to others in the future, I care if there is a pattern of dangerous behavior, and I care that they (to the extent possible) make the victims whole.
None of which requires them to express remorse or be "obviously distraught", nor should any of that be different between human and automated drivers.
For example new laws in TN and TX will require someone that kills a person while driving drunk to pay child support if the victim has a child. That concept could also easily be adopted to automated driving.
We do not need special laws, special regulations, etc. Human drivers and automated drivers should be held to the same standard. The same standard.
A human can be held accountable. When another vehicle causes harm to a person, I want accountability. When things go wrong, and they will, who is being held liable. The answer cannot be no one.
The whole description is pretty confusing, I hope video is shared at some point so we can understand it a little better. I’m sure they (Cruise as well as the city) don’t want to dump it on the general public immediately for good reasons, but hopefully we’ll see it eventually.
> A stalled Cruise robotaxi blocked a San Francisco ambulance from getting a pedestrian hit by a vehicle to the hospital in an Aug. 14 incident, according to first responder accounts.
> “The patient was packaged for transport with life-threatening injuries, but we were unable to leave the scene initially due to the Cruise vehicles not moving,” the San Francisco Fire Department report, first reported by Forbes, reads.
> Video and other surveillance data gathered by Cruise and reviewed by The Standard showed three Cruise vehicles were present at the scene. Two left the scene but one remained stopped as an ambulance arrived behind it. Cars continued to pass in the lane to the right of the stopped Cruise car.
I’m wondering if The Standard just got a little video of the end of the interaction? The Fire Department seems to be complaining about a general issue with multiple Cruise vehicles slowing things down, while the paper is talking about one car that seems to have “stalled.” (is that a mechanical failure?)
> "Throughout the entire duration the AV is stopped, traffic remains unblocked and flowing to the right of the AV," a Cruise spokesperson said in a statement.
> The video captured by Cruise showed that the ambulance parked behind the Cruise and did not attempt to pass the robotaxi in the rightmost unblocked lane.
This seems a little weird, the ambulance is apparently in a rush but there’s also normal traffic flowing to the right of it? Why weren’t those people pulling over, like you are supposed to when you encounter an ambulance in a rush (presumably with sirens on)?
I’m not going to defend autonomous driving here, I think it is awful that these things have been allowed on our roads before they are 100% ironed out. But it would be nice to see the video before coming to too many conclusions.
Here, if an ambulance has lights and sirens on, it has right of way in every direction (including going against traffic), and everyone pulls over to the shoulder.
If it really is a case of “autonomous vehicle can’t pull to the right to get out of the way of the emergency vehicle because there’s traffic flowing there,” then it was just a bad move to roll out autonomous cars in an area with so many malfunctioning human drivers.
The current talking point regarding safety of autonomous vehicles as compared to human driven ones is a red herring. The question we should be asking is if we should be investing so many resources into cementing reliance on cars. It is super convenient to jump in an automated taxi and get driven to your destination -- but we all know that people in aggregate pick convenience overwhelmingly even when the inconvenient choices are objectively better.
Is it good to build fleets of multi-ton individual person movers that still require an enormous amount of infrastructure and energy, and make cities worse for everyone not in a car at any moment?
Regardless of human vs ai, roads are a lot safer with fewer cars on them.
Shocker. I'm not seeing any such issues with Waymo cars - or am I missing those headlines?
Waymo has been at this game and working on their tech since around 2007.
Cruise definitely hasn't been at it as long ... no robo AI car company should be allowed on roads until they have proven themselves like Waymo has. It's irresponsible and proving deadly, like Uber's rush to do the same. Start-up bros "killing it" yet in this instance actually killing people... ridiculous... no scruples/morals!
As a driver in SF, I've been stuck behind Waymo cars even as construction workers waved them on with exaggerated hand motions. I was stuck for several minutes before the hazard lights turned on and then shortly after it moved; I assume a remote human assistant got involved. Same with Zoox; they have delayed my commute to work. I want them banned or forced to put safety drivers back in.
I don't understand all the debate. The _law_ is that when an emergency vehicle is behind you, you pull over to the side and make as much room as you can in the center of the road. The vehicle did not move out of the way, the company should be penalized heavily, as would a human driver.
Sure. What's the penalty for a human driver in that situation? Is it having the SFFD lie to the press about the events and imply you caused somebody's death?
If the story were "Cruise fined $5k for not yielding to an emergency vehicle quickly enough", I bet the tenor of the conversation would be totally different.
A human likely wouldn’t sit there failing to move over with an ambulance honking and blaring its sirens so this hypothetical is ridiculous, and the media isn’t slandering them by reporting on the incident. No one knows “what would have happened” for certain, and the headline only implies the mere possibility of causation (for anyone literate who reads it)
$5k might be a heavy penalty for a normal driver, though there are rich folks in SF who would see that as a mere inconvenience for their PA. The penalty would need to materially impact the company and its operations.
Corp robotics aside - any of this kind of tech needs to 100% have the ability to get out of the way.
By robotic control. By human control. Or by being pushed.
90 seconds before ambulance moves is not long at all. Quite fast actually. I’ve seen it be longer for things with an MOI that requires ambulance to landing zone for helicopter transport. With helis waiting.
The people in this thread simping for the tech while armchair qb’ing the first responders really sadden me.
I have, personally, smashed windows, popped cars into neutral, and pushed them out of the way. It happens. That’s what insurance is for.
This new tech needs to prioritize getting the heck out of the way - for any and all emergency vehicles. Even in odd places, like when we have to drive on the wrong side of the road, or on a lawn, etc.
The only way this will happen (anytime soon) is if they put the safety drivers back in but they will never do that willingly because they have worked for a decade and invested billions to get to this point and it would be difficult to capitulate. There is simply no way to remotely assist reliably in the same way as being physically present (no network has 0 latency and 100% reliability). The AI likely cannot drive “perfectly” no matter how good it gets and neither can humans, but for the foreseeable future the AI will continue making entirely different type of mistakes than humans do.
For sure. We in the FD (and some police) have what we call “knox box keys” - Knox is a brand. They’re locked in the apparatus, and with a PIN code or radio signal I can get them unlocked and then from there unlock a little box on a building. That contains the building’s master keys.
Anyway - if apparatus contained corporate keys (fobs, FIDO, physical, whatever) that allow an FD member to drive the car, I can 100% see that being part of a scene SOP/G. Tell the probie, “Get the key, move that AV out of the way.”
"Throughout the entire duration the AV is stopped, traffic remains unblocked and flowing to the right of the AV," a Cruise spokesperson said in a statement. "The ambulance behind the AV had a clear path to pass the AV as other vehicles, including another ambulance, proceeded to do."
Cruise seems to be ignorant of the law here. The law is: YOU GET OUT OF THE WAY of emergency vehicles, period. Unless you are "landlocked" by other cars, you get the fuck out of the way so that they can do their job. It is not acceptable to say "well, the other lanes were clear, so it's the ambulance's fault." Most human drivers instinctively know this.
I was taught to allow the ambulance to maneuver and do what it needs to do. It will drive the wrong direction down a road, hop onto curbs, drive on sidewalks, etc. They have training and experience in this type of driving.
It sounds like moving to the other lane would have just blocked another ambulance. What is the procedure here?
That's how it is supposed to work in theory, in reality emergency vehicles will use whatever lane has open space. If that happens to be the middle lane or right lane, or sometimes even the shoulder, they will go there.
Eg. Let's say you have a three lane highway with a block of cars cruising in the middle lane and one car slowly passing in the left lane, with the right lane clear. An emergency vehicle rapidly approaching sees this and moves to the clear right lane. Technically you are supposed to move right to allow them to pass on the left, but they are already passing on the right, so moving over would impede them.
If the ambulance is in the right lane - do I still pull over in front of them? Every lane is blocked besides the right lane - do I block it or line up behind another car already at the light? The real world isn't so black and white.
I do think Cruise's responses to stuck cars need work, but problems between emergency vehicles and human drivers happen all the time.
I remember an ambulance pulling up right behind me at an intersection. The right turning lane was free, but it stayed right behind me. I got really uncomfortable because I had zero idea what to do, whether I was supposed to basically break the law and go into the intersection on red so that it could get by. After a few seconds, I decided to pull into the right lane, but I guess the ambulance driver had the same exact idea at the same exact time and almost rear-ended me. So then I got back into the other lane and he went through the right lane just fine, like he could have the whole time. I'm not blaming anyone, btw; maybe he thought the light would change soon or something.
I wanted to get out of the way, but I had no idea what way the ambulance wanted to go. In my mind, if it wanted to go forwards, it just had to get into the right lane and run the red light (which ofc they can do). I think everyone was equally confused because everyone else just sorta froze to let it go by in the right lane.
Yeah, this is the thing that bothers me about people's response to robo-taxis. They're not compared to human drivers, who make mistakes all the time. They're compared to a theoretical ideal of perfection, and people get outraged when they inevitably fall short. The problems with the status quo are easy to overlook, just a normal part of life that everybody has come to accept -- but the problems with any attempted change are headline news.
The counter-position: civil and criminal accountability for robo-taxis is unproven, untested, and unclear.
Being corporate-owned and run, who goes to prison for vehicular negligence? Will fines be collected, and will too many violations lead to revocation of the operating license?
When accidents with autonomous cars are blamed on humans (and they are), why are key details like inhuman and unexpected behavior by the autonomous car omitted?
Yep. The other day when one ran into concrete, everyone was laughing, saying "it's so dumb, how could it do such a thing," but that has happened TWICE in my town over the last few years with human drivers.
It's not good, though, not at all... it's just blown out of proportion, imo.
They really need a way for someone to remotely move it like 5 feet or something lol
If I had done that, the ambulance would have been totally blocked, unless _all_ the cars in front of me also did it (they didn't) or pulled into the intersection on red. Moving to the right lane would then have forced me to pull into the intersection on red myself.
I have no idea if you're supposed to run a red light if an ambulance is behind you and I didn't want to take my chances.
What are the rules for that in Virginia? A quick Google search doesn't turn them up.
No one is saying that the Cruise vehicle was behaving well.
But there's clearly a difference in the scope of the problem between A) vehicle behaves badly, but it causes no harm, and B) vehicle behaves badly in a way that causes harm repeatedly.
Cruise does not get to decide where the amb goes. The amb does. If the amb needs the car to move, it needs to move. For ANY reason. Often with more info than the driver has. So the solution is: move. 100% of the time. Don’t clap back with “yeah but you could have gone to my right.”
They have released it to several media outlets. It sounds like SFFD lied, or to be more charitable, the report was based on eyewitness testimony which is notoriously unreliable.
We in the FD (and some police) have what we call “knox box keys” - Knox is a brand. They’re locked in the apparatus, and with a PIN code or radio signal I can get them unlocked and then from there unlock a little box on a building. That contains the building’s master keys.
Anyway - if apparatus contained corporate keys (fobs, FIDO, physical, whatever) that allow an FD member to drive the car, I can 100% see that being part of a scene SOP/G. Tell the probie, “Get the key, move that AV out of the way.”
Or the bull bars on my tanker will make it happen.
I'm a fan of the fire hose through the windows type of solution. In the way and not occupied? Firetruck beats car. Car company doesn't like the loss? Provide overrides to move it.
Quite strange, from the report it seems like traffic was moving smoothly in the adjacent lane, including another ambulance. Perhaps a robot ambulance would have handled the obstacle better than this human did? Perhaps the patient who was hit and killed by another human driver would have survived… ah but yes we need to get these AVs off the road before they kill more people!
Maybe I'm naive, but is there a reason why, in an emergency situation where the car is stuck, control can't be given over to a remote driver? It seems like in 99% of these situations the car just has to go forward a few feet or change lanes.
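To make the idea concrete: one way a limited remote handoff could work is that the car only authorizes a short, sensor-bounded "creep" maneuver when it has been stuck a while and the teleop link is responsive. This is a minimal hypothetical sketch; all names and thresholds here are illustrative assumptions, not anything Cruise actually implements.

```python
# Hypothetical sketch of a limited remote-assist handoff.
# All thresholds and names are assumptions for illustration only.

from dataclasses import dataclass

MAX_LINK_LATENCY_MS = 250   # assumed ceiling for a usable teleop link
MAX_CREEP_METERS = 5.0      # cap the remote maneuver to "a few feet"

@dataclass
class VehicleState:
    stuck_seconds: float      # how long the car has been immobile
    link_latency_ms: float    # round-trip time to the remote operator
    path_clear_meters: float  # clear distance the sensors confirm ahead

def authorize_remote_creep(state: VehicleState) -> float:
    """Return the distance (m) a remote operator may creep the car,
    or 0.0 if the handoff should be refused."""
    if state.stuck_seconds < 10:
        return 0.0                        # not actually stuck yet
    if state.link_latency_ms > MAX_LINK_LATENCY_MS:
        return 0.0                        # link too laggy to trust
    # Never authorize more distance than the sensors confirm is clear.
    return min(MAX_CREEP_METERS, max(0.0, state.path_clear_meters))
```

The point of capping the maneuver by sensed clear distance is that even a laggy or confused human operator can't be handed more authority than the car's own perception supports.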
The term FUD makes sense in low-risk applications where the benefit of the doubt ought to be given. In safety-critical contexts it should be completely reversed, and confidence, certainty and trust should be looked at extremely critically.
Human drivers kill 30,000+ per year in the US alone. Development of an automated driving option is a safety-critical context, and that requires real-world deployment experience.
The precautionary principle is not what is called for here.
Your argument has the form of: "We have to do something. This is something, therefore we have to do this."
You've failed to establish automated driving as a safer alternative, let alone the best practical solution. The fact that these cars can't even move out of the way of emergency vehicles proves they aren't ready for testing on the public now.
> requires real-world deployment experience.
Not at this stage it doesn't. They haven't exhausted the utility of simulated training and training on closed courses. They're testing on the public (human experimentation without informed consent) because it's cheaper and they can get away with it, not because they must.
> And you've failed to establish anything as a safer alternative.
I haven't, but since you've asked, here are some possibilities; pick and choose as many as you like: ban cars, mandate breathalyzer interlocks, revise safety standards, improve collision avoidance systems in human-driven cars.
> Get out of the way of the people who are trying to make things better, please
Trying to make things better doesn't give you license to actually make things worse. Good intentions don't count if what you're trying is actually making things worse.
I believe you’re citing a stat corresponding to the number of traffic fatalities. That doesn’t mean in all of those cases one person killed another, as you implied. For example, if someone dies after driving drunk and crashing into a tree, it doesn’t seem right to say “the driver killed someone.”
It isn't FUD; it's that dumb autonomous vehicles won't get out of the way. Just force them to allow human intervention: a person gets in and moves it. What's the big deal? It shouldn't even be restricted to emergency personnel.
If people steal them Cruise can file charges and consider it an expense of doing business.
I'm bearish on self-driving usually, but this seems blown out of proportion.
1) I don't think it was in the way. A lane was open to the right, but it waited 90s for a firetruck in the left lane to move before continuing on.
2) If I am paying for a ride in a Cruise and some random person hops in the front seat to "relocate" the car because it is inconveniencing them, I will not be happy. Safety issue, etc.
3) We don't know if the patient would have died anyways. It was only a 90 second delay.
4) "after" in the title is being used as temporally after. There were a lot of things that happened before you could attribute to the cause of death - signaling out this one is sensationalist.
1) It took 90s to prepare the patient for transport before being able to move. This is fast but also standard.
2) If you are in the car, and I as a firefighter enter, I am going to announce that, and your opinion doesn’t matter. I am moving it for safety’s sake. I will be safe, kind, and take your safety into consideration. But that you will be late to work does not enter into it.
3+4) I have extricated patients, in massive accidents, alive when I package them on the stretcher, and then only hear they died later. It fucking sucks. Was it me not getting that roof popped, or door opened fast enough? Was it the amb? Traffic? Was it the taxi who refused to move?
2) I agree with you but normally a car is locked from the inside and random people (firefighter or not) shouldn't be able to just open it via the usual methods. Police still have to bust out windows to extract people all the time. They don't get a special unlock button.
I agree with you in general. I'm sorry you have to go through things like that.