This is amazing, but if anything it's made me more wary of some of the challenges the project faces.
- What if the cyclist fell off his bike in front of the car? How quickly can the computer process the real-time imagery and react, compared to a human with their peripheral vision?
- What if the cyclist swung from the pavement onto the road? A human driver would probably have spotted the hazard earlier (we all train for that kind of thing when learning to drive). What are the limitations of the car's peripheral vision when checking for hazards?
- What if a fire hydrant bursts at the side of the road 50m in front of the car and makes the road ahead really wet? Can the cameras detect quickly enough that the change in surface calls for different driving (and probably braking)?
The sad truth of this is that whilst it's an interesting technical challenge, I really can't foresee a situation where a computer could react to all the different things that could happen when driving a car as well as a human.
The intermediary step (computer driving with human failsafe) is also incredibly challenging, because you are unsure of the computer's knowledge and intent (this is also what makes backseat drivers so annoying).
This computer drives much more conservatively than a human driver, so much so that I'm not sure a human failsafe driver would grok what is going on.
Waiting for a railway crossing to be completely clear before proceeding is not something most drivers do (ahem, especially at that particular intersection), and may seem like odd behavior to a failsafe driver behind the wheel. Similarly, waiting for a bicyclist coming from behind may also seem odd. For one, he may not even have that information, since he is less attentive in 'failsafe' mode. And many human drivers would see the cyclist and proceed a bit more quickly to make the turn, get out of traffic, and free the intersection for the oncoming cyclist (and the cars behind him).
On the other hand I would have probably pulled the plug on the computer driver when Mr. Indecisive-Cyclist was in the road. The computer handled it fine, but as the failsafe, am I sure that the computer will handle it well?
This intermediary stage is going to get really weird, I don't even know how you'd manage liability in this world.
I'd actually go further. An autonomous driving system that works for routine driving pretty much has to be designed on the basis that the human "failsafe" will NOT be paying attention and will not be prepared to take over on short notice. Heck, enough people don't pay close attention to their driving today without self-driving cars.
The most obvious intermediate stage is designated sections of highways in which self-driving cars can operate without active human drivers. The question though is whether that's an interesting enough use case to push through all the legal/regulatory/etc. changes that would be required.
I think the failsafe-human laws are only useful for licensing research.
For day to day driving, the cars should either be fully autonomous or have the automated systems limited to intervening in dangerous situations (like current brake priming systems and the like). If the driver is able to take (primary) control, they should be required to be in control.
Tricky. There may be situations where the car stops and says "I can't do this," at which point primary control is handed over to the driver. Think about the system being damaged or obstructed, or a protest rally starting in the street up ahead. The driver should be able to take over, maybe after a stop.
Given this "stop, your turn" option, IMHO it would be horrible to also deny the driver the option of taking over during normal driving conditions, seeing as computers will certainly make mistakes. Allowing that option to exist, however, should not equate to the human 'driver' being liable for not taking control when the computer failed.
Basically, you should give passengers an emergency stop option and not make them liable for not using it.
My thinking about this is based on humans being very bad at handling situations where they don't really need to pay attention. I think if the system is only good enough that an attentive driver/passenger can use it well, then it needs to be made better before it is licensed for everyday use.
The stop to change control thing makes sense to me.
The thing is .. what happens if a cyclist falls in front of a car with a human driver? Probably the car drives over him and the cyclist dies. The challenge is to do better than that, which is a rather low bar.
People won't see it like that. If a human driver makes a mistake, it's "human error". If a robot driver makes a mistake it's criminal negligence -- because robots are capable of not making mistakes.
In other words, robots are held to a higher standard than humans.
I don't think it's quite that... If a human driver makes a mistake, the driver is liable. If the car makes a mistake, the car manufacturer is liable. This is true today; if you're driving along and you swerve into someone/something, that's your fault. But if your wheel falls off because of a manufacturing defect the car company is going to get sued. The car company faces a much higher liability risk, because they're responsible for ALL of the cars they manufacture, unlike the driver who is responsible for a single car.
It'll be the same with automated driving software. All of the liability for accidents that occur in cars using the software will be concentrated with the software developer, instead of distributed amongst all of the drivers. That might be too much liability to make the business viable.
The good news for us drivers: our insurance rates should drop once the cars are fully automated and manual control isn't even possible. At that point we'll just be passengers, and we shouldn't need to pay insurance.
There are two points about insurance: if you own the car, and assuming cars still get stolen, dented, etc., then you'll want insurance the same way you insure your house and other possessions.
However the bigger point is I doubt you'll own a car. More likely there will be lots of driverless taxi companies, and it'll just be very convenient to use your phone to call a taxi to take you where you want. In this case, the insurance cost is folded into the cost of hire.
Maybe. Many people (particularly parents) in suburban areas store a substantial amount of equipment in their cars, and can't effectively use a taxi-service. If I had to pack everything my kids need in a day in and out of shared car at every stop, it would add 10 minutes to each leg of my trip. Plus, do I haul the stuff I bought at the store on my way to work into the office so that my self-driving car can go somewhere else? What about my kid's stroller? That lives in the car.
In urban areas, people travel light, but that's not the case in sub-urban and rural areas.
Taxis are an incredible pain when you have kids, because of the carseat issue. For example, they're required by law in CA for kids under 8 (with a height exception that doesn't apply to most kids).
For bigger kids there are some sorta options like RideSafer vests, but with a baby you're just screwed.
Zipcar has the same issue; our use of it dropped hugely once kids appeared, because it's just too painful.
But regardless, isn't the total liability the same? N cars with N drivers paying for N insurance policies, vs. N cars with 20 car companies paying for the equivalent of N insurance policies.
I don't understand why people seem to always imagine that human drivers are all Michael Schumacher or Mario Andretti.
Most drivers are terrible at driving. They're unobservant, they have slow reaction times, they have inappropriate reactions, they don't adjust to conditions, they don't understand the laws, they can't anticipate other drivers, they can't plan ahead, you're lucky if they're actually looking where they're going half the time.
What if the cyclist fell off his bike in front of the car? I'd be surprised if a human was able to react to this in less than two seconds. The cyclist probably dies. It wouldn't take much for a computer-driven car to do better than this.
What if a fire hydrant bursts and makes the road really wet? I bet the computer would do something besides continue driving at full speed the way 90% of human drivers would.
These standards are just ridiculously high. Everybody is imagining scenarios that 90% of the time would not be handled correctly by human drivers, and then saying that anything less than a 100% success rate on the part of the computer is insufficient.
There are over 30,000 deaths in the US alone every year, which is actually near historic lows. I believe there are several million accidents and hundreds of thousands of injuries.
Now the computer might not be perfect but if it's 10x better, wouldn't that be acceptable?
Getting to that point is the challenge. I'm just throwing out questions and raising the point that there are some very very hard challenges to solve, where humans are probably much better.
There is a type of trolling - the concern troll - where a person just asks questions.
There's a risk that you look like a concern troll or like someone with a superficial understanding of the topic or someone with vested interests in competing technology.
Concern trolling has a long history in, for example, big tobacco campaigns.
I don't believe you are trolling and I didn't downvote you. I am giving a possible explanation for downvotes.
You should try to be more specific. What are the hard challenges that remain? Where are humans better? If it's bad weather driving, for example, perhaps humans would drive in those instances.
Your statement is factually incorrect according to official statistics,[1] which I found through a very easy Google search after reading your comment. (Note that the statistic shown is for all accidental falls, not just falls on stairways, and even in the aggregate all falls cause fewer deaths than car crashes.)
You said "fall down the stairs", which I addressed hours ago. Also, you aren't accounting for the hundreds of thousands of people injured and the millions of dollars of damage.
> The sad truth of this is that whilst it's an interesting technical challenge, I really can't foresee a situation where a computer could react to all the different things that could happen when driving a car as well as a human.
I agree, at least for some time. But it's not about a computer reacting to as much as a human can. The question is, can a computer do a better job overall (e.g. drunk-driving or texting-while-driving accidents are eliminated)? So don't look for a perfect driving record, as humans have loads of accidents today. As long as the computer has less frequent or less severe accidents, it is a welcome improvement to our roads, even if it does have infrequent limitations.
People will have a lot of trouble accepting that they drive worse than the self-driving car, but less trouble accepting that others do. After all, we all think we're better-than-average drivers.
What I think we'll see initially is that people won't think they need self-driving for themselves but will be all for it for others. Think parents buying a car for their teenager: they'll want that car to have every safety feature under the sun. Likewise, the younger generation helping their aging Baby Boomer parents buy a new car.
Once you've started to get some mindshare that way, I think usage will spread pretty quickly. Then I think we'll hit a cultural tipping point.
When there are a lot of self-driving cars on the road and people have seen the evidence of their safety, not driving one will clearly become a selfish act: you're choosing to endanger others because you want to be in control of your car.
In the same way that we scorn people who don't wear seatbelts (even though it only puts them at risk) and despise people who don't make their children buckle up, I expect eventually we'll feel the same way about self-driving cars. It will be considered a public safety issue where you are a bad person if you don't drive one.
A software bug IS a human error. We need to accept the overall better (statistical) performance. In aviation this works out pretty well. Also, a bug will cause harm in at most a limited number of events, after which it is weeded out (as in aviation). The same human error (e.g. drunk driving) will occur over and over again.
I think the day when a computer is more reliable than the median licensed driver will quickly come. A big piece of that is because the software will be aware of and respect traffic law.
The good thing about computers is that they can simulate the future based on current physical data and make decisions in real time. So if a cyclist falls off in front of the car, the computer can detect the deviation as it happens, simulate each possible scenario, and choose the one with the best outcome. It can actually perform a lot better than humans. Look at this unbeatable rock-paper-scissors robot video: https://www.youtube.com/watch?v=3nxjjztQKtY
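A minimal sketch of the "simulate each scenario, pick the best" idea (all numbers and function names here are my own assumptions, nothing to do with Google's actual planner): predict the fallen cyclist's position forward under constant velocity, roll the car forward under a few candidate decelerations, and keep the action that preserves the largest gap.

```python
def simulate(car_speed, decel, obstacle_dist, obstacle_speed,
             horizon=3.0, dt=0.05):
    """Return the minimum gap (m) between car and obstacle over the horizon,
    with the car braking at `decel` m/s^2 and the obstacle moving at
    constant speed. Positive gap means no contact in this scenario."""
    car_pos, obs_pos = 0.0, obstacle_dist
    min_gap = obs_pos - car_pos
    t = 0.0
    while t < horizon:
        car_speed = max(car_speed - decel * dt, 0.0)  # brake, never reverse
        car_pos += car_speed * dt
        obs_pos += obstacle_speed * dt
        min_gap = min(min_gap, obs_pos - car_pos)
        t += dt
    return min_gap

def choose_action(car_speed, obstacle_dist, obstacle_speed=0.0):
    """Evaluate a few candidate decelerations and pick the one whose
    simulated scenario keeps the largest minimum gap."""
    candidates = [0.0, 3.0, 8.0]  # m/s^2: coast, moderate, hard braking
    return max(candidates,
               key=lambda d: simulate(car_speed, d, obstacle_dist,
                                      obstacle_speed))
```

For a car doing 15 m/s with a stationary obstacle 25 m ahead, only hard braking keeps the simulated gap positive, so `choose_action` selects it. A real planner would of course also consider steering and far richer motion models.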
Somebody correct me if I'm wrong, but Google's cars still can't drive in the rain or in the snow. Not even on a wet street. They have driven all those thousands of miles in Mountain View, Calif. in good weather.
Generally, lidars don't work well with rain or reflective surfaces.
Indeed, I remember a project in high school explaining how lidars were being used to measure the direction of raindrops…
The thing is: there is now apparently a technology (at Mercedes or BMW) that uses projectors instead of front lights; a secondary camera detects the droplets' movement, predicts their fall, and the projector avoids shining light precisely on them. It lights the road in front of you at night without mainly illuminating the rain. I was stunned when I learned we are that fast now. Same thing with cooking mosquitoes with lasers. I'm assuming that with that kind of tech handy, a lidar should work properly through the rain. That leaves braking as a problem, especially on black ice, and that one is tougher.
BMW does have an interesting new headlight system available on the coming i8 which boosts the high beam range using lasers, which is still pretty cool but doesn't avoid rain.
My understanding is that it uses vision processing algorithms on a camera nowadays, and works just fine in rain. Snow is an issue because the car can't see the lane markings through the snow -- a human drives through snow by remembering where they were or making a guess.
A computer can do dead-reckoning off of fixed objects (buildings, bridges) and remember a millimeter-level resolution map of where lanes are better than any human.
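As a toy illustration of matching against a stored map of fixed objects (a crude grid search of my own devising, nothing like a production localizer): given the map positions of a few landmarks and measured ranges to each, search for the car position that best explains the measurements.

```python
def localize(landmarks, measured, span=20.0, step=0.5):
    """Brute-force search over a grid for the (x, y) position whose
    predicted landmark ranges best match the measured ones (least squares).
    `landmarks` are known map coordinates; `measured` are sensed distances."""
    best, best_err = None, float("inf")
    n = int(2 * span / step) + 1
    for i in range(n):
        x = -span + i * step
        for j in range(n):
            y = -span + j * step
            err = sum(
                (((lx - x) ** 2 + (ly - y) ** 2) ** 0.5 - d) ** 2
                for (lx, ly), d in zip(landmarks, measured)
            )
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

With three landmarks and exact ranges this recovers the position to grid resolution; a real system would fuse odometry with the landmark fixes in a probabilistic filter rather than grid-searching.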
They still have trouble in snow. I think that's fine for a first version.
I fully expect the first consumer versions of automatic cars will have a "No, I won't drive for you today, Dave" mode. Driving in everything but snow would still be a huge win.
And if you ever look at a large parking lot after a thaw when it had been snowing earlier in the day you can see we're also horrible at predicting where the lines are and, for the most part, use other drivers as our guide.
There's a relatively simple fix for lane markings, that would help autonomous cars even in good weather. You know those reflective lane marking doodads they embed in the road surface? We start replacing them with magnetized nails that are pounded into the asphalt in intervals. Sensors in the car could recognize the lanes even in deep snow or pea-soup fog. And homeowners could easily outfit their driveways with a trip to Home Depot.
They don't use the embedded ones in most parts of snow countries like Canada, because it's hard to get them flush with the road so that they work with snowplows. AFAICT, Toronto still uses just plain old paint on most of its roads.
I am also curious about multiple vehicles using LIDAR at the same time. Is it possible for two or more vehicles to operate using LIDAR independently, or must they synchronize with one another, and how far apart do they have to be?
Humans have very poor peripheral vision, while the self-driving car can see perfectly in all directions. I think it also has a faster reaction time. (Though either way, a car may not always be able to brake in time.)
But what about distance? Imagine a cyclist travelling down a hill at 50km/h and running straight over some crossroads in a residential neighbourhood. Assuming a line of sight to the hill, a human can probably recognise that the fast-moving object (which may or may not run over the crossroads) is a bicycle with a cyclist on it quicker than a computer can.
This, of course, assumes the driver has good vision or is wearing any corrective lenses they're required to wear by law.
The car has far fewer limitations on its peripheral vision when checking hazards, because it has dozens of sensors all over the car, as opposed to a human's very limited field of view.
As I understand it, the computer has enough sensors (LIDAR, etc) that it has far better vision coverage than a human. That is, I believe it can see in all directions at once.
My guess is that the car would fare much better in reaction time than any human, not just with reacting to the biker, but also with surrounding conditions (e.g., not flying into an opposing lane and hitting a car or something similar).
People have a very hard time grasping the speed of computers. Imagine you saw the world in 1/1000 speed, and how quickly you'd be able to react to occurrences.
I see where your concern is: this is the sort of lateral synthesis of ideas that computers are still vastly our lessers at performing. We can fill in what we know is going to happen before the rider hits the ground and potentially react before the event happens, while the computer would have to have an algorithm in place specifically to detect a combination of rider posture, angular acceleration of the bike, identify a heavy package affecting the center of balance, and so on.
Ideally the system can compensate in another area where it does beat us, such as reaction time.
> while the computer would have to have an algorithm in place specifically to detect a combination of rider posture, angular acceleration of the bike, identify a heavy package affecting the center of balance, and so on.
Or just an algorithm to detect that the rigid body moving in front of the car is about to enter a collision trajectory. Isn't that a pretty much solved problem? There's no need for the computer to "know" that the dangerous object is a cyclist.
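A sketch of that kind of class-agnostic check (constant-velocity extrapolation; the threshold values are made-up assumptions): work in the car's reference frame, find the time at which the object is closest, and flag it if that closest approach falls inside a safety radius within the planning horizon.

```python
def time_to_closest_approach(rel_pos, rel_vel):
    """Time at which the object is nearest the car, assuming both keep
    constant velocity. rel_pos/rel_vel are in the car's frame."""
    px, py = rel_pos
    vx, vy = rel_vel
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0.0:
        return 0.0  # no relative motion: closest now
    # Minimize |rel_pos + t * rel_vel|^2 over t >= 0
    t = -(px * vx + py * vy) / speed_sq
    return max(t, 0.0)

def on_collision_course(rel_pos, rel_vel, safety_radius=2.0, horizon=3.0):
    """True if the object's extrapolated path comes within safety_radius
    metres of the car within the planning horizon (seconds)."""
    t = time_to_closest_approach(rel_pos, rel_vel)
    if t > horizon:
        return False
    px, py = rel_pos
    vx, vy = rel_vel
    cx, cy = px + t * vx, py + t * vy
    return (cx * cx + cy * cy) ** 0.5 < safety_radius
```

The check never classifies the object; anything rigid heading into the car's safety envelope triggers it, which is exactly the point being made above.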
I'd like to think that Google is meticulously evaluating the situations where humans have to take over. In those cases, the software needs to be much more robust. The only way to encounter these situations is to get more cars out there in the greatest variety of markets.
Wow. You are giving way too much credit to the responsiveness of humans. Roughly 1 in 4 drivers has a statistically significant amount of alcohol in their bloodstream. Let's also not forget distractions like cell phones, finding/selecting music, passengers, OTC and prescription drugs, et al.
From my perspective, I'm the only good driver on the road and everyone else needs to go back to driving school or have their license revoked. :)
> The human body constantly produces small amounts of alcohol itself. Normal levels of 0.01 to 0.03 mg of alcohol/100 ml are contained in the blood. By contrast, a blood alcohol limit for driving of 0.05 per cent is equal to around 50 mg of alcohol/100 ml of blood.
Thus, according to my definition, 100% of drivers have a statistically significant amount of alcohol in their blood system, though nowhere near to the level of impairment.
It's not just high, it's ludicrously high. Over the easter weekend in my city, extensive random breath testing of almost 60,000 people caught 58 driving above 0.05 - still too high but orders of magnitude less than the GP's assertion.
Impairment is on a continuous scale. Even a totally sober person would have their concentration slightly impaired if say they caught something out of the ordinary in the corner of their eye. If that happens at the wrong time, you have an accident.