Hacker News

AI has a different risk profile than humans. It is a lot riskier for business operations where failure is wholly unacceptable under any circumstance.

They're risky in that they fail in ways that aren't readily deterministic.

And would you trust your life to a self-driving car in New York City traffic?




This is a really hard and weird ethical problem IMHO, and one we'll have to deal with sooner or later.

Imagine you have a self-driving AI that causes fatal accidents 10 times less often than your average human driver, but when the accidents happen, nobody knows why.

Should we switch to that AI, and have 10 times fewer accidents and no accountability for the accidents that do happen, or should we stay with humans, have 10x more road fatalities, but stay happy because the perpetrators end up in prison?

Framed like that, it seems like the former solution is the only acceptable one, yet people call for CEOs to go to prison when an AI goes wrong. If that were the case, companies wouldn't dare use any AI, and that would basically degenerate to the latter solution.
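The arithmetic behind that trade-off is worth making explicit. A back-of-the-envelope sketch, where every number is a made-up assumption purely for illustration, not a real statistic:

```python
# Purely illustrative numbers, not real statistics.
human_fatalities_per_million = 120  # assumed annual rate for human drivers
ai_fatalities_per_million = human_fatalities_per_million // 10  # the "10x safer" AI

drivers = 3_000_000  # a hypothetical city-sized driving population

# Expected annual road deaths under each regime.
human_deaths = human_fatalities_per_million * drivers // 1_000_000  # 360
ai_deaths = ai_fatalities_per_million * drivers // 1_000_000        # 36
lives_saved = human_deaths - ai_deaths                              # 324
```

Under these assumed rates, the unaccountable AI saves hundreds of lives per year in that population, which is what makes the accountability question so uncomfortable.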


I don't know about your country, but people going to prison for causing road fatalities is extremely rare here.

Even temporary loss of the drivers license has a very high bar, and that's the main form of accountability for driver behavior in Germany, apart from fines.

Badly injuring or killing someone who themselves did not violate traffic safety regulations is far from guaranteed to cause severe repercussions for the driver.

By default, any such situation is an accident and at best people lose their license for a couple of months.


Drivers are the apex predators. My local BMV passed me after I badly failed the vision test. Thankfully I was shaken enough to immediately go to the eye doctor and get treatment.


Sadly, we live in a society where those executives would use that impunity as carte blanche to spend no money on safety improvements (in the best-case scenario), or, even more likely, to keep cutting safety expenditures until the body count gets high enough to start damaging sales. If we’ve already given them a free pass, they will exploit it to the greatest possible extent to increase profit.


What evidence exists for this characterization?


The way health insurance companies optimize for denials in the US.


What evidence is there that they do that? That would be a very one-dimensional competitive strategy, given a competing insurance company could wipe them out by simply being more reasonable in handling insurance claims and taking all of their market share.


That only holds if there is objective, complete data available to all consumers, and if consumers aren't influenced to choose by all sorts of other means.

Right now it's a race to the bottom: who can get away with the worst service. So they're motivated to prevent bad press, etc.

The whole system is broken. Just take a look at the 41 countries with higher life expectancy.


There is no evidence that this is what's happening, and the famous RAND health insurance study showed that health outcomes have almost no relationship with the healthcare system, so you'll need to look elsewhere for explanations for the U.S.'s relatively poor standing in life expectancy rankings.


Talking specifically about car companies, you can look at Volkswagen faking their emissions tests, and the rise of the light truck, which reduces road safety for the sake of cost cutting.


The emissions test faking is an anecdote, not an indication that this is the average behavior of companies or the dominant behavior that determines their overall impact in society.

As for the growing prevalence of the light truck, that is a harmful market dynamic stemming from the interaction of consumer incentives and poor public road use policy. The design of rules governing use of public roads is not within the ___domain of the market.


Let’s see… off the top of my head…

- Air Pollution

- Water Pollution

- Disposable Packaging

- Health Insurance

- Steward Hospitals

- Marketing Junk Food, Candy and Sodas directly to children

- Tobacco

- Boeing

- Finance

- Pharmaceutical Opiates

- Oral phenylephrine to replace pseudoephedrine, despite knowing that a) it wasn’t effective, and b) it posed a risk to people with common medical conditions.

- Social Media engagement maximization

- Data Brokerage

- Mining Safety

- Construction site safety

- Styrofoam Food and Bev Containers

- ITC terminal in Deer Park (read about the decades they spent spewing thousands of pounds of benzene into the air before the whole fucking thing blew up, using their influence to avoid addressing any of it, and how they didn’t have automatic valves, spill detection, fire detection, or sprinklers… in 2019.)

- Grocery store and restaurant chains disallowing cashiers from wearing masks during the first pandemic wave, well after we knew the necessity, because it made customers uncomfortable.

- Boar’s Head Liverwurst

And, you know, plenty more. As someone who grew up playing in an unmarked, illegal, not-access-controlled toxic waste dump in a residential area owned by a huge international chemical conglomerate— and just had some cancer taken out of me last year— I’m pretty familiar with the various ways corporations are willing to sacrifice health and safety to bump up their profit margin. I guess ignoring that kids were obviously playing in a swamp of toluene, PCBs, waste firefighting chemicals, and all sorts of other things, on a plot not even within sight of the factory in the middle of a bunch of small farms, was just the cost of doing business.

As was my friend who, when he was in vocational high school, was welding a metal ladder above a storage tank in a chemical factory across the state. The plant manager assured the school the tanks were empty, triple rinsed, and dry, but they exploded, blowing the roof off the factory and taking my friend with it. They were apparently full of waste chemicals and, IIRC, the manager admitted in court to knowing that. He said he remembers waking up briefly in the factory parking lot where he landed, and then the next thing he remembers is waking up in extreme pain, wearing the compression gear he’d have to wear into his mid-twenties to keep his grafted skin on. Briefly looking into the topic will show how common this sort of malfeasance is in manufacturing.

The burden of proof is on people saying that they won’t act like the rest of American industry tasked with safety.


If you don't have laws against dumping in the commons, yes, people will dump. I don't think anyone would dispute that notion. But if the laws account for the external cost of non-market activity like dumping pollution in the commons, then by all indications markets produce rapid improvements in quality of life.

Just look back over the last 200 years, per capita GDP has grown 30 fold, life expectancy has rapidly grown, infant mortality has decreased from 40% to less than 1%. I can go on and on. All of this is really owing to rising productivity and lower poverty, and that in turn is a result of the primarily market-based process of people meeting each other's needs through profit-motivated investment, bargain hunting, and information dispersal through decentralized human networks (which produce firm and product reputations).

As for masks, the gold standard in scientific reviews, the Cochrane Library, did a meta-review on masks and COVID, and the author of the study concluded:

"it's more likely than not that they don't work"

https://edition.cnn.com/videos/health/2023/09/09/smr-author-...

The potential harm of extensive masking is not well-studied.

They may contribute to the increased social isolation and lower frequency of exercise that led to a massive spike in obesity in children during the COVID hysteria era.

And they are harmful to the development of the doctor-patient relationship:

https://ncbi.nlm.nih.gov/pmc/articles/PMC3879648/

Which does not portend well for other kinds of human relationships.


> If you don't have laws against dumping in the commons, yes people will dump.

You can’t possibly say, in good faith, that you think this was legal, can you? Of course it wasn’t. It was totally legal to discharge some of the less odious things into the river, despite it running through a residential neighborhood about 500 feet downstream— the EPA permitted that, and while they far exceeded their allotted amounts, that was far less of a crime. Though it was funny to see one kid in my class who lived in that neighborhood right next to the factory ask a scientist they sent to give a presentation in our second grade class why the snow in their back yard was purple near the pond (one thing they made was synthetic clothing dye.) People used to lament runaway dogs returning home rainbow colored. That was totally legal.

However, this huge international chemical conglomerate with a huge US presence routinely, secretively, and consistently broke the law by dumping carcinogenic, toxic, and ecologically disastrous chemicals there, and at three other locations, in the middle of the night. Sometimes when we played there, any of the stuff we left lying around was moved to the edges and there were fresh bulldozer tracks in the morning, and we just thought it was from farm equipment. All of it was in residential neighborhoods without so much as a no trespassing sign posted, let alone a chain link fence, for decades, until the 90s, because they were trimming their bill for the legal and readily available disposal services they primarily used, and of course signs and chain link fences would have raised questions. They correctly gauged that they could trade our health for their profit: the penalties and superfund project cost were a tiny pittance of what that factory made them in that time.

Our incident was so common it didn’t make the news, unlike in Holbrook, MA, where a chemical company ignored the neighborhood kids constantly playing in old metal drums in a field near the factory which contained things like hexavalent chromium, to expected results. The company’s penalty? Well, they have to fund the cleanup. All the kids and moms that died? Well… boy, look at the great products that chemical factory made possible! Speaking of which:

> Just look back over the last 200 years, per…

Irrelevant “I heart capitalism” screed that doesn’t refute a single thing I said. You can’t ignore bad things people, institutions, and societies do because they weren’t bad to everybody. The Catholic priests that serially molested children probably each had a dossier of kind, generous, and selfless ways they benefited their community. The church that protected and enabled them does an incredible amount of humanitarian work around the world. Doesn’t matter.

> Masks

Come on now. Those business leaders had balls, but none of them were crystal. What someone said in 2023 has no bearing on what businesses did in 2020 based on the best available science at the time, or on their motivations for doing it. Just like you can’t call businesses unethical for exposing their workers to friable asbestos when medicine generally thought it was safe, you can’t call businesses ethical for refusing to let their workers protect themselves— on their own dime, no less— when medicine largely considered it unsafe.

Your responses to those two things in that gigantic pile of corporate malfeasance don’t really challenge anything I said.


>You can’t possibly say, in good faith, that it think this was legal, can you? Of course it wasn’t. It was totally legal discharging some of the less odious things into the river despite going through a residential neighborhood about 500 feet downstream

That is exactly my point. Nobody would dispute that bad things would happen if you don't have laws against dumping pollution in the commons and enforce those laws.

>Doesn’t matter.

It does matter when we're trying to compare the overall effect of various economic systems. Like the anti-capitalist one versus the capitalist one.

>What someone said in 2023 has no bearing on what businesses did in 2020 based on the best available science and their motivations for doing it.

Well that's an entirely different argument than you were making earlier. There was no evidence that masks outside of a hospital setting were a critical health necessity in 2021 and the intuition against allowing them for customer-facing employees proved sound in 2023 when comprehensive studies showed no health benefit from wearing them.


> exactly my point

Ok, so you’re saying that because bad things would happen anyway then it doesn’t matter if it’s illegal? So you’re just going to ignore how much worse it would be if there were just no laws at all? Corporate scumbags will push any system to its limit and beyond, and if you change the limit, they’ll change the push. Just look at the milk industry in New York City before food adulteration laws took effect. The “bad things will happen anyway” argument makes total sense if you ignore magnitude. Which you can’t.

> anti capitalist

If you think pointing out the likelihood of corporate misbehavior is anti-capitalist, you’re getting your subjects confused.

> 2021

Anywhere else you want to move those goalposts?


I'm saying that under any political ideology or philosophy, those things would be illegal and effectively enforced. So this is not a failing of any particular ideology, this is just a human failing showing how it's difficult to enforce complex laws in a complex world.

I think what you're promoting is anti-capitalism, meaning the belief that imposing heavy restrictions, beyond simply laws against dumping on the commons, is going to make us better off. That totally discounts the enormous positive effect that private enterprise has on society, the incredible harm that can be done through crude attempts to regiment human behavior, and the corruption that such attempts can breed in the government bureaucracy.

See, "everything I want to do is illegal" for the flip side of this, where attempts to stop private sector abuse lead to tyranny:

https://web.archive.org/web/20120402151729/http://www.mindfu...

As for the company mask policies, those began to change in 2021 mostly, not 2020.


Like with Cruise. One freak accident and they practically decided to go out of business. Oh wait...


If that’s the only data point you look at in American industry, it would be pretty encouraging. I mean, surely they’d have done the same if they were a branch of a large publicly traded company with a big high-production product pipeline…


> nobody knows why

But we do know the culpability rests on the shoulders of the humans who decided the tech was ready for work.


Hey look, it's almost like we're back at the end of the First Industrial Revolution (~1850), as society grapples with how to create happiness in a rapidly shifting economy of supply and demand, especially for labor. https://en.m.wikipedia.org/wiki/Utilitarianism#John_Stuart_M...

Pretty bloody time for labor though. https://en.m.wikipedia.org/wiki/Haymarket_affair


Wait, why would we want 10x more traffic fatalities?


We wouldn't, that's their point.


Every statistic I've seen indicates much better accident rates for self-driving cars than human drivers. I've taken Waymo rides in SF and felt perfectly safe. I've taken Lyft and Uber and especially taxi rides where I felt much less safe. So I would definitely take the self-driving car. Just because I don't understand an accident doesn't make it more likely to happen.

The one minor risk I see is the car being too polite and getting effectively stuck in dense traffic. That's a nuisance though.

Is there something about NYC traffic I'm missing?


There's one important part about risk management though. If your Waymo does crash, the company is liable for it, and there's no one to shift the blame onto. If a human driver crashes, that's who you can shift liability onto.

Same with any company that employs AI agents. Sure, they can work 24/7, but the company (or the AI seller) will be liable for every mistake they make. With humans, their fraud, their cheating, their deception can all be wiped off the company and onto the individual.


The next step is going to be around liability insurance for AI agents.

That's literally the point of liability insurance -- to allow the routine use of technologies that rarely (but catastrophically) fail, by amortizing risk over time / population.
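As a rough illustration of how that amortization works: frequency times severity gives the expected loss per insured party, and a premium spreads that cost over the whole pool. All figures below are invented for the example:

```python
# Hypothetical numbers, purely illustrative: a rare but costly failure mode.
failure_prob = 1e-4        # assumed chance an AI agent causes a claim in a year
claim_cost = 2_000_000.0   # assumed cost of one catastrophic failure, in dollars
pool_size = 50_000         # number of insured agents sharing the risk

# Expected loss per agent per year: frequency x severity.
expected_loss = failure_prob * claim_cost  # $200

# A (simplified) premium adds a loading factor for overhead and profit.
loading = 0.25
premium = expected_loss * (1 + loading)    # $250

# For any single agent the outcome is all-or-nothing; across a large pool,
# the per-agent loss concentrates near its expectation (law of large numbers),
# which is what makes a rare catastrophic risk routinely insurable.
```

The point of the sketch is that an individually ruinous $2M failure becomes a predictable $250/year line item once pooled, which is exactly the mechanism that lets rarely-failing technologies be used routinely.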


Potentially. I would be skeptical that businesses can do this to shield themselves from the liability. For example, VW could not use insurance to protect them from their emissions scandal. There are thresholds (fraud, etc.) that AI can breach, which I don't think insurance can legally protect you from


Not in the sense of protection, but in the sense of financial coverage.

Claims are still made; liability insurance pays them.


Sure, but that's unrelated to the question, which was whether one would feel comfortable taking a self-driving car in NYC.


It is amazing to me that we have reached an era where we are debating the trade-off of hiring thinking machines!

I mean, this is an incredible moment from that standpoint.

Regarding the topic at hand, I think that there will always be room for humans for the reasons you listed.

But even replacing 5% of humans with AIs will have mind-boggling consequences.

I think you're right that there are jobs for which humans will be preferred for quite some time.

But I'm already using AI with success where I would previously have hired a human, and this is at its current primitive stage.

With the leaps we are seeing, AI is coming for jobs.

Your concerns relate to exactly how many jobs.

And only time will tell.

But I think some meaningful percentage of the population -- even if just 5% of humanity -- will be replaced by AI.


Isn't everybody in NYC already? (The dangers of bad driving are much higher for pedestrians than for people in cars; there are more of the former than of the latter in NYC; I'd expect there to be a non-zero number of fully self driving cars already in the city.)


That doesn't answer my question.


It does, in a way; AI is already there, all around you, whether you like it or not. Technological progress is Pandora’s box; you can’t take it back or slow it down. Businesses will use AI for critical workflows, and all good that they bring, and all bad too, will happen.


How about you answer my question since he did not.

Would you trust your life to a self-driving car in New York City traffic?


GP got it exactly right: I already am. There's no way for me to opt out of having self-driving cars on the streets I regularly cross as a pedestrian.


Do you live in a dense city like New York City or San Francisco? Or places with less urban sprawl that are much easier for self-driving cars to navigate?

Also you still haven't answered my question.

Would you get in a self-driving car in a dense urban environment such as New York City? I'm not asking if such vehicles exist on the road.

And related questions: Would you get in one such car if you had alternatives? Would you opt to be in such a car instead of one driven by a person or by yourself?


> Would you get in a self-driving car in a dense urban environment such as New York City? [...] Would you get in one such car if you had alternatives?

I fortunately do have alternatives and accordingly mostly don't take cars at all.

But given the necessity/opportunity: Definitely. Being in a car, even (or especially) with a dubious driver, is much safer (at NYC traffic speeds) than being a pedestrian sharing the road with it.

And that's my entire point: Self-driving cars, like cars in general, are potentially a much larger danger to others (cyclists, pedestrians) than they are to their passengers.

That said, I don't especially distrust the self-driving kind – I've tried Waymo before and felt like it handled tricky situations at least as well as some Uber or Lyft drivers I've had before. They seem to have a lot more precision equipment than camera-only based Teslas, though.


Yes? I’ve taken many, many Waymos in SF. Perfectly happy trusting my life to them. I have alternatives (Uber) and I pick self-driving. Are you up to date on how many rides they’ve done in SF now? I am not unusual.


I would


If there are any fully autonomous cars on the streets of NYC, there aren’t many of them, and I don’t think there’s any way for them to operate legally. There has been discussion about having a trial.


It depends on what the risk is. Would it be whole or in part? In an organisation, a failure by someone in HR might present an isolated departmental risk, while with an AI that might not be the case.


We can just insulate businesses employing AI from any liability, problem solved.


"Well, our AI that was specifically designed for maximising gains above all else may indeed have instructed the workers to cut down the entire Amazon forest for short-term gains in furniture production." But no human was involved in the decision, so nobody is liable and everything is golden? Is that the future you would like to live in?


Apparently I need to work on my deadpan delivery.

Or just articulate things openly: we already insulate business owners from liability because we think it tunes investment incentives, and in so doing have created social entities/corporate "persons"/a kind of AI who have different incentives than most human beings but are driving important social decisions. And they've supported some astonishing cooperation, which has helped produce things like the infrastructure on which we are having this conversation! But we also have existing AIs of this kind who are already inclined to cut down the entire Amazon forest for furniture production, because it maximizes their function.

That's not just the future we live in, that's the world we've been living in for a century or few. On one hand, industrial productivity benefits, on the other hand, it values human life and the ecology we depend on about like any other industrial input. Yet many people in the world's premier (former?) democracy repeat enthusiastic endorsements of this philosophy reducing their personal skin to little more than an industrial input: "run the government like a business."

Unless people change, we are very much on track to create a world where these dynamics (among others) of the human condition are greatly magnified by all kinds of automation technology, including AI. Probably starting with limited liability for AIs and the companies employing them, possibly even statutory limits, though it's much more likely that wealthy businesses will simply be insulated by the sheer resources they have to make sure the courts can't hold them accountable, even where we still have a judicial system that isn't willing to play calvinball for cash or catechism (which, unfortunately, does not seem to include a supreme court majority).

In short, you and I probably agree that liability for AI is important, and limited liability for it isn't good. Perhaps I am too skeptical that we can pull this off, and being optimistic would serve everyone better.


Hmmm, how much stock do I own in this hypothetical company? (/s, kinda)


I guess -- yes, from a business & liability sense? ”This service you are now paying $100 for? We can sell it to you for $5, but with the caveat _we give no guarantees that it works or is fit for purpose_ - click here to accept”.


Haha, they’d just continue selling it for $100 then change the TOS on page 50 to say the same thing.


Deterministic they may be, but unforeseeable for humans.



