Hacker News

It's a bit like all the arbitrary account terminations we hear about with Google. A machine makes a decision with virtually no recourse for the user. When governments are applying this same concept to welfare, it does indeed create a dystopian scenario where people are left to starve because some algorithm got it wrong.

In essence, it's about cost cutting. Governments seem to think that machines can replace humans in making decisions where nuance is involved and the stakes are extremely high. It is frightening and should not be accepted. The furthest automation should go is to flag irregularities for review. Instead, the machines are given far too much autonomy, with Australia's robodebt scheme being a scary example.




A human isn't necessarily less arbitrary; in fact, humans can be much worse.

You (and the Guardian, it seems) are falling for the fallacy of the danger of automation - "it's acceptable for bad decisions to be made as long as it's been decided by a human".

Fear mongering is not how you prove/disprove automation; you should pick a useful metric (e.g. are people starving?) and benchmark against that - the same way you would measure whether a department comprised of humans was doing the same job.


> You (and the Guardian, it seems) are falling for the fallacy of the danger of automation - "it's acceptable for bad decisions to be made as long as it's been decided by a human".

Not necessarily. Even bureaucratic systems tend to recognize that humans make imperfect decisions -- we're susceptible to everything from bribery to exhaustion. These systems then tend to not treat first decisions as final, and they allow an opportunity to appeal.

But what happens when the decision is made by an algorithm, especially one that wasn't built to give an explicable reason?

> Fear mongering is not how you prove/disprove automation, you should pick a useful metric (e.g. are people starving?)

That's not a good metric.

Suppose the algorithm were in fact perfect and nobody starved -- except you. For some reason, it couldn't recognize your ID (and only your ID). By the "are people starving" metric, the algorithm would be doing a fantastic job compared to an imperfect human system, but it would also be profoundly unjust.


> Even bureaucratic systems tend to recognize that humans make imperfect decisions -- we're susceptible to everything from bribery to exhaustion. These systems then tend to not treat first decisions as final, and they allow an opportunity to appeal.

So the problem is not automation, it's lack of recourse.

Lack of recourse is a real problem that already exists today. People do go unserved because they don't know how to navigate bureaucratic structures and don't have money to pay a lawyer. It also most affects the countries the Guardian article describes as "automating poverty".

> For some reason, it couldn't recognize your ID (and only your ID). By the "are people starving" metric, the algorithm

This is not an algorithmic issue.

If you lost your ID, you wouldn't get your benefit even if you walked into a social security branch either.


> So the problem is not automation, it's lack of recourse.

The problem is automation, if it exaggerates the lack of recourse. It means that, without care, adding automation would screw up most such systems. Hence articles like this, to raise awareness and make sure automation is done well. I'm disappointed with what a hard time many on HN seem to have accepting the need for caution. Especially with government, where you cannot just find or build a better alternative on the open market.

> If you lost your ID, you wouldn't get your benefit even if you walked into a social security branch either.

Is that actually the case in the US? The programs (run by the church and heavily supported by the German government) I've come in contact with certainly did not require one. I was never at a soup kitchen, but I'd be surprised if they asked for IDs.


I don't think you need to convince anyone here that automated systems fail. I think many of us have been affected by such systems: accounts blocked for no reason, glitches, and so on.

The problem is not the automated systems. The problem is zero recourse.

If nothing were automated, we'd have to wait hours on the phone to get to an actual human to do something trivial. Or make an appointment, travel somewhere, and spend a few hours to get something simple done.

In an automated world most of the time things work out.

The situation becomes despicable where there's no one to complain to and no way to seek redress.

In the tech world, it has become pretty common to seek support via Twitter shaming. Often there's no other route.

Still, it's not the automation that is at fault. It's the greedy humans. They could save 80% of the cost while keeping everyone happy, but they'll choose to save 90% of the cost at the price of making many miserable.

Can't change human nature?


In the US, "social security branch" is not a soup kitchen. It's a branch office of the Social Security Administration, a government department which provides monetary benefits. They can't sign you up for benefits if you don't have an ID.

However, the human beings who work there could give you advice on where and how to get an ID - which might differ depending on your individual circumstances. A computer probably wouldn't give you any more information than you could read on the internet, and if that info doesn't make sense or doesn't apply to you, you're going to have trouble finding better information.


It's a matter of incentives. Without automation, someone can be on the hook if things go wrong, so they have an incentive to be right, even if they are flawed and make lots of mistakes. With automation, you spread responsibility out, and things aren't so clear.

Automation can be better, more accurate, and in general kinder, but it can also be more profitable, efficient, uncaring, and cruel, and when it's pointed out, the people at fault can avoid blame far more easily.


One of the problems with the nature of macroeconomic / statistical thinking is that it removes any humanity from the decision, because we focus on just numbers. Anytime we discuss policy as citizens, we really need to keep in mind that every 1 in a number represents an actual individual with an actual family and life and life experiences.

An example is raising the Social Security age from 65 to 67. How many lives did that simple change affect? What repercussions did it have on each individual's family? We don't really think about it; we just think about its effects on the bottom line.

"One death is a tragedy; a million is a statistic" -Joseph Stalin (1879-1953).





