
How do you know requirements or deadlines were a contributing factor here?

It's a very common cop-out among engineers/technical folks, when something goes wrong somewhere else, to blame it on management/deadlines/customers/etc.

Sometimes people make mistakes, sometimes people are incompetent, sometimes managers suck, sometimes it's a multi-layered issue.


Some human errors are bound to happen; what is often missing are procedures to minimize them, catch them, or prevent them from having catastrophic effects.

It's not even about being technical. Have a person put data into a spreadsheet and you can get so many errors if the procedure for doing it is bad.


> It's a very common cop-out among engineers/technical folks, when something goes wrong somewhere else, to blame it on management/deadlines/customers/etc.

Look, I've been an executive and a management consultant for a long time now (I started as a sysadmin and developer), and it's quite often the case that everything was late (decisions, budgets, approvals, every other dependency), but for some reason it's OK that the planned 4 months of development is now compressed into 2.5 months.

I have been involved, to one degree or another, in probably close to 300 moderately to highly complex tech projects over the last 30 years. That's an extremely conservative number.

And the example I describe above is probably true for 85% of them.


Because in 99% of cases it's “when we succeed it's a team effort, when we fail it's the engineers”.

It feels like the exact opposite story when I read engineers commenting online.

Engineers aren't shy about eviscerating each other's work when mistakes are made; sometimes they're too eager, frankly.

Whole courses are built around forensically dissecting every error in major systems. Entire advanced fields are written in blood.

You probably don't hear about it often because the analysis is too dense and technical to go viral.

At the same time, there's a serious cultural problem: technical expertise is often missing from positions of authority, and even the experts we do have are often too narrow to bridge the levels of complexity modern systems demand.


Oh come on, let's be honest. All of us have colleagues who we don't trust with mission critical stuff.

Your manager should also know that though.

Yes, but that's not the point OP was making here.

How do we know that it was just a "human error" as the article/headline implies?

Answer: we do not know either, but this is the standard response, so that companies (or governments, or whoever this concerns) are absolved of any responsibility. In general, it is not possible to know for a specific case until a proper investigation is carried out (which rarely happens). But most of the time, experience says it is company policies/procedures that either create circumstances that make such errors more probable, or simply allow them to happen too easily for lack of guardrails. And usually that is due to a push for "shipping products/features fast" or similar, with little concern for anything else.

It could be a different case here, but seeing that this is about Oracle, and keeping in mind that Oracle is really bad at taking accountability for anything going wrong, I doubt it (very recently they denied, until the very last moment, a hack of their systems and the leak of data from companies that use them).



