Because some human errors are bound to happen, what is often missing are procedures to minimize them, catch them, or prevent them from having catastrophic effects.
It is not even about being technical. Have a person enter data into a spreadsheet and you can run into all sorts of errors if the procedure for doing so is bad.
> It's a very common cop out by engineers/technical folks when something goes wrong somewhere else, to blame it on management/deadlines/customers/etc.
Look, I've been an executive and a management consultant for a long time now (started as a sysadmin and developer), and it's quite often the case that everything was late (decisions, budgets, approvals, every other dependency), but for some reason it's ok that the planned 4 months of development is now compressed into 2.5 months.
I have been involved, to one degree or another, in probably close to 300 moderately to highly complex tech projects over the last 30 years. That's an extremely conservative number.
And the example I describe above is probably true for 85% of them.
Engineers aren't shy about eviscerating each other's work when mistakes are made—sometimes too eager, frankly.
Whole courses are built around forensically dissecting every error in major systems. Entire advanced fields are written in blood.
You probably don't hear about it often because the analysis is too dense and technical to go viral.
At the same time, there's a serious cultural problem: technical expertise is often missing from positions of authority, and even the experts we do have are often too narrow to bridge the levels of complexity modern systems demand.
How do we know that it was just a "human error" as the article/headline implies?
Answer: we do not know either, but this is the standard response so that companies (or governments, or whoever this concerns) are absolved of any responsibility. In general, it is not possible to know for a specific case until a proper investigation is carried out (which is rarely the case). But most of the time, experience says that it is company policies/procedures that either create circumstances that make such errors more probable, or simply allow them to happen too easily due to a lack of guardrails. And usually that comes from a push to "ship products/features fast" or similar, with little concern for anything else.
It could be a different case here, but seeing that this is about Oracle, and keeping in mind that Oracle is really bad at taking accountability for anything going wrong, I doubt it (very recently they were denying a hack of their systems, and the leak of data from companies that use them, until the very last moment).