Actually yes, I think stuff like this makes programming hard. A half-assed implementation of "class" that doesn't behave like a class brings unnecessary confusion. Programming in the real world is full of these details that you have to know to be productive. 0.1 + 0.2 = 0.30000000000000004 in many languages is another one.
(And semicolons are ugly and I avoid them wherever I can get away with it, but no, they are probably not the reason.)
I agree that the JS implementation of "class" is bolted on and obscures the underlying prototypal inheritance, and that this kind of thing makes programming harder. I wish JS had leaned more into the theory of prototypes, possibly discovering new ideas there, instead of pretending it uses the same inheritance scheme as other languages (although perhaps we should have expected that from a language whose literal name came from bandwagoning on Java). The way to reduce this difficulty is by making better programming languages, by improving the underlying theory of programming language design, software engineering, etc. Cleaner, purer languages, closer to the math (math being the study of self-consistent systems). This is the opposite direction from "low code". It's more like "high code". Low code is chock full of this kind of poorly thought-out, bolted-on, leaky, inconsistent abstraction, because its entire point is to eschew ivory-tower theory; it avoids the math, and so becomes internally inconsistent and full of extraneous complexity.
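To make the bolting-on concrete, here's a rough sketch of what the `class` syntax papers over (the real desugaring has more edge cases, e.g. non-enumerable methods and `new`-only constructors):

```js
// The class syntax...
class Dog {
  constructor(name) {
    this.name = name;
  }
  speak() {
    return `${this.name} says woof`;
  }
}

// ...is roughly sugar for a constructor function plus a method
// hung on its prototype object.
function ProtoDog(name) {
  this.name = name;
}
ProtoDog.prototype.speak = function () {
  return `${this.name} says woof`;
};

// Either way it's still prototypes underneath:
console.log(typeof Dog);                                              // "function"
console.log(Object.getPrototypeOf(new Dog("Rex")) === Dog.prototype); // true
console.log(new ProtoDog("Rex").speak());                             // "Rex says woof"
```

The syntax reads like a class from Java or C#, but the semantics are still a prototype chain you can mutate at runtime, which is exactly the kind of half-match that confuses people.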
I also agree that 0.1 + 0.2 != 0.3 is another thing that makes programming hard. This is intrinsic complexity, because it is a fundamental limitation in how all computers work. The way around this is -- you guessed it -- better programming languages, that help you "fall into the pit of success". Perhaps floating point equality comparisons should even be a compiler error. Again, low-code goes the opposite direction, by simply pretending this kind of fundamental complexity doesn't exist. You are given no power to avoid it biting you nor to figure out what's going on when it does. Low-code's entire premise is that you shouldn't need to understand how computers work in order to program them, but of course understanding how floating-point numbers are represented is exactly how you avoid this issue.
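And to spell out what I mean by the pit of success: today, every programmer has to know to write something like this themselves (the tolerance heuristic below is my own illustration, not a standard):

```js
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// A "close enough" comparison instead of ===. Number.EPSILON is the gap
// between 1 and the next representable double; scaling it like this is a
// common heuristic, not a universal fix.
function approxEqual(a, b, tol = Number.EPSILON * 4) {
  return Math.abs(a - b) <= tol * Math.max(1, Math.abs(a), Math.abs(b));
}

console.log(approxEqual(0.1 + 0.2, 0.3)); // true
```

A language built for the pit of success could make something like this the default meaning of float equality, or refuse to compile a bare `===` on floats at all.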
I think it is pessimistic to say that number precision is a problem fundamental to computing. The bitter lesson gives me hope that someday no one will have to care about non-arbitrary-precision math. A great platform could simplify programming to that point.
I suspect that if you dive deeply into arbitrary-precision math (although I don't mean to assume you haven't), you'll probably find that a programming language that supports such a thing forces quite a bit more thought into exactly what numbers are and how they work. Arbitrarily precise arithmetic is deeply related to computability theory and even the fundamental nature of math (e.g. constructivism). A language that tried to ignore this connection would fail as soon as someone tried to compare (Pi / 2) * 2 == Pi; such a comparison would run out of memory on all finite computers. In fact it's not clear that such a language could support Pi or the exponential function at all.
A language that was built around the philosophy of constructivist math in order to allow arbitrary precision arithmetic would basically treat every number as a function that takes a desired precision and returns an approximation to within that precision, or something very similar to that. All numbers are constructed up to the precision they're needed, when they're needed. But it would still not be able to evaluate whether (Pi / 2) * 2 == Pi exactly in finite time -- you could only ask if they were equal up to some number of digits (arbitrarily large, but at a computational cost). If you calculate some complex value involving exponentials and cosines and transcendentals using floating point, you can just store the result and pass it off to others to use. If you do it with arbitrary precision, you never can, unless you know ahead of time the precision that they're going to need. There are no numbers: only functions. You could probably even come up with a number that suddenly fails at the 900th digit, which works perfectly fine until someone compares it to a transcendental in a completely different part of the software and it blows up.
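Here's a toy sketch of that idea (my own illustration, not any particular exact-real library): a "number" is a function from a digit count to a scaled BigInt approximation, and equality can only ever be asked up to some precision.

```js
// A "real number" is a function: digits -> BigInt approximation of x * 10^digits.
// Toy constructive reals; a serious implementation would track error bounds
// and carry guard digits through every operation.

// pi to `digits` decimal digits via Machin's formula:
// pi = 16*arctan(1/5) - 4*arctan(1/239)
const arctanInv = (x, digits) => {
  const scale = 10n ** BigInt(digits);
  let sum = 0n;
  let term = scale / x; // (1/x), scaled
  for (let k = 0n; term !== 0n; k += 1n) {
    sum += (k % 2n === 0n ? term : -term) / (2n * k + 1n);
    term /= x * x;
  }
  return sum;
};

const pi = (digits) => {
  const guard = 10; // extra digits to absorb truncation error
  const p = 16n * arctanInv(5n, digits + guard) - 4n * arctanInv(239n, digits + guard);
  return p / 10n ** BigInt(guard);
};

const half = (x) => (digits) => x(digits) / 2n;    // loses up to half a unit...
const double = (x) => (digits) => x(digits) * 2n;  // ...which doubling exposes

// There is no exact ==, only "equal to within the last digit, at this precision":
const equalUpTo = (x, y, digits) => {
  const diff = x(digits) - y(digits);
  return -1n <= diff && diff <= 1n;
};

console.log(pi(30).toString());                   // 3141592653589793238462643383279
console.log(equalUpTo(double(half(pi)), pi, 50)); // true -- but only up to 50 digits
```

Note that even `(Pi / 2) * 2 == Pi` only ever comes back "true so far"; if you need more digits later, you have to recompute everything that fed into it.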
This does not sound like it's simplifying anything. Genuinely, a healthily sized floating point is the simplest way to represent non-integer math; this is why Excel, many programming languages, and most science and engineering software use it as their only (non-integer) number format. It's actually hard to come up with a situation where arbitrary precision is what the users need; if it really seems like you do need it, then you probably want a symbolic math package like Mathematica/Wolfram Alpha (or MATLAB's symbolic toolbox) instead.
I'm sorry, but 0.1 + 0.2 != 0.3 is fundamental. It creates difficulty, but you are not capable of doing math on a computer if you don't understand it and why it happens, even if your environment uses decimals, rationals, or whatever.
SQL's `numeric` type makes the right choice here, putting the problem right at the front so you can't ignore it.
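To illustrate with a toy (scaled BigInts standing in for a fixed-scale decimal like `numeric(p, s)`; my own sketch, not how any particular database implements it): changing the representation changes which values are exact, but it doesn't make rounding go away.

```js
// Fixed-scale decimal arithmetic mimicked with BigInt: value = units / 10^4.
const SCALE = 4;
const TEN = 10n ** BigInt(SCALE);
const dec = (x) => BigInt(Math.round(x * 10 ** SCALE)); // toy literal parser
const add = (a, b) => a + b;
const mul = (a, b) => (a * b) / TEN;  // truncating here; a real NUMERIC rounds, but precision is lost either way
const div = (a, b) => (a * TEN) / b;

// Decimals do fix the famous one:
console.log(add(dec(0.1), dec(0.2)) === dec(0.3)); // true

// ...but division still has to lose information somewhere, just like floats:
const oneThird = div(dec(1), dec(3));              // 0.3333
console.log(mul(oneThird, dec(3)) === dec(1));     // false -- 0.9999 != 1.0000
```

That's why I think `numeric` is honest: you declare precision and scale up front, so the loss of precision is a decision you made instead of a surprise at the 17th digit.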
That said, I completely agree with your main point. Modern software development is almost completely made of unnecessary complexity.