So you say we don't need OOP because you can implement a half-baked (non-type-safe) version of your favorite OOP feature yourself in your favorite Turing-complete language? While omitting the key concept of virtual functions.
"OOP" is a philosophy more than specific implementation details. I've programmed "object orientedly" in any number of languages, many of those with no object orientation helping syntax, like C, assembly, Forth (although that one's more malleable). It's always been about the mental model of your programming, anything else is syntax sugar.
All "you're not doing true OOP because $REASON" arguments are useless, repetitive, and discouraging to exploration, discovery, and learning.
I'm not sure where you got "we don't need" from, unless you consider any language feature to be unnecessary if there's a way to implement the concept without it. Which is not a very useful way of looking at language features! You can implement anything in assembly, that doesn't mean higher-level languages aren't needed.
I'm saying that OOP as a concept is simple enough that you can do the basics of it in pretty idiomatic and straightforward C, and all the other stuff that we associate with "OOP" is not really universal. And thus it's no surprise that discussions around OOP as a general concept tend to be vague and not very useful.
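To make that concrete, here's a minimal sketch of what I mean (the Shape/Circle names and numbers are made up purely for illustration): a struct bundling data with a function pointer gives you encapsulation and even dynamic dispatch, the same mechanism a vtable automates.

    #include <stdio.h>

    /* Data + behavior in a struct; a function pointer stands in for a
       virtual method and gives runtime dispatch. */
    typedef struct Shape Shape;
    struct Shape {
        double (*area)(const Shape *self);
    };

    typedef struct {
        Shape base;   /* "inherit" by embedding the base as first member */
        double r;
    } Circle;

    static double circle_area(const Shape *self) {
        const Circle *c = (const Circle *)self;
        return 3.141592653589793 * c->r * c->r;
    }

    int main(void) {
        Circle c = { { circle_area }, 2.0 };
        Shape *s = (Shape *)&c;            /* use through the base "interface" */
        printf("area = %f\n", s->area(s)); /* dynamic dispatch */
        return 0;
    }

Embedding the base struct as the first member is what makes the pointer cast well-defined; it's also how real C codebases like GObject structure their "inheritance".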
Why do you seem to feel that OOP defines a specific set of features, for instance virtual functions?
There are many different OOP styles out there. All one needs to do is look at the languages that originated the concept, such as Simula and Smalltalk, to see that they weren't channeling the exact same ideas.
Note that income is not the definition of rich; it's just its derivative. You may earn well, but that doesn't (necessarily) mean you possess much. Most wealth is inherited nowadays.
It took me years after learning the Kalman filter as a student to actually understand the covariance update intuitively. Most learning sources (including the OP) just mechanically go through the computation of the a-posteriori covariance, but don't offer any intuition beyond "this is the result of multiplying two Gaussians", if anything at all.
Figured I can save you a click and put the main point here, as few people will be interested in the rest:
The Kalman filter adds the precision (inverse of the covariance) of the measurement and the precision of the predicted state to obtain the precision of the corrected state. To do so, the respective covariance matrices are first inverted to obtain precision matrices. To get both into the same space, the measurement precision is projected into state space using the measurement matrix H (the term H^T R^-1 H, so the update reads P_post^-1 = P_pred^-1 + H^T R^-1 H). The resulting sum is converted back to a covariance matrix by inverting it.
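In code, with toy numbers (2D state, 1D measurement; all values are made up for illustration), this precision view and the textbook (I - KH)P update give the same corrected covariance:

    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* Closed-form inverse of a 2x2 matrix. */
    static void inv2(double a[2][2], double out[2][2]) {
        double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
        out[0][0] =  a[1][1] / det;
        out[0][1] = -a[0][1] / det;
        out[1][0] = -a[1][0] / det;
        out[1][1] =  a[0][0] / det;
    }

    int main(void) {
        double P[2][2] = { { 2.0, 0.5 }, { 0.5, 1.0 } }; /* predicted covariance */
        double h[2] = { 1.0, 0.0 };                      /* measurement matrix H (1x2) */
        double r = 0.25;                                 /* measurement variance R */

        /* Precision view: P_post^-1 = P^-1 + H^T R^-1 H, then invert back. */
        double Pinv[2][2], info[2][2], P_info[2][2];
        inv2(P, Pinv);
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                info[i][j] = Pinv[i][j] + h[i] * h[j] / r; /* H^T R^-1 H */
        inv2(info, P_info);

        /* Textbook view: K = P H^T / (H P H^T + R), P_post = (I - K H) P. */
        double Ph[2] = { P[0][0] * h[0] + P[0][1] * h[1],
                         P[1][0] * h[0] + P[1][1] * h[1] };
        double s = h[0] * Ph[0] + h[1] * Ph[1] + r;
        double K[2] = { Ph[0] / s, Ph[1] / s };
        double P_std[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                P_std[i][j] = P[i][j] - K[i] * (h[0] * P[0][j] + h[1] * P[1][j]);

        /* Both routes produce the same corrected covariance. */
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                assert(fabs(P_info[i][j] - P_std[i][j]) < 1e-9);
        printf("corrected variance of state[0]: %f\n", P_info[0][0]);
        return 0;
    }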
I am confused. Is that Epson scanner specifically for photos, or did you just use a regular document scanner and the document feeder happened to work with photos? Or did you actually do it manually?
I could have sworn I mentioned it was a photo scanner, specifically, but I didn't. Luckily I mentioned it elsewhere recently and it was in my browser history. I have 0 complaints for what I used it for, which was to immediately and finally back up all of my family photos.
Depends on what you want to learn.
The SLAM frontend (computing motion information from sensors) offers a lot of variety through the chosen combination of sensors (wheel odometry, IMU, mono/stereo/multi-camera, lidar, radar, sonar, to name a few).
At least for vision, deep learning should be very useful. For the others, I have no experience with how relevant machine learning is. Geometry- and physics-based methods should work well here, but there is probably much room to tack on some ML.
The SLAM backend (optimization) is mostly old-school methods like nonlinear least squares optimization or particle filters. Not sure if that counts as ML today.
I'd go for g2o or Ceres for mapping (unless you expect to have no loop closures), as there's really no need to reinvent that. It's definitely useful to learn about the backend, but usually the combination of sensors and their properties will demand more algorithmic tailoring than the backend, which gets more abstract input (i.e. motions & uncertainties) and can be used in a more black-box fashion.
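To show how small the core idea of the backend really is, here's a toy 1D pose-graph sketch (all numbers made up; x0 anchored at 0) that builds the least-squares normal equations by hand. Real backends like g2o or Ceres do the same thing for nonlinear 2D/3D poses with sparse solvers.

    #include <stdio.h>

    int main(void) {
        /* Constraints as rows of A u = b, unknowns u = (x1, x2):
         *   x1 - x0 = 1.1   (odometry)
         *   x2 - x1 = 1.0   (odometry)
         *   x2 - x0 = 2.0   (loop closure; disagrees with summed odometry 2.1) */
        double A[3][2] = { { 1, 0 }, { -1, 1 }, { 0, 1 } };
        double b[3] = { 1.1, 1.0, 2.0 };

        /* Normal equations: N = A^T A (2x2), g = A^T b. */
        double N[2][2] = { { 0 } }, g[2] = { 0 };
        for (int k = 0; k < 3; k++) {
            for (int i = 0; i < 2; i++) {
                g[i] += A[k][i] * b[k];
                for (int j = 0; j < 2; j++)
                    N[i][j] += A[k][i] * A[k][j];
            }
        }

        /* Solve the 2x2 system in closed form. */
        double det = N[0][0] * N[1][1] - N[0][1] * N[1][0];
        double x1 = (N[1][1] * g[0] - N[0][1] * g[1]) / det;
        double x2 = (N[0][0] * g[1] - N[1][0] * g[0]) / det;

        /* The loop closure spreads the 0.1 drift across the trajectory. */
        printf("x1 = %f, x2 = %f\n", x1, x2); /* ~1.0667, ~2.0333 */
        return 0;
    }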
Interesting read, although I don't agree with everything. I like the distinction between different qualities of abstraction made at the beginning. The bashing of abstractions that follows is too generalized for my taste.
The best part comes close to the end:
> Asymmetry of abstraction costs
> There’s also a certain asymmetry to abstraction. The author of an abstraction enjoys its benefits immediately—it makes their code look cleaner, easier to write, more elegant, or perhaps more flexible. But the cost of maintaining that abstraction often falls on others: future developers, maintainers, and performance engineers who have to work with the code. They’re the ones who have to peel back the layers, trace the indirections, and make sense of how things fit together. They’re the ones paying the real cost of unnecessary abstraction.