Excellent advice on software development, based on something like 20 years of experience teaching software design at Stanford, IIRC. It's really good: lots of specific advice, and not very dogmatic.
Out of the Tar Pit changed how I look at everything.
Relational modeling can be the ultimate answer for wrangling complexity. It supports real-world messy things like circular dependencies without a hitch. You can achieve the same in OOP, but then go and try to serialize or persist an instance of such a thing... you will find a very unhappy variety of dragon lurking in that stack trace.
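A minimal Python sketch of that contrast (the names and schema here are invented for illustration): a naive object graph with a cycle blows up under json.dumps, while the same relationship expressed as relational-style rows with foreign keys serializes without complaint.

```python
import json

# OOP-flavored: two records that reference each other directly.
alice = {"name": "Alice"}
bob = {"name": "Bob", "friend": alice}
alice["friend"] = bob  # circular dependency

try:
    json.dumps(alice)  # naive persistence attempt
except ValueError as err:
    print(err)  # "Circular reference detected"

# Relational-flavored: rows with ids and foreign keys. The cycle is just data.
people = [
    {"id": 1, "name": "Alice", "friend_id": 2},
    {"id": 2, "name": "Bob", "friend_id": 1},
]
print(json.dumps(people))  # serializes without a hitch
```

The relational version survives because the cycle lives in the values (the foreign keys), not in the in-memory reference structure the serializer has to walk.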
If considering not just software, but engineering or systems overall, the best I've seen is Goldratt's Theory of Constraints (usually called TOC).
There are many resources (mostly books); they even created their own accounting system and simulation software. For an intro, the books The Goal (1, 2, and 3) are good: they're aimed at different markets, each looks at a different aspect of TOC, and they're easy to read.
"The Theory of Constraints International Certification Organization (TOCICO) is an independent not-for-profit incorporated society that sets exams to ensure a consistent standard of competence"
There are also a lot of (official) business organizations that provide support for implementing TOC best practices, and of course informal fan clubs, book lists, and lists of other supporting material (learning software, accounting software).
This is a manual solution: create system sequence diagrams of your ___domain. You'll learn a great deal in the process and have a visual heuristic to revisit when needed.
This is probably the most important thing I learned. Draw it out if it doesn't fit in your head. Keep your diagrams around and connect them; if they're made on a computer, copy and paste them to make changes. Experiment.
5. Finally, I'll cut myself off at lucky number five by pointing you toward Jürgen (You-Again!) Schmidhuber's fascinating webpage: https://people.idsia.ch/~juergen/
Honorable mention to Margaret Hamilton’s T-Maps and F-Maps. Use a hierarchical numbered outline.
Zoom in and out of the fractal network. Remember that your working memory holds seven (7) concepts, plus or minus two (2)... so a chunk size of around five (5) or less is ideal for cognitive accessibility.
I swear I got this recommendation here, but I can't remember who to credit and can't find the post.
Anywho, this is arguably the overarching reason Design Rules: The Power of Modularity[1] was written, at least at the abstract level you're asking about.
The book tells the story of the process that led to the creation of IBM's System/360, and it shows how a set of patterns and rules converged across parallel industry sectors. It was honestly one of the most interesting books I've ever read, and it's exactly about what you're asking. Highly recommend this book to virtually anyone on this board.
the concepts here are re: functional programming, but they apply to everything tbh.
whether your system is a single monolith or a bunch of microservices, the core ideas hold true: think about your system in the form of actions (mutations), calculations/computations (reads), and data as data.
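A tiny Python sketch of that three-way split (the names here are my own, purely illustrative): data is plain inert values, calculations are pure functions over them, and actions carry the side effects at the edge.

```python
# Data: plain, inert values.
cart = [{"item": "book", "price": 25.0}, {"item": "pen", "price": 3.0}]

# Calculation: pure function -- same input, same output, no side effects.
def total(cart, tax_rate=0.1):
    subtotal = sum(line["price"] for line in cart)
    return round(subtotal * (1 + tax_rate), 2)

# Action: the side effect (IO), kept at the edge and fed by the calculation.
def print_receipt(cart):
    print(f"Total due: {total(cart)}")

print_receipt(cart)  # the only place IO happens
```

The payoff is testability: calculations like `total` can be exercised with plain asserts and no mocking, whether they end up inside a monolith or behind a microservice boundary.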
I have lately been asserting that so many computing interfaces are essentially materialized views into a database. Whether that database is relational or not really doesn't matter. In that world, aggregation rollups and flat data entry are easy to see: the flat data store that users directly write to gets transformed into somewhat normalized backend systems, which constantly run jobs to materialize the views that the rest of the systems use.
To that end, I heavily stress that there is no single "data as data" view of any data in the system. It is all a transformation of some sort from somewhere.
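A sketch of that framing in Python (the rows and field names are invented): flat data entry as users wrote it, and a "materialized view" that is nothing more than a transformation job run over it.

```python
from collections import defaultdict

# Flat data entry: rows exactly as users wrote them.
events = [
    {"user": "ann", "action": "click"},
    {"user": "bob", "action": "click"},
    {"user": "ann", "action": "purchase"},
]

# The "materialized view": an aggregation rollup derived from the flat store.
# There is no single "data as data" here -- this is a transformation.
def rollup_by_user(rows):
    view = defaultdict(int)
    for row in rows:
        view[row["user"]] += 1
    return dict(view)

print(rollup_by_user(events))  # {'ann': 2, 'bob': 1}
```

In a real system the rollup would be a scheduled job writing to its own table, but the shape is the same: downstream readers see the view, not the source rows.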
This is not to say that I disagree with your general statement. I would stress that the same "data" can have multiple representations in the "data" world. (This is not uncommon outside of programming, either. Easy to consider that 1/3 is .33333... or 3.3...e-1, etc. Canonical formats for any data item are things you have to work to build; they do not come for free.)
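The 1/3 example plays out directly in Python's standard fractions module: the same value has several representations, and a canonical one only exists because someone did the normalization work.

```python
from fractions import Fraction

# Three representations of "one third":
exact = Fraction(1, 3)
decimal_ish = 1 / 3            # binary float, already an approximation
unreduced = Fraction(2, 6)     # same value, different surface form

print(exact, decimal_ish, unreduced)

# Canonicalization is work someone built: Fraction reduces to lowest terms.
assert unreduced == exact and str(unreduced) == "1/3"

# The float is only close, not equal -- a genuinely different representation.
assert exact != decimal_ish
```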
At the end of the day, it is data. It might get read out of 3 different underlying systems, enriched by some kind of computation (dynamic or computed attributes vs ones that are actually persisted to a db, etc) but it is data.
If you can think in data, then, as you say, the underlying mechanisms for storing and retrieving it are less important. They do matter for performance and persistence guarantees, but if you abstract things correctly, even that becomes easier.
Recently I saw a post here claiming that clean architecture was dead. This is hilarious to me, and not true. If you are writing a low-level script, sure, you can skip it and optimize... but for larger systems (i.e., anything that spans more than a single process), clean+hexagonal is the way to go.
This textbook, an introduction to the principles and abstractions used in the design of computer systems, is an outgrowth of notes written for 6.033 Computer System Engineering over a period of 40-plus years. Individual chapters are also used in other EECS subjects. There is also a web site for the current 6.033 class with a lecture schedule that includes daily assignments, lecture notes, and lecture slides. The 6.033 class Web site also contains a thirteen-year archive of class assignments, design projects, and quizzes.
And learn TLA+. The core idea of TLA+ is that any problem, however complex, can be represented as a state machine, i.e., a set of variables whose states change over time.
It can be used to describe software, hardware and theoretical concepts like consensus algorithms.
I suspect one low-hanging fruit for TLA+ is using it to describe physical phenomena: gravity, magnetism, boiling, etc. I haven't seen work using it for that yet, though.
I will never forgive my college professors for not teaching this to me. Many years later, when I discovered SICP on my own, I realized that my high-school CS teacher had wanted to expose me to this. I was too young at the time to understand. Bummer.
Over the years I've come to realize that every single piece of advice or opinion in software is ultimately about complexity. Either about how to control and manage it properly or a good illustration of how easy it is to weaponize it and/or get overwhelmed by it.
The greatest advancement in computing, IMO, was structured programming. Ifs, Loops, Functions. What do they have in common? They got rid of goto spaghetti.
Erlang actors excel in part because they are structured in supervision trees. There's that word again.
If you can, structure things as a tree. If it is not possible, make a DAG. Only use arbitrary graphs as a last resort.
This is why so many OO codebases turn into a mess, every object has references to who knows what, forming a complex dependency graph full of action at a distance.
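The tree-then-DAG preference has a concrete payoff: a DAG always admits a topological order, so dependencies can be resolved without cycles. A minimal sketch using Python's standard graphlib (the module is real; the example dependency graph is invented):

```python
from graphlib import TopologicalSorter, CycleError

# A DAG of module dependencies: each key depends on the modules in its set.
deps = {"app": {"db", "ui"}, "ui": {"core"}, "db": {"core"}, "core": set()}

order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies come before dependents, e.g. core first, app last

# Add one back-reference and it's an arbitrary graph: resolution now fails,
# the "action at a distance" problem made concrete.
deps["core"] = {"app"}
try:
    list(TopologicalSorter(deps).static_order())
except CycleError as err:
    print("cycle detected:", err.args[1])
```

The same check works for build targets, service startup order, or object ownership: if topological sorting fails, you've left DAG territory.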
You're describing a major part of software engineering. Most reasonable university/college SE 101 courses will focus here. Which is to say: if you haven't (or it's been a while), check reading lists for university SE classes and pick the ones that focus on modeling. Steer clear of process/agile material.
Commonly used and recommended books would be GoF (if you're not familiar with the patterns already), Code Complete, and The Pragmatic Programmer.