It seems to me that a new generation of developers is simply rejecting oo-paradigms because they are oo, without an in-depth analysis.
I agree with the general consensus that oo inheritance hierarchies of the past became too deep and were overwhelming, but subtyping has a place.
Because interfaces are stateless, you generally cannot completely implement required interface functionality without wiring them up to each individual object, resulting in a lot of boilerplate code repetition.
This is exacerbated in GUI programming, for example, where subtyping makes a lot of sense (and doesn't devolve into Animal -> Rabbit nonsense).
With subtyping, you can take an object that is 95% of another object and just override where necessary.
I also don't believe composition provides the same level of encapsulation that can be achieved through traditional class-based oo. There seems to be this prevailing view that oo is all about inheritance, when the truth is that oo was really about encapsulation and isolation. You only expose what you need to, and objects (whether they be classes or structs or whatever) can be built and tested without fear of code collisions or meddling from the outside view.
I will concede that interfaces can and do provide a level of encapsulation, but in practice, it's just clunky.
Just look at some Rust objects and the sheer number of traits that they must be aware of and implement by hand. It's piecemeal when it could be one and done.
And please understand, I'm not talking about "...in the beginning there was a GObject, and that GObject bequeathed to...and..."
This boilerplate repetition that you refer to is only necessary when doing composition because the languages don't provide the requisite syntactic sugar for it, the way they do for inheritance. This is very unfortunate but totally fixable.
I'd be interested to understand how / what types of proposals exist to solve this very real ergonomic issue.
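For concreteness, the repetition in question looks roughly like this when written by hand (a Python sketch with made-up names):

    # Made-up example: LoudGreeter only wants to change greet(), but without
    # delegation sugar it must still hand-write a forwarding method for every
    # other member of the thing it wraps.
    class Greeter:
        def greet(self, name: str) -> str:
            return f"Hello, {name}"

        def farewell(self, name: str) -> str:
            return f"Goodbye, {name}"

    class LoudGreeter:
        def __init__(self, inner: Greeter) -> None:
            self._inner = inner

        def greet(self, name: str) -> str:          # the one real override
            return self._inner.greet(name).upper()

        def farewell(self, name: str) -> str:       # pure boilerplate forwarding
            return self._inner.farewell(name)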
As I stated in my original post, the implementations of interfaces/composition that I have used (primarily C# and Rust) are stateless, so you must always re-implement. And honestly, if Rust were to say, allow the specification of struct members in trait definitions, one could argue that their structs are really sealed/final classes, just using different keywords.
I am genuinely curious, because I do believe that people have a valid point that composition does tend to lead to less object spaghetti than inheritance, even if I can't quite pinpoint why from a purely theoretical standpoint (my belief is that they're "holding it wrong").
For example, Kotlin has syntactic sugar specifically to implement interfaces by delegating to an inner object (with overrides etc): https://kotlinlang.org/docs/delegation.html
There are more interpretations of "OO" than there are people, but the overall direction is that OO isn't a panacea for code maintainability, and some frameworks (ahem, Spring) create a mental model at times so distant from OOP that I might as well have written everything in Python. I gain nothing from having classes when one half are records and the other half are singletons.
Inheritance is just a tool. It’s insane to reject it outright just because someone can use it incorrectly. It does some things extremely well and better than composition.
I think this is interesting, but I think the way Python is described is slightly wrong? In Python, you don't need to define a protocol to support duck typing, you can just implement the same behavior. Protocols don't have a function outside of static typing (they don't influence any program behavior at run-time.)
I guess it depends on whether you consider duck typing to be "passes the static type checker" or "it works the same".
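To make the run-time half concrete (a tiny sketch, with a made-up helper name):

    # No Protocol in sight: plain duck typing only cares that .feed() exists.
    class Rabbit:
        def feed(self) -> None:
            print("munching a carrot")

    def feed_all(animals) -> None:       # hypothetical helper, no annotations
        for a in animals:
            a.feed()                     # works for anything with a feed() method

    feed_all([Rabbit()])                 # prints "munching a carrot"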
"Because Animal is a Protocol, any class that defines feed() becomes an Animal"
is slightly incorrect.
By default, protocols are only relevant for static type analysis, but you _can_ enable runtime instance checking using the @runtime_checkable decorator.
(So you can do "if isinstance(foo, Animal):...", but AFAIK this decorator comes with a performance penalty)
Protocols allow you to explicitly declare that a class implements them by making them a base of the class, but they do not require it. Every class that implements the specification of the protocol is usable where the protocol is specified, not only those explicitly marked with it. Python didn't make C#'s doofy mistake of requiring the programmer to explicitly annotate everything. So, not every implementer of the protocol will have isinstance return true.
That is correct, but the @runtime_checkable decorator (which goes on the class that defines the protocol, e.g. "Animal" in the blog's example) makes it so that any class that implements the required methods (e.g. Rabbit) also passes the isinstance test: isinstance(Rabbit(), Animal) actually returns True, even though Rabbit is not declared as a subclass of Animal (i.e. NOT class Rabbit(Animal))!
Whether it makes sense to use that is a different question, but it is possible :)
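A minimal sketch of what that looks like, reusing the blog's Animal/Rabbit names:

    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class Animal(Protocol):
        def feed(self) -> None: ...

    class Rabbit:                        # note: NOT declared as Rabbit(Animal)
        def feed(self) -> None:
            print("munching")

    print(isinstance(Rabbit(), Animal))  # True: the check only looks for a feed attribute
    print(isinstance(42, Animal))        # False: ints have no feed
    # (it does not check the signature or behaviour of feed, only its presence)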
it looks rather gross to me, and more dangerous than useful. why bother with a half check that can only see if the methods/attributes are present, but that can't guarantee they do what you need? better to not offer such a check at all, I should think.
but I'm not a fan of a great number of things python has been doing over the last some years.
if you were going to do such a thing (perhaps to allow type-safe callbacks from untyped code), I would think a wrapper would be the proper method, that checks the incoming parameters and the outgoing return value, raising an error if those are of the wrong sorts, and possibly further wrapping if something was declared to fit a protocol.
I can program in a dozen languages or so, so I'm not coming at it from some place of knowing python and complaining about C#. There's plenty to complain about in python's typing.
Having to hand annotate classes to indicate interface membership instead of just writing classes and letting the compiler perform structural analysis depending on their usage is annoying.
What happens when member names collide and you match an interface you shouldn’t have? It’s an issue even in Go, which, unlike Python, has a usable type system.
I suppose if you were so unfortunate as to have two interfaces name the same function with different expectations on what it does, you'd end up having to use an intermediate object to wrap around your base one, to ensure it matches the required interface, possibly adding a helper to wrap the current object for convenience at callsites.
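Something like this, sketched in Python with made-up names (the same idea works with Go interfaces):

    from typing import Protocol

    class Window(Protocol):
        def close(self) -> None: ...     # callers expect: hide the window

    class Resource(Protocol):
        def close(self) -> None: ...     # callers expect: release a handle

    class AppWindow:
        def close(self) -> None:         # satisfies Window's meaning of close()
            print("window hidden")

        def release(self) -> None:
            print("handle released")

    class AsResource:
        """Intermediate wrapper so an AppWindow can be used where a Resource is wanted."""
        def __init__(self, w: AppWindow) -> None:
            self._w = w

        def close(self) -> None:
            self._w.release()

    def shutdown(r: Resource) -> None:
        r.close()

    shutdown(AsResource(AppWindow()))    # prints "handle released"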
In C#, you would instead use an explicit interface implementation: you define the method qualified with the interface's name, so the member you provide for that interface is distinct from the base method that is experiencing the conflict.
I know this is a real issue, but I don't think this is excessively common for either language.
And the real answer for both would be to rename/prefix the names in the interfaces so that they aren't conflicting in the first place, if you have the ability to do so :-)
There are certainly always trade offs when building something.
There are others that are also _very_ interesting (but also pre-90s). Self, for example, was object-oriented but did not have inheritance. It used prototypes. In essence, you would create something of the same "type" by cloning the prototype.
Anyway,
subtyping and inheritance are similar, but not identical, concepts. Also, some languages provide traits instead of protocols (and maybe even both?). Some languages provide functors. The goal is always the same: abstract commonalities. Let's just keep it at: "it's complicated"
;)
I think it's better to study the design mistakes of programming languages in a historic context.
For example: C++ offers multiple inheritance. This caused the diamond problem. Java tried to fix this via interfaces. This fixed the problem, but was also a mistake as interfaces cannot provide behaviours. So Multiple inheritance is not an issue if only 1 of the parties provides state; all others can provide signatures but also behaviours.
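Sketched in Python (made-up names): one concrete parent owns the state, and the other parents are stateless mixins that only contribute behaviour.

    import json

    class Account:                       # the single stateful party
        def __init__(self, balance: float) -> None:
            self.balance = balance

    class JsonMixin:                     # behaviour only, no state of its own
        def to_json(self) -> str:
            return json.dumps(vars(self))

    class AuditMixin:                    # behaviour only, no state of its own
        def audit(self) -> str:
            return f"{type(self).__name__}: {vars(self)}"

    class SavingsAccount(JsonMixin, AuditMixin, Account):
        pass

    acct = SavingsAccount(100.0)
    print(acct.to_json())                # {"balance": 100.0}
    print(acct.audit())                  # SavingsAccount: {'balance': 100.0}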
> So Multiple inheritance is not an issue if only 1 of the parties provides state; all others can provide signatures but also behaviours.
IIRC that's what Bertrand Meyer advised in OOSC (I think he called it "marriage of convenience" between abstract and concrete superclasses). But he also claimed the diamond problem is overstated, and (paraphrasing) it's not a deep semantic issue, but a trivial syntactic one, "simply" solved with rename-on-inherit :)
It's not easily solved with a rename on inherit, because the resulting object has to be substitutable in all the places where any one of the multiple base classes are expected. Those places expect the names to be what they are, they don't know about renamed names.
The diamond problem is actually about the situation when through at least two levels of inheritance, a class ends up inheriting the same base two or more times.
C++ has two choices for the diamond problem: virtual base inheritance results in one copy. Regular inheritance in multiple copies.
The clash problem is separate from the diamond problem. A clash occurs when you inherit from two otherwise unrelated bases that happen to use the same names. For instance, a lottery game class inherits graphics and lottery; the former provides graphics::draw and the latter lottery::draw.
In C++ that is dealt with by letting the name lookup be ambiguous. When the derived game object is used like game.draw(...), the lookup is ambiguous, and the program has to specify game.lottery::draw(...) or game.graphics::draw(...) instead.
Places in the program that use the object through references to one of the bases do not face the ambiguity. Given a graphics &gobj, gobj.draw() is unambiguous, even if that object is really a game class instance that also has lottery::draw in it.
The scope resolution operator can resolve the ambiguity under the diamond problem, when inheritance is plain (not virtual). Say A inherits B and C. Both B and C inherit D. So now A has two D's. Say D has a member m. Given an A object aobj, I think we can separately reference aobj.B::D::m to get to the D::m that was inherited via B, and aobj.C::D::m to get to the copy inherited via C.
I haven't really seen the diamond problem referenced lately as an objection to inheritance, but I honestly never understood what the big deal was. In the event of a collision, the compiler can ask for further disambiguation, which does in fact boil down to an issue of syntax, not a soundness or correctness violation.
Python is probably the most CLOS-like language in common use tho, isn't it? Multiple inheritance, metaclasses, the ability to decorate functions.
No built-in multiple dispatch but I'm pretty sure I saw some implementations of that too over the years :)
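And for what it's worth, Python's answer to the diamond itself is C3 linearization: one shared copy of the base and a deterministic method resolution order (a small sketch, made-up names):

    class D:
        def ping(self) -> str:
            return "D"

    class B(D):
        def ping(self) -> str:
            return "B->" + super().ping()

    class C(D):
        def ping(self) -> str:
            return "C->" + super().ping()

    class A(B, C):                       # the diamond: A -> (B, C) -> D
        pass

    print(A().ping())                              # B->C->D : one shared D, cooperative super()
    print([k.__name__ for k in A.__mro__])         # ['A', 'B', 'C', 'D', 'object']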
Go interfaces are explicit, not implicit; the difference is more about where they are defined.
With subtypes, the interfaces are defined with the classes (an animal does this and that); with interfaces, they are defined at the point of use (I accept a param that does this and that).
Interesting that Gosling said at one point he felt including classes (i.e. inheritance) in Java was a mistake. I feel this is one thing Go got right compared to many other languages. Turns out inheritance just isn't very helpful.
I used Java for a while and then switched to Kotlin as my daily tool, and one interesting thing that I found was that Kotlin prevents inheritance by default, and the developer has to explicitly mark a class as "open" to allow inheritance. Eventually I stopped using inheritance and now prefer to use composition over inheritance wherever possible.
So the kicker in imperative OO for composition is that the class you are composing has some state / instance vars.
How do you include methods that can view/update those instance vars?
Sure it's easy to compose with static/pure functions.
Do you enclose the instance vars in holder classes and pass those holders to the composition implementation class in its method signature?
In the end, inheritance or composition is about constructing a class/struct/whatever with a given set of expected signatures, with some ideally documented guidance on any intricacies.
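One common answer, sketched in Python (names made up): let the composed part own its own state, and have the outer class hold the part and forward to it, rather than passing holder objects around.

    class Counter:                       # the composed part owns its own state
        def __init__(self) -> None:
            self.value = 0

        def increment(self) -> None:     # its methods can view/update that state
            self.value += 1

    class Widget:
        def __init__(self) -> None:
            self.clicks = Counter()      # state lives inside the component

        def on_click(self) -> None:
            self.clicks.increment()      # the owner just forwards to it

    w = Widget()
    w.on_click()
    w.on_click()
    print(w.clicks.value)                # 2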
As a Java developer the author should take a look at Clojure protocols. Those are even more flexible and can be implemented for 3rd party classes or even standard library classes without wrapping.
Python has far more options for this if you use some of the newer typing features. For example, it now (as of 3.12) automatically detects covariance and contravariance when you are dealing with generic types, which has made subclassing much nicer for me when it comes to what a static type checker can catch or autocomplete.
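A sketch of what I mean (Python 3.12+ PEP 695 syntax, made-up names; the interesting part is what a checker such as pyright accepts, since the code itself runs trivially):

    class Animal: ...
    class Cat(Animal): ...

    class Producer[T]:                   # PEP 695: variance of T is inferred, not declared
        def __init__(self, item: T) -> None:
            self._item = item            # underscore-private member, exempt from inference

        def get(self) -> T:              # T appears only in return position of the public API,
            return self._item            # so T should be inferred covariant

    def feed_from(p: Producer[Animal]) -> None:
        p.get()

    feed_from(Producer(Cat()))           # a checker should accept Producer[Cat] here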
    class MyClass {
        int variable = 0;

        void foo() {
            variable++;
        }

        void bar() {
            variable++;
        }
    }
You can block the equivalent written with a global variable in a code review because "global variable", and tell the developer to fix it by passing the variable in and out of functions when needed, but I have a feeling the class version above is getting merged even if you object on the same grounds.
I don't think subtyping is something you should probably be using very often. I wouldn't say never, but you can go a very long time without finding a legitimate use for the feature, and your code will generally be better for it.
It's a bit weird, given how these elaborate class hierarchies were touted as such a big and important feature in Java originally, but when the dust settled it turned out more often than not to complicate the code.
Interface inheritance is indeed much less of a foot-gun than class inheritance, but I'll argue even interfaces are generally fairly overused. The single-class interface pattern that exists in some parts of Java-land is just bizarre.
It's mostly a culture issue. Mocking components is a code smell. You should aim to have the implementation sufficiently self-contained and test these modules end-to-end. In some more exotic cases a behavior cannot be controlled and should not be abstracted away, like Environment.TickCount64-based cache expiration, and there you should reproduce the actual environment instead.
The goal of the tests is to answer the question "if the pipeline is green, does it give us confidence it will work exactly as intended in production?". Mocks work against this goal.
Especially nowadays, when writing tests is dirt cheap because LLMs can often get them right on the first try, there is little reason not to prioritize sanity over following a stupid cargo cult that should have died a decade ago.
It's a cultural thing. I've seen code bases where every class was a FooImpl and had an associated FooIf, all in the name of decoupling and mocking. Though it may be a trend that's dying down.
You cannot eliminate complexity (although you can minimize it, to be clear). But for most things, you can shuffle the semantics around, you can paper over them in a new way, but at the end of the day, someone has to pay the piper.
In my experience, complexity eliminated from one spot will invariably bubble up somewhere else in an unexpected way.
I remember at one time you couldn't mock in C# without an interface, or maybe it was just the popular library that required it. Java never had that requirement, so at least in the codebases I've been in you didn't have IBar and BarImpl everywhere.
In Java, all method dispatch is virtual (dynamic), but in C#, methods being virtual is an opt-in, so intercepting and mocking such calls requires a lot more effort.
It's still possible, mind you. TypeMock has been offering this exact ability for C# for many years now. But the free TDD frameworks generally didn't have this.
Java's method dispatch is more complicated than that due to JIT compilation. The affordances are those of dynamic dispatch, but hot Java method calls will not go through a vtable-like lookup equivalent unless the code actually sees a need to.
Indeed. Both OpenJDK HotSpot and .NET RyuJIT perform guarded devirtualization of callsites that are monomorphic, or polymorphic with only a few targets. OpenJDK also computes an optimized call table for megamorphic callsites, which .NET does not need to do for virtual calls; it does have something similar for un-devirtualized interface calls, however (virtual stub dispatch[0]).
This is not necessarily zero-cost, however: if the compiler cannot prove which members are being invoked, it has to construct an execution profile, apply it to subsequent compilations, and also emit a guard when doing dispatch at those callsites.