Why Architecture Oriented Programming Matters (2019) (metaobject.com)
62 points by mpweiher on March 26, 2023 | 39 comments



I don't really understand the author's point. It seems like the author is making a judgment based on the "number of benefits" provided, without considering the magnitude of the benefits.

There is perhaps no simpler way to compose programs than with function composition (except maybe with relations). This leads to great clarity of thought and writing, and is not to the exclusion of architecture.
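
For example, in Python (just a sketch; compose is a hand-rolled helper, not stdlib):

    from functools import reduce

    def compose(*fns):
        # compose(f, g, h)(x) == f(g(h(x)))
        return reduce(lambda f, g: lambda x: f(g(x)), fns)

    # Three small, independently testable parts...
    strip = str.strip
    lower = str.lower
    words = str.split

    # ...glued into one pipeline with nothing but composition.
    tokenize = compose(words, lower, strip)
    print(tokenize("  Why Functional Programming Matters  "))
    # ['why', 'functional', 'programming', 'matters']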

Avoid taking inspiration from mathematics at your own peril.


> the author is making a judgment based on the "number of benefits" provided

Not "number of benefits" but "kinds of glue" and "ways [to] glue solutions together".

And I guess putting the crucial quote in bold wasn't enough. I joked about wanting to add a blink tag around it, maybe I should have done that?

"The ways in which one can divide up the original problem depend directly on the ways* in which one can glue solutions together. Therefore, to increase one’s ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language."

This is not me, this is the direct quote from Why Functional Programming Matters.

Note: "ways* and "new kinds". Plural. New kinds. Not "a single way". Not "a single kind of glue".

He goes on: "We shall argue in the remainder of this paper that functional languages provide two new, very important kinds of glue."

Except it's really only one kind of glue (see the update).

So, deftly switching metaphors, the fact that you have the best hammer of all time doesn't help much if we have two pieces that are best held together with a screw. Or arc welded. Or vacuum welded. Or soldered. Or glued with two-component glue. Or glued with super-glue. Or glued with wood glue (and I just learned that super glue really doesn't work when you need wood glue). Or rivets. Or joined together with glue-less and fastener-less wood joins.

And the fact that all these different ways of joining together things exist and are valid and make for better end results when used appropriately does not detract from the shininess of your hammer.


Yeah, I never saw an answer to the headline question. Lots of drivel about glue. My opinion on glue is that too much glue means you've got a suboptimal data representation. But I saw no insights even that basic. I didn't get the point.


I think the author is saying that in order to build larger structures out of modular structures you need connectors (you can think of glue as one kind). As an example, think of the many different connectors that Lego Technic provides. Unix pipes can be considered connectors as well. So can TCP/IP connections (for the largest distributed system in the world, the Internet).
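
A tiny Python sketch of the pipe idea (generators standing in for processes; the file name is made up):

    def source(path):
        # component: emit lines from a file
        with open(path) as f:
            yield from f

    def grep(pattern, lines):
        # component: pass through matching lines
        return (line for line in lines if pattern in line)

    def count(lines):
        # component: reduce to a line count
        return sum(1 for _ in lines)

    # The iterator protocol is the connector: anything that consumes
    # and produces an iterable can be plugged in, like a Unix pipe.
    # Roughly: cat access.log | grep ERROR | wc -l
    print(count(grep("ERROR", source("access.log"))))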


It's always going to be compatible data that really enables different transforms or systems to communicate. Sending and receiving it is a directly solvable problem.


What is it about functional programming that rules out unix pipes, tcp/ip connections, etc? There's no basis for those to be excluded or any more difficult in that paradigm IMO.


I don't think the author is necessarily excluding FP as much as saying that intrinsically it doesn't provide anything more than functional composition for what the author calls "architecture oriented programming". That is, while "structured programming" mainly focused on modularity, how these components are connected (and communicate) is equally important. At least that is what I got from the article!


Well that's kind of what I think I took from the article as well, but I'm not sure I see the sense in that. It sounds to me a bit like saying "functional programming is not that great because it doesn't make thinking and planning obsolete"... It sounds to me like a category error, or unreasonable expectations. :shrug:


The article isn't saying anything about the quality or lack of quality of functional programming (except for disputing the unfounded 10x better claim with actual data).

The article is saying that irrespective of how good or bad function composition is, it is just one kind of connector ("glue"). And we need more than one.

For more info on connectors, see Procedure Calls Are the Assembly Language of Software Interconnection: Connectors Deserve First-Class Status by Mary Shaw.

https://resources.sei.cmu.edu/library/asset-view.cfm?assetid...


Are there some languages that implement connectors like in the paper?


There are the Architecture Description Languages, but they only describe, so you have to do the additional work of describing the architecture, and then program it conventionally. Not very appealing and they never really took off.

UniCon was one of the first projects I am aware of that tried to generate code:

https://www.cs.cmu.edu/~Vit/unicon/reference-manual/Referenc...

It is also the system that made me see the connection between architectural elements (connectors/components) and the elements of a programming language, the language's meta-model.

ArchJava is the only attempt so far to add a connector abstraction to an existing programming language:

https://www.ics.uci.edu/~andre/informatics223s2009/aldrichch...

But since "connector" is a generalisation of "procedure/function/method call", you can't really just add connectors to an existing language, you have to generalise. Which is what I am doing with Objective-S: http://objective.st

(And no, the ObjS web-site is currently not very good at all. But it is being served by ObjS and can stand up to an HN hug of death, so at least something...)


Thanks, UniCon certainly looks interesting.

I did briefly look at Objective-S as well, but I'm not familiar with Scheme, so I will need more time to get a feel for how it works in practice.

You can get a lot of the advantages of this clear split in a regular language like C#, but I agree that a dedicated one probably can get you some synergies that are otherwise lost.


> You can get a lot of the advantages of this clear split in a regular language like C#

Absolutely! It's just not at all obvious and thus difficult to accomplish or do well. You need to have a very clear model of the solution in your head that you then encode very differently in your language of choice.

And once it's encoded, it's very difficult to recover that clear/simple model that only exists in your head, both for yourself and even more so for future maintainers. Meaning it will probably degrade more or less rapidly over time.

Linguistic support should make this much more straightforward, as you can just write down the model you had in your head. Or at least much more of it.


Recently I became aware of “FAC: A Functional APL Language” a 1986 paper by Hai-Chen Tu and Alan Perlis[1]. Unlike traditional APLs it allows you to define your own “operators” (second order functions), such as “x (F SWAP) y: y F x”, where x and y can be congruent arrays. It even allows infinite arrays, which can be evaluated lazily and provides a powerful operator on them (let’s call it “$”) such that “1 (+ $) 1” is the Fibonacci sequence! Since FAC functions are pure, in principle you can evaluate things in parallel.
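
A rough Python analogue of that lazy, self-referential stream (not FAC's actual semantics, just the flavor):

    from itertools import islice

    def fib():
        # infinite Fibonacci stream, evaluated lazily on demand
        a, b = 1, 1
        while True:
            yield a
            a, b = b, a + b

    print(list(islice(fib(), 10)))
    # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]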

It occurs to me that operators can be considered connectors!

[1] https://ieeexplore.ieee.org/document/1695470


The point is that if you have to write a lot of glue code, chances are you have the wrong kind of glue.

Your chances of having the right kind of glue increase the more different kinds of glue you have to choose from. (Which is not my point, it is the point John Hughes made, but I certainly agree with it). Of course it isn't just a numbers game, for example you could have lots of different kinds of glue that are all horrible.

But if you have only one kind of glue, the chances of it being the right kind of glue in all circumstances are pretty low, regardless of the quality of any one kind of glue.


You have to follow Marcel Weiher's work on combinators aka connectors.

The following article references his work on in-process REST, polymorphic identifiers, and storage combinators.

What Alan Kay Got Wrong About Objects

https://blog.metaobject.com/2019/11/what-alan-kay-got-wrong-...

Here's an ACM SIGPLAN talk he gave on storage combinators...

Storage Combinators @ SPLASH'19

https://www.youtube.com/watch?v=6FFlmkFS1YY

The ACM SIGPLAN talk is a very good starting place to get an understanding of what he is talking about.


> The ACM SIGPLAN talk is a very good starting place to get an understanding of what he is talking about.

What's the big difference from just using composition like we've done for ages?

I mean, his examples in that talk were very reminiscent of stuff I've been doing for many years, and I don't consider myself to be special in any way. Write decent interfaces, write useful implementations, then compose them together.

Not trying to be dismissive, I just feel like I'm clearly missing something as it's a bit underwhelming for a 2019 talk to be all about "composition is great, use it".


imho the key concept is in-process rest as in representational state transfer as an interface. instead of an imperative interface it's more of a protocol based interface. think http middleware built around a standard interface like ruby's rack, asp.net's owin, or python's wsgi. once you have that style of abstraction it becomes pluggable and you can chain them together.
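
a minimal python sketch of that shape (loosely wsgi-flavored, all names invented):

    # every middleware has the same shape: take a handler, return a
    # handler. that uniform interface is what makes them chainable.

    def logging_middleware(handler):
        def wrapped(request):
            print("->", request["path"])
            return handler(request)
        return wrapped

    def auth_middleware(handler):
        def wrapped(request):
            if request.get("user") is None:
                return {"status": 401, "body": "unauthorized"}
            return handler(request)
        return wrapped

    def app(request):
        return {"status": 200, "body": "hello " + request["user"]}

    # chain them, outermost first
    pipeline = logging_middleware(auth_middleware(app))
    print(pipeline({"path": "/hi", "user": "sam"}))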

here's a past discussion on in-process rest.

https://news.ycombinator.com/item?id=21560626


Thanks, that's a bit more specific.

Think I'd have to play with it to see how it really differed from my normal composition of interfaces and implementations.


" each Smalltalk object is a recursion on the entire possibilities of the computer "

This is true to a degree. But a Smalltalk object, or any OOP object, makes far too many assumptions about the inputs it gets via "messages". Most fatally, they assume the inputs are objects similar to the recipient, programmed in the same or a similar programming language. Note: in Smalltalk everything is an Object.

A computer, by contrast, communicates with other physical objects by means of a very simple binary protocol of ones and zeroes. The computer then interprets the bits it gets according to some generally agreed-upon convention. A Smalltalk Object does not interpret the messages it gets; it assumes they are already interpreted for it. The protocol of Smalltalk objects communicating with each other has not been implemented between computers in general. That is perhaps a failure of standardization, but it nevertheless is the reality.

There are two ways to fix it:

a) Make every Smalltalk object more like a computer in that it communicates with other objects only via a text-based protocol, not by passing along arguments with predefined semantics like Objects or Functions or Methods.

b) Make every Computer assume the same of its inputs as a Smalltalk object makes of its.

I think b) is impractical but maybe there could be a Smalltalk whose Objects truly act like small Computers.
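
Sketching a) in Python (a toy; the protocol verbs are invented):

    class CounterObject:
        # an "object as a small computer": its only interface is a
        # text protocol, and it interprets the characters itself
        def __init__(self):
            self.count = 0

        def receive(self, message: str) -> str:
            verb, _, arg = message.partition(" ")
            if verb == "INC":
                self.count += int(arg or 1)
                return "OK"
            if verb == "READ":
                return str(self.count)
            return "ERR unknown verb"

    c = CounterObject()
    print(c.receive("INC 5"))   # OK
    print(c.receive("READ"))    # 5
    print(c.receive("FROB"))    # ERR unknown verb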


> The protocol of Smalltalk objects communicating with each other has not been implemented between computers in general

Not sure if this fits your conditions, but Croquet (https://en.wikipedia.org/wiki/Croquet_Project) is a P2P, "distributed computation" architecture based on Smalltalk and the replication of messages (i.e. method calls) sent to object clones on participating nodes. The replication and commit protocol is called TeaTime (https://oneofus.la/have-emacs-will-hack/files/2005_Teatime_F...). Croquet is basically a 3D interactive WWW, but based on synchronized Smalltalk VM execution (distributed computation) rather than downloading data (e.g. HTML).


Croquet and TeaTime are great. If every computer implemented the Croquet protocol, that would be great. But my point is really that the protocol you use to interact with computers should be minimal, and just typing in ASCII characters on the terminal is about as minimal as possible.

Characters are the transfer-level (ISO something?). By assuming that all inputs come as a string of (UTF?) characters we retain the maximum freedom as to how to interpret those characters. That makes computers very general purpose.

The Smalltalk protocol assumes some specific data structures for representing the messages and results. Once you choose and require one such representation, you lose the ability to use any other representation, and thus the ability to evolve ever more and multiple representations encoded as characters.

Also, it is not Smalltalk Objects that do the interpretation of messages, like a computer does. It is the Smalltalk runtime that does the interpretation.

How we design the communication between humans and computers should be maximally flexible.

This has something to do with JSON. JSON allows many different types of programs and computers to interact with each other, because it doesn't allow you to pass around Functions. JSON only passes basic data structures. Therefore it is very general and flexible. Whoever gets a JSON message is free to interpret the arrays, records, and strings encoded as JSON in any way they want. Plain ASCII is even more general and flexible.
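
Concretely, in Python:

    import json

    # Only plain data crosses the boundary: no functions, no classes,
    # no behavior -- so any receiver is free to interpret it.
    message = {"op": "align", "sequences": ["ACGT", "ACGA"], "gap": -1}
    wire = json.dumps(message)

    # A receiver written in any language sees only arrays, records and
    # strings, and decides for itself what they mean.
    decoded = json.loads(wire)
    print(decoded["op"], len(decoded["sequences"]))

    # Passing behavior fails, which is exactly the point:
    # json.dumps({"callback": lambda x: x})  # TypeError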


Is this why Smalltalk never had a multithreaded VM?


Smalltalk implements its own threads, which are instances of class Process. They are not OS threads as far as I know.


Smalltalk/MT has platform threads


Good point, I forgot about that one. Now that you brought it up I realize that "MT" must refer to "MultiThreading".

Do you know if Smalltalk/MT is still being maintained?


The bold point really resonates with me.

Lately I’ve been writing a CLI application in Python to analyse and generally handle possibly megabytes of DNA data.

At first I wrote the code quickly, in a haste of inspiration, using some Numpy constructs, some functional maps (wrapped in list) and an occasional list comprehension. It has taken WAY too long to convert the code to use iterators, because the syntax and semantics are so different.

In Lisp I could usually just switch the kind of map function to be lazy, iterative, or whatever I would like without changing and carefully thinking about the rest of the code.
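
For example, in Python the strict and lazy versions are different objects with different capabilities, so the change ripples outward:

    nums = range(10)

    # strict: builds the whole list in memory up front
    squares_strict = [n * n for n in nums]

    # lazy: a one-character change at the definition site...
    squares_lazy = (n * n for n in nums)

    # ...but downstream code that indexed, re-iterated, or len()'d
    # the list now breaks: generators are single-pass and unsized.
    print(squares_strict[3])    # fine
    # squares_lazy[3]           # TypeError: not subscriptable
    print(sum(squares_lazy))    # consumes the generator once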

I think this is the kind of glue the article points to.


I'm not sure what this article even is.

Does it mainly bash John Hughes for bragging about how good FP is?

Or does it mainly advertise for the author's language, Objective Smalltalk?

Objective Smalltalk -- http://objective.st/ -- is a mostly unimpressive, one-man project that claims to be the first general purpose programming language, and that all other programming languages are DSLs in comparison. The few simplistic "Hello World!" front-page code examples don't solve any problems related to software architecture. I'm tempted to believe he just likes to make small languages that resemble all the other languages, like the rest of us.

The article lacks perspective, as shown by the absence of any references to Infrastructure as Code, first-class module systems, or integration between language ecosystems and their package managers. I would have had more to say about how "software architecture" goes together with the lack of expressivity in most general purpose programming languages, how modern DevOps is heavy on DSLs that are executable architecture designs, and how there's both a lot of progress to be made and a lot of noteworthy, active attempts.

The author cites John Hughes (of Haskell, QuickCheck fame) for saying that structured programming lets you design programs in modular ways, and modules allow for speedy development and re-use, and that function composition is the best kind of glue for making modules.

The author then links to an article [1] (Mary Shaw, 1994) which invents a word, Connectors, claims that this is great, and so Architecture Oriented Programming is great. It feels like the middle 80% part of the article that defines and connects (sic) these concepts is completely missing. Maybe I'm supposed to read the Mary Shaw article. He could have saved everyone a lot of time by just saying "Hey, I found this pretty great article. Also, I'm making a programming language that is better than yours."

[1]: https://resources.sei.cmu.edu/asset_files/TechnicalReport/19...

> it turns out that [...] lazy evaluation is essentially unused, [...] So that means the number of kinds of glue that FP provides is...one.

Demagoguery. Since we can apparently just make up numbers:

FP actually provides -1 kinds of glue, making it a product of Satan and so entirely a cause of the coming apocalypse.


2 - 1 = 1

Hughes claims FP provides two (2) kinds of glue: functional composition and lazy evaluation.

It turns out lazy evaluation is actually not really a useful kind of glue, so subtract that one from the two initially presented:

2 - 1 = 1

> Does it mainly bash John Hughes for bragging about how good FP is?

> Or does it mainly advertise for the author's language, Objective Smalltalk?

Neither. It notes that John Hughes had a fantastic insight, but one that seems to not get a lot of attention: we need lots of different kinds of glue.

And yes, the article somewhat assumes that you either know what a connector is, or that you would read the linked Mary Shaw article to figure it out, as it is foundational for the field of software architecture. If you don't know what a connector is and don't care to find out, the article probably won't make a lot of sense to you.

And it's good for me to learn that apparently a lot of people don't know what a connector is, so I'll have to keep that in mind when communicating about this stuff. Thanks!

You could start with the main Wikipedia article on Software Architecture: https://en.wikipedia.org/wiki/Software_architecture

Or one of the others: https://en.wikipedia.org/wiki/Architecture_description_langu... https://en.wikipedia.org/wiki/Software_architecture_descript...

They cover the basics and have lots of references.

Or take a look at one of the books:

https://www.amazon.com/Software-Architecture-Perspectives-Em...

https://www.amazon.com/Software-Architecture-Foundations-The...

Here's a taxonomy of connectors that should give you a quick overview:

https://isr.uci.edu/events/wesas2000/position-papers/mehta.p...

And of course there is the classic from 1994, published in IEEE Software: "Architectural Mismatch: Why Reuse Is So Hard"

https://www.ics.uci.edu/~taylor/ICS221/papers/ArchMismatch.p...

Turns out that one of the reasons Reuse is So Hard™ is that the connectors don't match.

With the followup 14 years later: "Architectural Mismatch: Why Reuse Is Still So Hard"

https://repository.upenn.edu/cgi/viewcontent.cgi?article=107...

Turns out that the connectors still don't match.

Cheers.


Thanks a lot for your references.

I've saved them all. :-)


Interestingly, if you call your glue "reflection" and "annotations" or "attributes" rather than "combinators", and use them in a modern language with good editor support, it becomes a reason to deride the language.

Java, for example, has some very interesting ways to metaprogram it; you can relatively easily dynamically select a number of classes based on different features and say, for example: "apply all these to the input and count each output as a vote on the input."
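
A rough Python analogue of that voting pattern (class names made up; in Java you'd do this with the reflection API):

    import inspect
    import sys
    from collections import Counter

    class ClassifierA:
        def apply(self, x):
            return "spam" if "!" in x else "ham"

    class ClassifierB:
        def apply(self, x):
            return "spam" if x.isupper() else "ham"

    def vote(x):
        # reflect over this module, select every class that has an
        # 'apply' method, and count each output as a vote
        votes = Counter()
        module = sys.modules[__name__]
        for _, cls in inspect.getmembers(module, inspect.isclass):
            if hasattr(cls, "apply"):
                votes[cls().apply(x)] += 1
        return votes.most_common(1)[0][0]

    print(vote("BUY NOW!"))  # spam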

But except when listening to or talking to long-time users, I never hear about it. Two reasons off the top of my head (but I guess there are many more):

- many educators teach Java in an extremely uninspiring way.

- the long and sad history of old enterprise Java (and later, Spring, although Spring was a force for good for a while)

Edited to mention "attributes" as well as good editor support.


Really intrigued by your comment. Can you please elaborate on the metaprogramming part in Java? I am trying to learn it. What would you suggest to someone who has been working in Java for a while and wants to dive deep into it?


I’m not the parent poster, but what they likely meant are annotation processors. They are basically a compile time mechanism that can create new classes based on the used annotations in the program (important, they can’t modify existing classes! This is so that reasoning about the code is easier).

Writing annotation processors is mostly done by libraries and seldom in actual applications (using the processors that libraries implement is definitely common, though). For example, MapStruct is a really cool library that generates mapping code between two classes; one can specify which field/property maps to which, thus making a common, error-prone operation very readable and easy to maintain.
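
Not the annotation-processor mechanism itself, but a Python toy with the same flavor (MapStruct generates the equivalent Java at compile time):

    from dataclasses import dataclass, fields

    @dataclass
    class UserEntity:
        user_name: str
        email: str
        password_hash: str

    @dataclass
    class UserDto:
        user_name: str
        email: str

    def map_to(target_cls):
        # build a mapper by matching field names, roughly what
        # MapStruct derives for you at compile time in Java
        def mapper(source):
            wanted = {f.name for f in fields(target_cls)}
            values = {f.name: getattr(source, f.name)
                      for f in fields(source) if f.name in wanted}
            return target_cls(**values)
        return mapper

    to_dto = map_to(UserDto)
    print(to_dto(UserEntity("ada", "ada@example.com", "s3cr3t")))
    # UserDto(user_name='ada', email='ada@example.com')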

With that said, I disagree with the original statement that one should absolutely know/use these tools. Similarly to macros in other languages, these are very advanced, last-resort mechanisms that are great on the rare occasions they are needed, but overuse of them can make code very hard to understand.


Correct.

Annotations (like @Entity in JPA/Hibernate) and reflection are what I mean.

Several Java libraries use annotations in a good way, and once one masters Java, they aren't too bad to write oneself.

Reflection is about making decisions and even changing behavior at runtime, like "iterate over all classes in this package, filter the ones matching a name pattern/annotation (or whatever other information is available at runtime), and use/update them".

Reflection is very powerful and IMO somewhat more tricky than annotations.


It's not that the number of kinds of glue is large. It's that the one kind of glue (function composition) is particularly powerful.


It is. As well as procedural composition.

And in addition, for a long time the problems we were solving mapped well onto the kinds of decompositions we can do easily/naturally with functional/procedural (de-)composition.

In fact, if I had to choose one and only one kind of connector, that's the one I would choose.

Dataflow expressed functionally or procedurally is pretty awful, but it has nothing on the horrors of fibonacci expressed using dataflow. Yes you can do it, but you really don't want to.

AFAICT, it is exactly because the connector we have is so good that we haven't really noticed how limiting it is to have just the one. A kind of gilded cage, or as I call it: The gentle tyranny of call/return[1]. And I really do mean the "gentle" part.

And for some weird reasons, whenever people proposed different connectors, they almost invariably proposed them as a single connector to replace all others. Which never turns out well.

[1] https://2020.programming-conference.org/details/salon-2020-p...


The success of ReactJS is a bold claim in favor of FP. Architecture matters.


React was originally developed as class-based, and functional React isn't "functional programming", as it's loaded with side effects by its nature.




