Yeah, agreed: in that case spinning up a container is clearly an operational advantage.
However, I was specifically answering the different case highlighted by the parent comment: a potentially cohesive application being cut along some of its internal APIs, with parts of its functionality allocated to separate processes living in different containers.
It is an extreme point on the continuum "single thread" -> "multi thread" -> "multiple processes" -> "fully distributed". Along that continuum scalability increases, while efficiency progressively decreases.
Cornering oneself into a specific point in the design space is problematic, and in some cases it has direct implications for how many resources are wasted.
A very instructive exercise is, for example, running a simple local application under a microarchitecture profiler such as Intel VTune. It is not uncommon to see that even straightforward C/C++ programs use less than 10% of a core's resources.
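To make that concrete, here is the kind of toy program I have in mind and roughly how one would put it under VTune (the file name, the loop bound, and the exact analysis name are my own assumptions from memory; check "vtune -help collect" on your install):

    /* busy_sum.c - a "CPU-bound" loop that nevertheless leaves most of the
       core's execution resources idle, because every iteration depends on
       the previous one (a serial dependency chain through acc). */
    /* Build and profile, roughly:
         gcc -O2 busy_sum.c -o busy_sum
         vtune -collect uarch-exploration -- ./busy_sum            */
    #include <stdio.h>
    #include <stddef.h>

    int main(void) {
        double acc = 0.0;
        for (size_t i = 1; i < 200000000u; i++) {
            acc += 1.0 / (double)i;  /* long-latency divide in the chain -> very low IPC */
        }
        printf("%f\n", acc);         /* keep acc live so the loop is not optimized away */
        return 0;
    }

Even though "top" would report this process at 100% CPU, the microarchitecture view typically shows only a small fraction of the core's pipeline slots doing useful work, which is exactly the gap I am talking about.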
What I am reflecting on is that the choice of fragmenting that program across tens of systems (maybe in a scripting language) should be a conscious one, made only after encountering actual performance or scalability bottlenecks.
How much of the resulting total system workload would be useful work?
The quantity of potentially wasted resources is astonishing if you think about it.