Hacker News
Fighting Complexity in Software Development (github.com/atsapura)
257 points by atsapura on July 4, 2019 | 115 comments



In mature OOP you have ways to write nice models and have good validations. https://guides.rubyonrails.org/active_record_validations.htm...

I will argue that the complexity of software development is not because of OOP vs. Functional. Tooling, documentation, quality of libraries, and people are what matter most; Ruby's massive success is largely attributable to the above. We are humans, and we can understand and deal with only a fixed amount of complexity. If I can offload some of it to a framework, library, or tool, I will have more time to work on my problem.

Every time I try to play with anything functional I get hit by a bus of undocumented frameworks (Erlang), multiple standard libraries (OCaml), competing half-finished implementations (Lisp), arcane tooling (Scala), no tooling (Haskell) and broken tooling (F# on Linux).


but have you ever had to maintain a ruby project after the first year? The cost just goes up and up, and I think it’s because the language is so hostile to static analysis.


Yes. I've been working on a corporate internal RoR tool that launched in 2010. Various engineers over the years kept releasing new features and keeping the language/framework up to date... and as of earlier this year, development/maintenance cost was no higher than for any other software of that nature and age.

Cost only goes up and up when engineers go overboard with "clever" Ruby magic... which is human error, don't blame the tool.


>Cost only goes up and up when engineers go overboard with "clever" Ruby magic... which is human error, don't blame the tool.

I've only used a few languages in production for my non-developer job, but I've noticed very different frequencies of "clever" code between them.

R: This is my primary language. I love it, but it almost encourages clever coding. There are 10 ways to do any task without even touching third-party packages.

Python: My secondary language at work. Clever coding is definitely possible in Python, but the language's syntax makes it easy to spot. Detection isn't as good as prevention but it's better than nothing.

(I also grudgingly use SAS, but won't waste my time looking for ways it encourages clear coding.)


Couldn’t agree more. I was part of a $30 million Rails project that got unmanageable and burned after 2-3 years. Golang is so much more forgiving of human error.


$30M. Well, there’s your problem. I can just imagine the premature optimizations and the bloated abstractions that went into the planning phases.


Yeah, welcome to enterprise development (regardless of programming language). Every VP fancies themselves the next Steve Jobs and likes to be uncompromising about delivering their vision. Which ironically ends up as design by committee when you have that many assholes in a room unwilling to back down but also needing to save face.

And to be fair, most of the premature optimization and bloated abstractions are just your architects sitting around bored for 6 months while the project sponsors argue over reporting minutiae (but won’t approve the designs until their pet features are pulled in on the roadmap). They know the entire feature list is going to get changed at the last minute as the political winds shift, so they’re designing an overbuilt architecture to CYA for whatever random bullshit someone will pull a week before a deadline.


It's not even about programming languages at this point. It's Peopleware. People are the number 1 cause of all problems in software.


Have you maintained anything after the first year where that wasn't the case? Especially if you had a high degree of churn on the developers.

I'll grant that it is easier in more stable API environments. But we are our own worst enemies in that race.


if I pick up an old Java project, it might be a mess, but I'll be able to rename methods, refactor classes, and upgrade libraries until it gets into shape - and I'll have a fair amount of confidence that it'll keep working while I do that, because metaprogramming is rare, APIs are fairly stable, and the IDE will mostly be able to understand what the code is doing. In a ruby project - especially in a Rails project - you can never know for sure what code is calling what. There's no "Find usages" tool that can be relied on. And then, all your dependencies will have completely changed their APIs and behaviors, and they'll have a complex matrix of incompatibilities that means that you might have to upgrade several libraries at a time and rewrite a lot of code just to keep things working.


Apologies for the slow response.

I honestly can't compare to a rails app, as I don't have the experience. I can say that modern dev practices worry me with the constant upgrade train on api standards. On the one side, progress is quite nice. On the flipside, what you describe in "all your dependencies have completely changed their APIs" is just as annoying in Java as it is in any language. More so if someone has helpfully split your codebase into as many supposedly independent parts as they can.

More than that, it is amusing that many of the refactoring tools that you reference actually began in more dynamic languages. I've seen that folks that rely on their IDE to be able to refactor have a habit of making codebases that they have to use an IDE to refactor. I grant it could just be confirmation bias.


> metaprogramming is rare

In Java? Not really. Spring has been around since 2003 or so.


Having a framework do metaprogramming, and using that framework is one thing. I think that they mean user-level metaprogramming. This is MUCH more evident in dynamic languages.


I guess he was referring not to metaprogramming in the common sense, but to the change of behaviour of built-in methods and classes.


Ah, I thought that's called "monkey patching".


And probably also the non-local effects of using a framework - and a fairly magic one at that. All of those effects are understood when the functionality is created, but the maintainer (even when it is the author) has a much harder time, probably working with only a partial understanding and with certain things out of mind.


I would say dynamic languages like Ruby are inherently more flexible and powerful, which more easily leads to complexity in the system... (if left unchecked)


Agree. You can write terse, maintainable, well-encapsulated, testable and readable code in almost any language. The problem is the humans, not the language they select.

But yes, complexity is also a major problem.


What I have learned is that what makes a language (or platform, or tool) "good" isn't how easy it is to write good code, but how hard it is to write bad code. Does a beginner following the path of least resistance, on a tight schedule, end up with maintainable code or not?

I'd argue that this is the weakness with (the traditional) OO languages. An experienced developer with plenty of time can write good software with almost any tool. But that's not what's interesting. I want to see tools that not just let but rather guide inexperienced developers into making maintainable software.

Things like "no nulls" or "immutable by default" in Rust and most functional languages are two examples of such designs. OO itself doesn't necessarily mean the developers get trapped in poor code, but the traditional 3 (C++, Java, C#) sure do give developers lots of guns to shoot at their feet. Perhaps not mainly because they are OO, but because they inherited some poor fundamental decisions about mutability and nulls (from C) and about inheritance as a default method of abstraction (from C++) etc.


>isn't how easy it is to write good code, but how hard it is to write bad code

In many languages they make it harder to write bad code but they do so at the expense of delivering functionality quickly. There's a clear trade off there - one which is often very project dependent (some projects primarily need to develop functionality quickly; others prioritize stability).

The problem of slowing down development becomes particularly pronounced when developing something where the biggest risk isn't building the thing wrong, but building the wrong thing. It's extremely expensive to use a very strict language if you're developing code where requirements are highly in flux and cannot be discerned up front.

Ideally general purpose languages should enable a smooth transition from coding up a prototype/MVP all the way to production-hardened, spotlessly clean code.

I like rust a lot but I would only use it to write very low level code where the requirements are cast in stone and speed and stability is of the utmost importance. It's a lovely language but it's very slow to build stuff in, and in almost all cases it should be used to rewrite something existing rather than to build it anew.


I agree - in a prototyping phase it may be beneficial to be able to throw something together. The problem is that what you throw together is invariably the base for the real thing. The motto of "design one to throw away" is great, but I haven't seen it applied as much as it should.

Using a "stiff" set of tools might slow down the first stages of prototyping, but it also makes the prototype easier to modify as requirements change. The question I suppose is simply where the equilibrium occurs. I.e. does Rust or F# make a thing that is easier to modify (because of less coupling) already after one or two months, or only after one or two years?


>The problem is that what you throw together is invariably the base for the real thing.

This isn't invariable at all. More often what you throw together gets thrown away. I'd estimate this happens to more (working) code I've written over my career than not. The hardest thing to get right is often getting the contours of a tool right and ascertaining what it should do - not making it work right after it has proven itself.

>Using a "stiff" set of tools might slow down the first stages of prototyping, but it also makes the prototype easier to modify as requirements change.

This is only really the case where the prototype was fundamentally solving the right problem in the right way to begin with and the subsequent changes are incremental in nature. If the design of, say, a microcomponent is flawed from the outset or it did the wrong thing and you have to re-do it, those "stiff" tools slow you down.

Using an extremely strict set of tools in a prototyping environment also invariably means a lot of extra up front work dealing with the tools' attempts to protect you from bugs which have an extremely low probability of occurring and/or an extremely low cost when they do occur.

If F# or rust or haskell really was quicker and more effective in a prototyping environment as well as when writing production hardened code, programmers would likely eventually converge on only using them. That isn't what is happening.


> If F# or rust or haskell really was quicker and more effective in a prototyping environment as well as when writing production hardened code, programmers would likely eventually converge on only using them. That isn't what is happening.

This does not seem to me to be a safe assumption to make. The market is irrational. There is no reason to believe popularity has any significant correlation to the effectiveness of a tool.


> The market is irrational.

To some degree. But programmers are not, in general, easily herded, even by other programmers.

> There is no reason to believe popularity has any significant correlation to the effectiveness of a tool.

Programmers are not stupid. They are not totally ineffective at pushing management to use sane tools. One or both of those would have to be true for your statement to be true.

Well, you may say, it's a slow process. But Haskell has been out there for more than 30 years. F# is 14 years old, and supported by Microsoft. These languages have had plenty of time for programmers to notice that they were actually more effective in the real world.


> This is only really the case where the prototype was fundamentally solving the right problem in the right way to begin with and the subsequent changes are incremental in nature. If the design of, say, a microcomponent is flawed from the outset or it did the wrong thing and you have to re-do it, those "stiff" tools slow you down.

This. So, so, so, so much this.

I absolutely agree that getting the contours of the tool right (nice analogy there) is (one of) the hardest parts. The main issue I find when using strict, type-driven FP langs for "first draft" style implementations is that I waste time with the unavoidable ceremony that most of these languages require, when what I really want to be doing is probing around to discover the rough edges. The "my last name is Curry" style of FP almost requires you to declare these edges up front as you code.

I actually find that TDD is even more useful in FP contexts than in procedural - mainly because a) it's a good way to help think about the shape of the tool before building it, and b) the FP implementations are often more mentally complex with recursion, pattern matching, and other such things that (IMO) require more brainpower to grok than simple procedures, and to be honest I just find myself needing the tests so I don't go mad trying to be a human compiler. I think even the most staunch TDD fanatics wouldn't try to argue that it's a fast way to prototype.

These days I find myself reaching for dynamic languages with gradual typing for prototypes that might hit production (JS+TS, Python, even PHP). When I'm satisfied with the general shape, I'll add a few type hints here and there to make my IDE friendlier. After a while, usually at the point where there are large additions to requirements, I'll find myself rewriting at least a chunk of the original in a completely different manner, usually in a more type-driven manner, usually with a more FP slant, and often in a different language with more strictness.

I would like to see more mainstream languages support both type-driven FP and simple procedural code without using Haskell-inspired syntaxes or turning into the incomprehensible mess that is Scala. Java is slowly morphing that way, but it still lacks some of the FP fundamentals for when you do want to go full-zealot.


> some projects primarily need to develop functionality quickly; others prioritize stability

But they get the balance completely wrong. In Python, I can just write code that returns some object or None (Python's equivalent of null). In Java/C/C++ I can't - I have to appease the type system (i.e. declare the return type), but there is also no way to even later tell the compiler that it can't be null!


On the other hand, very few languages are capable of rivaling C++, Java, or C# in terms of tooling and available libraries.


On the other hand, it seems a bit wild to propose that tools don’t have inherent affordances of their own.

What complicates assessment is that for many attributes, it’s impossible to assess the tool and the user in isolation. This is not unique to programming languages.


When you mention "no tooling (Haskell)", what do you mean? I am using Haskell (mostly for hobby projects) and am wondering what I am missing out on, since I feel the tools available are sufficient.


This is already achievable using annotations in Java or attributes in C#, in a less verbose way. You just tag your method parameter with `@Valid` or `[Valid]` or what have you, and the framework you're using automatically ensures that the validations you specified on the data model are satisfied at that point in time.
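A rough sketch of that idea using the standard System.ComponentModel.DataAnnotations attributes (shown in F# to match the article's language; CardRequest and its rules are hypothetical):

  open System.ComponentModel.DataAnnotations

  // Declarative rules live on the model itself; ASP.NET-style frameworks
  // check them automatically when binding a request.
  type CardRequest() =
      [<Required>]
      [<RegularExpression(@"^\d{16}$")>]
      member val CardNumber = "" with get, set

      [<Range(1, 12)>]
      member val ExpirationMonth = 0 with get, set

  // Outside a framework, the same checks can be run by hand:
  let validate (request: obj) =
      let results = ResizeArray<ValidationResult>()
      let ok = Validator.TryValidateObject(request, ValidationContext(request), results, true)
      ok, List.ofSeq results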


The irony is that Smalltalk already offered many FP patterns.

In the end multi-paradigm languages will win.

What many FP advocates fail to acknowledge is that all FP languages that got some kind of mainstream adoption, might be FP first, but they are actually multi-paradigm.


No FP language has gotten mainstream adoption. Mainstream languages may have had FP features added, but that's different.

I confess I don't "get" the benefits of FP for the type of applications I work on. Most examples are for a completely different ___domain, make unrealistic assumptions about the patterns of change I actually see in my ___domain, or fill in for weaknesses of a given language's OOP model.


I mean those FP languages that enjoy some commercial use like Common Lisp, Clojure, Scala, F#, OCaml and Haskell.

All of them also expose OOP concepts.


Lisp is flexible enough to be just about every paradigm. Too much flexibility can be a problem in some circumstances, but I don't want to get into yet another "Lisp fight" here about that. There are hundreds on the internet already.


> competing half finished implementations (lisp)

As well as one of the most complete specifications in ANSI Common Lisp.


I'm curious about lispers. I thought quicklisp made a lot of things frictionless.


I think the argument is that for a lot of cases OOP isn't the right paradigm


I'm sure this is partly because I don't read F#, but it looks like they've moved all the complexity into meta-programming madness. This is just being way too clever to play code golf at a high level; this is exactly the sort of complexity we should be fighting against.

Even the initial c# version was over complicated. The complex fluent interface with lambdas and callbacks could be done with a few if statements that would be simpler, faster and require no knowledge of the FluentValidation library. Unnecessary getters and setters to satisfy the encapsulation gods.

If you want to fight complexity go back to basics: you can have a static method returning a validation result with code like this:

  if (string.IsNullOrEmpty(card.CardNumber) || !CardNumberRegex.IsMatch(card.CardNumber))
    validations.Add("Oh my");
Converting if statements to more elaborate constructs is creating complexity, not fighting it.


The problem is when you want validation errors which contain the field name and a descriptive error message. Oh, you want them localised as well?

You then want each field to be validated individually. So you get an error for each field which is wrong.

So you have if statements for each field creating a localised validation error object then placing in a list.

You have 8 fields coming in on your request. It's starting to look like a big method now with 8 if statements creating these localised validation objects.

You also want to share your validation rules between different use cases.

FluentValidation makes that quite quick and terse to achieve compared to simple if statements.


> The problem is when you want validation errors which contain the field name and a descriptive error message. Oh, you want them localised as well?

So the above example would become something like this:

  if (string.IsNullOrEmpty(card.CardNumber) || !CardNumberRegex.IsMatch(card.CardNumber))
    validationContext.Add("CardNumber", Localizer.MessageFor("InvalidCCNumber"));
  if (card.ExpirationMonth < 1 || card.ExpirationMonth > 12)
    validationContext.Add("ExpirationMonth", Localizer.MessageFor("InvalidCCExpiration"));
Throw in some lambdas for the property name and static strings for the message names if you really need to be type safe. Also I'm not sure if FluentValidation handles this, but you need somewhere for root-level errors; not all errors map neatly to a property.

> You have 8 fields coming in on your request. It's starting to look like a big method now with 8 if statements creating these localised validation objects.

There's no local state, the errors are stored in a glorified dictionary, it's a simple imperative series of if statements that anyone who's gone beyond hello world in any language can understand. Big methods are not bad just because they're big (not that 8 if statements is big), they're bad when there is a lot of mutable state that the programmer has to track in their head, validation logic rarely has this problem. It would be fine if there were 1000 properties because the complexity is flat.

> You also want to share your validation rules between different use cases.

So you make a function. It doesn't look like FluentValidation offers any improvement here; it seems like custom rules with this library basically just wrap a function call: https://fluentvalidation.net/start#including-rules or you create a "function" at runtime with rulesets: https://fluentvalidation.net/start#including-rules

> FluentValidation makes that quite quick and terse to achieve compared to simple if statements.

From the examples I'm not sure it's any quicker or more terse after you include the extra boilerplate setup. All it seems to do is turn if statements into where/must calls, for loops into RuleForEach calls and functions into custom RuleSets. It also adds the complexity of using a library.


You're starting to build your own validation library inside that validation context. Inside, I presume it must be adding some kind of error object to a list.

Next problem: rename a field on the object using a refactoring tool. You now have to change validation code to change the field name. You may forget about the validation code if you're not looking at it. You have good tests though, so you would probably catch it. But you want it to be automatic. Maybe nameof?

The class is getting a lot of responsibility, and you want to separate validation out into its own class responsible for that. Maybe extract it to a validation object which operates on a request/command class?

My point is that eventually you end up building something like FluentValidation, with its own set of default rules, validation classes, etc. Maybe FluentValidation is overly complicated, but I'd rather get the speed boost of using a well-tested library that I already know instead of gradually refactoring into something custom.


> You're starting to build your own validation library inside that validation context.

I'm building a simple composite data structure, outside of NPM this is not remotely a library. With a bit of luck not even that, I'm a bit rusty but I think the MVC framework had one built in to handle this.

> Next problem

I already addressed it: a lambda function instead of a string that can extract the property name, but you're right that nameof might be a better option these days.

> The class is getting a lot of responsibility, and you want to separate validation out into its own class responsible for that. Maybe extract it to a validation object which operates on a request/command class?

I already have: the actual validation is in a static method somewhere, so it has only one responsibility, and the validation context is mostly a simple data structure; that's not too much responsibility.

> My point is that eventually you end up building something like FluentValidation, with its own set of default rules, validation classes, etc. Maybe FluentValidation is overly complicated, but I'd rather get the speed boost of using a well-tested library that I already know instead of gradually refactoring into something custom.

It's not an unbounded problem with lots of gotchas down the line; it's a well-known and simple-to-solve problem that practically everyone has seen before. What you're ignoring is the complexity of adding a dependency in general and the complexity of this library in particular. Adding a dependency is not a free lunch; it has a cost in time, mental overhead and maintenance. This particular dependency increases the complexity of your code and delivers practically nothing. As for the speed boost, even assuming it's real, that does not mean it reduces complexity; faster (initial) dev time very often comes at the cost of creating more complexity.


The main source of complexity is requirements. Gatekeeping against the influx of requirements will keep complexity down.

Then there is unnecessary complexity from doing incomplete refactorings and rewrites. If some code cannot handle the addition of a new requirement, it should be replaced. Otherwise you add complexity that roughly takes the logical (if not actual) form if (these cases) { new code } else { old code }. And there is overlap! new code has taken over requirements for which old code still exists, but because of some lingering requirements that only the old code handled, all of it is still there (due to laziness, dependencies or whatever). It's not obvious that some of that code is never used; someone diving into it faces the complexity of figuring out what is the real payload in production now and what is the historic decoy.


It's easy for us to blame "requirements" as the main source of complexity. This isn't accurate. Software exists to serve the needs of the business. Depending on the maturity and stage of the business or the software itself, it's possible that there's a changing set of requirements. As developers, it's our job to figure out how to deliver, not to say "no" to new enhancements and requirements.

The main source of complexity is how we write software, not that the software has requirements.


I think it's fair to say that changing requirements coupled with a limited amount of time results in non-incidental complexity.

It's a systemic problem that results when the entire leadership stack isn't aware of how good software is created. Because of the limited amount of time and resources given, quite often it's a business/management problem.

And that's not to say that it isn't also a software dev problem. We've all seen some horrific things. But I've also seen horrific things because there was no one senior there because they wouldn't pay enough for it.

It's all intertwined; there's not a simple explanation. But changing requirements is definitely a source of complexity.


One source of complexity is accretion. We have certain requirements for our application. Some of those requirements, we don't implement ourselves; we need libraries and frameworks. For instance, we don't make a GUI toolkit from scratch because we need a GUI. But those third-party components are built to requirements of their own. Those requirements outnumber ours. Many of them aren't required for our use cases (like everything that is implemented in any API function we don't use, directly or indirectly). Many are. A simple requirement like "provide a screen where the user can edit their profile settings" translates to numerous detailed requirements, down to how the pixels are pushed to the frame buffer.


I feel like I’ve responded to this before, but I feel like people often attribute their increased knowledge of how to develop systems without bugs to the fancy new language they switched to.

Fact is they could build better software in the old language as well, assuming they started from scratch.


I strongly disagree. We use a strongly typed Lua-like language at my company and it has everything you need to build decent applications, but we hit so many bugs. It took me 12 hours over 5 days to make a simple modification to the business logic (half of that was figuring out and fixing bugs). It took me 4 hours to write something far more complex in Rust with virtually zero bugs; I attribute this almost entirely to sum types (Rust enums), a better type system, an unforgiving compiler, async/await, a better module system, lifetime checking, and an ecosystem of easy-to-grok libraries.

These things just make bugs disappear.

When it comes to IDE experience, the parts that I use often are mostly the same between Rust and our language.

Edit: I'd say it's both, in a multiplicative manner. You need experience and a good set of tools (the language itself being the most important tool) to write good code fast.


> I attribute this almost entirely to sum types (Rust enums), a better type system, an unforgiving compiler, async/await, a better module system, lifetime checking...

This seems like an example of a language effectively abstracting common complexities and pain points, which were probably discovered in earlier languages...


I think it's fair to say that Rust is the first mainstream language bringing these notions out of the woods. Who on earth knows about ML or Cyclone? 0.0001% of the programming crowd, maybe.

The world operates in spirals. We branch off trying things and then go back to old forgotten ideas, etc. etc.


Kotlin has sealed classes which are basically the same as Rust's enums. It also has async / await in the form of coroutines and a decent module system. It also doesn't need manual memory management, which while unsuitable for many applications does remove another potential class of bugs.


> We use a strongly typed Lua-like language at my company [...]

Is this homegrown? In my experience this alone has a major impact in productivity because it is generally hard to create a good implementation and a new language and/or implementation only pays when the existing solution is too bad. (Source: I have made a Lua type checker at work. It worked, but fell into disuse as I moved on and the entire org abandoned Lua in spite of my work.)


Which is a longish way of giving the answer no one wants to hear: Experience is everything.


Erm surely ability is important as well. Programming is no different in this regard to pro sports - some people have more talent (which probably in turn breaks down into innate attributes, determination and learning opportunity earlier in life).


Ability comes from practice, there really are no shortcuts.

Most of the difference seem to be in motivation.


Some people really don't get programming while others do.

When I studied at university many years ago, my course had a reputation for being tough. Before the course started properly, there was a three week intensive Java course with an exam at the end. They suggested that if you didn't pass the exam, then it probably wasn't the subject for you. A couple of people failed that exam and continued with the rest of the year long course anyway. Those people did struggle and I don't think any of them passed.


Are you arguing that all tools are equal? Or that some tools are better than others but the difference is negligible compared to experience? I don’t agree that experience is everything (if by everything you mean that all other factors have zero influence).


Without experience, tools don't matter. A skilled developer will most likely write better code regardless of language/framework.

It's a repeating pattern in this society, fools with advanced gear doing what fools do best.


I'd argue that no matter how "good" you are, you are capped by the tools you're using.

Consider using a hammer against a pneumatic tool. A newbie with the better tool will hose you: they're wielding that condensed knowledge in their hands, and the sum of it overshadows yours with only a hammer.


To a degree. Python has plenty of advanced features that I have rarely used. 20% of the language gets used 80% of the time.


In my opinion it's usually worse using a new framework or language as you don't know the ins and outs of it. Your first project is going to be a learning experience in that technology. I have seen plenty of Python code that was clearly written by Java developers wanting to try out something new.


Figuring out the Mikado method was one of the bigger shocks of my career. I thought I already knew all this stuff, and of course once I saw it I could explain it all. But knowing something is true and seeing it first hand can be a very different experience.

The simpler solution is often hard to see. We get attached to the wrong details or suffer sunk cost fallacies.

When you switch languages the cost of porting is higher, so it shouldn’t be a surprise that you end up with something much simpler. And if the target language attracted you because it makes some part of the problem simpler, that’s important but maybe not the dominant contributing factor to the experience.


I'd agree, but some tools just make it easier to design and build a complex system whatever the level of experience of the programmer(s), designers and so on. This programming language and IDE https://en.wikipedia.org/wiki/Clarion_(programming_language) is behind some of the biggest databases in the world. Because of the openness of the IDE, in one instance it was possible to migrate one country's main cancer charity app from ISAM files to MS SQL, and rewrite it from procedural to OOP code, in just two hours! Admittedly it took a week to build a program to do the coding changes, but that program became a tool in its own right to migrate other programs. The original devs thought it would take a human 3 months to do the work, which is already several months less than if the program had been written in another language!

These are just some of the big corps who use Clarion. https://en.wikipedia.org/wiki/LexisNexis https://en.wikipedia.org/wiki/DBT_Online_Inc. https://en.wikipedia.org/wiki/Experian

Various banks and other stock market listed companies. Even various military use it for their own top secret work.

The key to its success is the template language, which enables the programmers to work at a higher level of abstraction, which for some reason just doesn't seem popular amongst many programmers. You can use the templates to write code in other languages, including Java, PHP, ASP.NET, JavaScript and more.

It's safe to say that everyone in the Western world will have some of their details stored in a Clarion-built database, and it's not just limited to building databases; it's even been used to build highly scalable webservers. There's also C/C++, Assembler and Modula-2 built into the compiler, so you can get right down to low-level coding if required, and there's a Clarion.net version which is mainly like C# but has some of the data handling benefits of F#.


I can attest to this but it's definitely misleading. Some languages have features and patterns that are straightforward and easy to learn in that language, that once you've learnt you can emulate/replicate in almost every other language but only because you already know how it works, why it works and where the limits of your abstraction are.

And these abstractions will often be overlooked or misused by developers who have not used them in languages where they're native, making them a net negative instead of an obvious benefit.


"Started from Scratch". I have never seen any project where something has been started from scratch actually turn out well.


People are going to jump on the imprecise wording here but this is generally correct. People need to be very wary of "let's just throw this out and start over". There are cases where it makes sense, but 95% of the time it amounts to "the old code is complicated and it's a lot more fun to start over than it is to figure out the old stuff some other guy wrote". The old code is complex because it actually works and has taken the punches of production deployment.

Most often, people start down this path bright-eyed and bushy-tailed, and end up realizing after about 4 months that actually all that complication was doing something pretty useful. People need to be careful before they dismiss real working-in-the-wild code.


Rebuilds are generally a bad idea, and I've never had a good experience doing that. Apart from what you've already stated, it's kind of an insult to the team before you, that you just deleted their code.

Replacing a system is also not really solving a problem that the business cares about. So this inevitably leads to feature creep: "If you're rebuilding it, can you add X, Y, Z...". This then raises the bar even higher...

The better alternative is to modularize the system somehow and replace separate chunks... but that's easier said than done


Now read that again. Every single piece software that exists, successful or otherwise, was at some point started from scratch.


In the context I was replying to, it should have been obvious that I was talking about rebuilding an existing project from scratch.


Experience does play a part, but no: languages and paradigms do make a difference. I had to come up with a tiny dynamic programming answer for a Java interview and it was a massive burden, even though I could write the same (quality and perf) version in other languages in 2 minutes.


This can be validated or refuted by going back to that language and building something. If the same old roadblocks reappear, it was the darned language, after all.


On the C# API I develop for we overcome these issues in a few ways.

1. Our way of implementing DDD helps us organize code into infrastructure and ___domain. Domain objects typically aren’t allowed to have external dependencies. Infrastructure code is primarily for data access and mapping. Our API code (controllers and event handlers) ties the two together.

2. Given the above we are able to write a) very clear and concise unit tests around ___domain objects and API endpoints and b) integration tests that don’t have to bother with anything but data and other external dependencies.

The result is that when we go to ask, “How does the system respond given X?” we can either point to a test we have already written or else add a new test or test case to cover that scenario.

We can even snapshot live data in some critical areas that we can then drive through our high level ___domain processes (we process payroll so it’s a lot of data to consider). If someone wants to know how a particular pay run would play out, they can just craft the data snapshot and write some assertions about the results.

We also use FluentValidation (on API objects only) and test those as well (but only if the rules are non-trivial).


I’m quite happy to be seeing conversations about the benefits of simplicity, boring tech, etc lately.

It’s a breath of fresh air from the sadly too common (IMO) flavor-of-the-month new tech promotion.


Sometimes it's hard to disentangle if a conversation is having an upward trending trajectory, or if you just happen to pay attention more to the links that mention some subject you happened to have caught an interest in.


Yeah, I’ve been hearing a lot about the Baader-Meinhof effect recently


Was this intentionally a subtle self-referential joke? Or was that an accident?


Domain Modeling Made Functional is a fantastic book (it's what this article is inspired by). It taught me a lot about encoding business logic using functional programming techniques.

Functional programming is basically my goto when there is complicated business logic involved now.


I really love the concepts provided by Domain Driven Design (DDD), regardless if you choose OOP or FP to implement it with.

It's fine if you want to choose C#, and there're better ways of addressing the approaches to validation in OOP than were provided in the examples. Value objects are a nice way to ensure strong immutable types like credit cards can be created and passed around without requiring separate validation classes or wild abstract base classes.

I like exceptions in C# - back when I coded in it I'd make a lot of ___domain/business exceptions that the code would throw anytime there was a violation. Here I think Java is a lot stronger in that you are forced to declare what types of errors can be thrown from a function, so you have a chance of handling them. In C# and TypeScript, I'm finding myself having to lean on codedoc "@throws" to do the same thing (though not as reliably).

That said, I'm generally fine with most exceptions not being handled and instead bubbling up "globally". If it happened because of an API request? Let middleware map it back to a 400 Bad Request with the error body. If it happened while handling a message? Log it, and retry the message until it gets dumped to the DLQ. If it's not a violation, then it may not be an exception in the first place, in which case it can be returned with a compensating action performed.

I really like F#, but I struggled to find the actual benefit of it in this article from a DDD perspective.


I find doing things like having types that flow through different stages -> UnvalidatedEmail, ValidatedEmail, VerifiedEmail is a lot better in F#.

In C# you need to create a lot of value object classes for that, or have some kind of property inside the value object which indicates the current state of the email.

Even then you won't be able to do exhaustive pattern matching on it to guarantee each situation is handled.
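One way to get that exhaustive matching (a rough sketch of my own using the stage names above, not the article's code) is a discriminated union with one case per stage:

  type Email =
      | UnvalidatedEmail of string
      | ValidatedEmail of string
      | VerifiedEmail of string

  // Remove or add a case above and every match like this one
  // produces a compiler warning until it is updated.
  let canSendTo email =
      match email with
      | UnvalidatedEmail _ -> false
      | ValidatedEmail _ -> false
      | VerifiedEmail _ -> true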


Maybe I'm missing something, but what stops you from having Unvalidated<T> etc wrapper classes in C#?


I think the problem with OOP is that it is merely a pattern and yet has been integrated into many languages in a very prominent way. This can mislead people into thinking it's something more fundamental and generally applicable - but it's not, it's just another pattern, which for some things works great, and for others is terrible. Just like when you read that awful code that someone wrote just after reading a book on pattern X and forced it upon a project - the difference is that OOP is forced on almost everything.

Once you realise this it's fine, you just don't use those features when it doesn't make sense.


An excellent clickbait article for functional programming. For example, take the brief discussion of validators: why couldn't you just put the validator in the credit card object? The credit card validator should only be used by the credit card object, so the concern about not being able to see it is addressed by simply navigating to the classes that compose the credit card. Putting it in the constructor or other credit card methods fixes the "not being forced" aspect (when I look at the FP example, it looks like it's doing just this). And you can use the validator anywhere in the CC object.


In my experience, one of the best ways to fight complexity is by reducing the amount of things you have to keep in your head. In practical terms this translates into being able to do local reasoning about your code.

I find that imperative OO style naturally leads to complexity. Passing references to mutable data all over the place creates tight coupling across your entire application. This makes it impossible to guarantee that any change you make is local without considering every other place that references the data. Meanwhile, objects are opaque state machines and programs are structured by creating many interdependent objects.

These aspects make it pretty much impossible to tell what any particular piece of code is doing just by reading it in large applications. The only option is to fire up the debugger, get the app in a particular state and look at the data. However, there are typically many ways to get into any particular state, and it's really hard to know that you've accounted for them all. So, a debugger is a heuristic at best.

FP and immutability tackle both of these problems head on. Immutable data directly leads to the ability to do local reasoning about your code, and allows you to write pure functions that can be reasoned about independently. Meanwhile, data is not being abstracted inside opaque state machines that provide ad hoc DSLs as their API. Instead, it's explicitly passed through function pipelines to transform it.
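As a tiny illustration of that last point (my own sketch, with a hypothetical Order record): immutable data is threaded through small pure functions, each of which can be read and tested on its own.

  type Order = { Total: decimal; Discount: decimal }

  // Each step is a pure function: same input, same output, no hidden state.
  let applyDiscount order = { order with Total = order.Total - order.Discount }
  let addTax rate order = { order with Total = order.Total * (1m + rate) }

  let finalTotal order =
      order
      |> applyDiscount
      |> addTax 0.2m
      |> fun o -> o.Total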


Much of the time the "big picture" isn't local. That's just the nature of the big picture by definition. I find it better to put "big picture" stuff in the RDBMS, including UI issues (see "Table-Oriented" nearby), and keep only local details in code.

For example, the menus and navigation can almost all be tracked and managed in the RDBMS. It's easier to query and study the structure that way because I can sort, search, group, and filter it by any way --I-- please for any given need; I don't want to be stuck with YOUR single grouping; I want to be the Grouping God when studying the app. File-centric code can't do that (at least not without an IDE that reinvents a database). Therefore, don't do it. Use code where code is best, and RDBMS where RDBMS is best.

Code sucks at the big-picture and FP won't change that.


I honestly have no idea what you're talking about.


I suppose it would take a fully coded example to really explain it. I cannot provide one at this time.


Reasoning locally means maintenance on FP software should be easy.

So beginners could solve simple bugs and add simple features to FP applications.

Yet it's not a common thing.

Why?


My team regularly hires co-op students who have no trouble solving all kinds of problems, as well as implementing features. I'm not sure where you get the notion that it's not a common thing?


Is it just me? I feel this is confused and adding complexity in some areas, not genuinely optimal for simplicity.

I tend to feel that unexpected, unrecoverable exceptions are best treated as just that -- exceptions. Applying a C-style function return value check seems backwards.

And the "Interpreter"?? Proper purpose of Interpreter or DSL is for dynamic (configurable or user-input) code, not to implement basic sequential flow and the 'if' statement which the underlying language already provides.


Seems like a pretty classic case of second system syndrome to me.


I agree that OOP is limiting, but Functional is not the fix. Table-Oriented Programming is the future. Your code snippets for validation etc. could be associated however your ___domain prefers, and you can query them to be together by field or by any other grouping as needed. You just have to make sure you have the proper meta-data in place, such as field name/ID, entity, screen, event type, etc.

File-centric code forces a hierarchical big picture structure, but many relationships are either not hierarchical, or need additional non-hierarchical ways to view/group them. Relational is more powerful and more flexible than file systems. (It has some rough areas, but they can be worked around or fixed.)

Start backward next time and think how you would LIKE your code organized, forgetting about frameworks you know. If you do this often enough, you'll realize RDBMS-based code management is where we should be heading. About 90% of validation and field management could also be attribute-driven: data-dictionaries would do most of the grunt work.

With OOP and FP you are forced into choices such as "should this be its own class, or an object instance, or a group of classes for composition?" etc. etc. When tablizing your event & validation snippets, you are not forced to choose. They are grouped "by" anything you want, and by multiple groupings at the same time: multiverse. I agree that FP is probably more flexible than OOP, but it's also less disciplined: large FP systems look like the spaghetti-pointer databases that preceded RDBMS. Hierarchical, logical, and pointer-based DB's thrived for a while, but relational won, for a reason.


Spreadsheet-like dataflow programming actually works quite naturally in FP. See re-frame and javelin as examples:

https://github.com/Day8/re-frame

https://github.com/hoplon/javelin


I didn't see any actual spreadsheets at those links. But my point was that paradigm matters less if you use table-driven designs (TDD). With TDD you are dealing less with the issues of attaching & managing behavior to/with structures (which the article seemed to emphasize), focusing mostly on specific business logic and exceptions to rules (oddities). The majority of your app would work without writing a snippet of code (except maybe regular expressions for validation & formatting.)

Your actual code would be "dumber" and event-specific such that paradigm differences matter less. Complex associations are managed via the RDBMS so that code rarely has to manage them.

I should make a distinction between framework coding and application coding. I won't say which paradigm is "better" for the first; I'm mostly focusing on the application-side coding here.


One really excellent resource that I'm making my way through currently is "A Philosophy of Software Design" by Ousterhout. It's very practical and provides several strategies for reducing complexity and identifying practices that contribute to it.


Reducing complexity starts with abstracting bounded contexts, understanding relationships between them, identifying ubiquitous languages, understanding the autonomous bubble pattern, and then worrying about code.

I’d add that the DAL should be shelved in favor of a repository pattern, and the business layer in favor of aggregate roots and value objects.

It’s all in Eric Evans’ Domain Driven Design book that still carries enormous weight.


Perhaps the best way to take this seriously is to ensure developers RTFC (Read The Fucking Code). While that sounds like a given it’s taken for granted that developers actually do that before forming all manners of biased or incomplete assumptions.


That's necessary, but the aim of good software practice should be to make it as easy as possible for the reader of the code to follow and understand it. Including skipping over well named functions which should not nest unintuitive behavior.


This. All day long.

On the current code base I work on I swear every single one of my pull requests contains large amounts of variable and method renaming to make it easier for the next person because if it takes me half a day to compute an understanding of what "var result = apiTotal - total" is actually doing then it isn't named nearly clearly enough.


Having recently jumped into a project with MongoDb as the database, I have realized how much easier a schema definition makes understanding the project. It's so much easier to understand a number of create table statements than it is to get that information from code.


I really support this kind of writing, I don't think this kind of stuff is written about enough, and it's exactly what everyone learns the hard way working on software projects. Skill wise, knowing the kinds of reasoning/techniques that articles like these discuss is the difference between "junior" and "senior" developers (though I very much dislike those terms).

One thing I want to point out -- if at all possible do not use decimals for money:

> We could use decimal (and we will, but no directly), but decimal is less descriptive. Besides, it can be used for representation of other things than money, and we don't want it to be mixed up. So we use custom type type [<Struct>] Money = Money of decimal .

The custom type is a great idea (try to write code in languages that make this concept easy, I suggest Haskell & type/newtype). The problem here is that decimal is the wrong type for storing money[0]. Your first IEEE754 floating point bug teaches you this, but in general trying to write code around manipulating decimals can get very messy really quickly when precision is involved in any case. Another example is JSON, JSON numerics are actually all floats under the covers, so this means if you store more precision or a bigger number than it can handle, things can get wacky if you're not careful -- this is one of the places where being "stringly typed" (and defining your own unpacking to go with your ___domain types) can be very helpful.

Libraries like dinero.js[1] exist because of how surprisingly hard this problem is, kind of like how moment[2] exists due to how hard dealing with time can be.

[0]: https://stackoverflow.com/questions/3730019/why-not-use-doub...

[1]: https://github.com/sarahdayan/dinero.js

[2]: https://momentjs.com


> The problem here is that decimal is the wrong type for storing money

Note that the code here is .NET, where decimal is a type explicitly for use in financial calculations[0]. It is still floating point, which means you still need to really watch what you are doing, but it's very high precision.

In general though, if your application allows for it, you should store money using integers representing cents (or the relevant smallest unit for the currency).

[0]: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe... (this is a page for C#, but it is more useful than the general page for Decimal)
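A minimal sketch of that integer-minor-units approach (my example; the Cents wrapper and formatting helper are hypothetical):

  // One case, one int64: amounts are whole numbers of the currency's smallest unit.
  type Cents = Cents of int64

  let add (Cents a) (Cents b) = Cents (a + b)

  // Only convert to a human-readable form at the display boundary.
  let toDisplay (Cents c) =
      let sign = if c < 0L then "-" else ""
      sprintf "%s%d.%02d" sign (abs c / 100L) (abs c % 100L)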


> It is still floating point, which means you still need to really watch what you are doing, but it's very high precision.

The key property of a decimal type (in any language, regardless of whether it's fixed-point or floating-point) is that it's not BINARY floating point. Yes, you need to watch what you are doing (shouldn't you always?)... but you can limit analysis to the appropriate precision, without worrying about binary conversion artifacts like 0.3 not being 0.3.

> In general though, if your application allows for it, you should store money using integers representing cents.

What you're describing is a poor man's fixed point. Much better to just use a fixed point decimal type so you don't need to remember to apply the scale factor everywhere.

In any case, you can't get around the need to determine a ceiling in your necessary precision.
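To make that artifact concrete, a quick F# Interactive check comparing binary floats with .NET's base-10 decimal:

  0.1 + 0.2 = 0.3      // false: binary floats can't represent 0.1 or 0.2 exactly
  0.1m + 0.2m = 0.3m   // true: decimal keeps the base-10 digits exact

  // The article's wrapper style still applies on top of whichever representation you pick:
  type Money = Money of decimal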


Agreed. I dislike the terms "sr" and "jr" developer, but I think we too often call someone a senior because they know a lot of technology or they've been around for a while.

I've been thinking a lot about these terms and have been defining these roles internally as:

junior: can build relatively simple solutions, still gaining experience.

mid-level: can build complex systems, but the solution is going to be complicated.

senior: builds simple solutions to solve complicated problems.

I mentor internal devs to help shift their focus from learning more complicated technologies toward learning how to produce much cleaner and simpler solutions. This is the career path that I think helps them the most and also helps our company scale.


I love F#, but using F# instead of (type|java)script would add a lot of complexity to my 'full stack'. Let's say I replaced my node.js backend with an F# one; the following issues would arise:

1. I now have two different languages for client and server, and can't share e.g. validation code.

2. Dealing with 2nd class linux support (no REPL)

3. No library that combines the maturity and simplicity of express.js. Yes I am aware of ASP.NET Web API and no I do not think that compares.

So as much as I love pipes and partial application and concise syntax, F# would create an explosion of complexity, not fight it.

EDIT: This also follows the common trend I see of starry eyed functional programmers who - to put it bluntly - don't seem to know what they are criticizing.

> Of course some things from here we can do in C#. We can create CardNumber class which will throw ValidationException in there too.

A Result datatype with all the bells and whistles is what, a few hundred lines in C#? I agree that it sucks that there isn't one built in, but if you like them and you're stuck in C#, code one up and forget about the 'issue' ever again.

> But that trick with CardAccountInfo can't be done in C# in easy way

That example looks trivially translatable to an abstract class with two concrete sub-classes to me.

EDIT 2:

The F# docs are awful. Struggling to find the API for the Result module. There's a guide on how to use it, but when you look at the core namespace it's missing.
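For what it's worth, the Result module does exist in FSharp.Core; the workhorses are Result.map, Result.bind and Result.mapError. A quick sketch (the month parser is a hypothetical example of my own):

  let parseMonth (s: string) =
      match System.Int32.TryParse s with
      | true, m when m >= 1 && m <= 12 -> Ok m
      | _ -> Error "invalid month"

  // map transforms the Ok value, mapError the Error value;
  // bind would chain another Result-returning step.
  let describe s =
      parseMonth s
      |> Result.map (fun m -> sprintf "month %d" m)
      |> Result.mapError (fun e -> "validation failed: " + e)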


I really don't think it adds complexity. Sure, it's going to be different, so there's a mental hurdle to clear in learning how a new stack works. That doesn't mean it's going to be more complex.

> 1. I now have two different languages for client and server, and can't share e.g. validation code.

You could use Fable to transpile F# to Javascript, keeping a single language

> 2. Dealing with 2nd class linux support (no REPL)

FSI is available under Linux, but I admit it's got some hairs on it. (FWIW, FSI on Windows needs a lot more TLC too).

> 3. No library that combines the maturity and simplicity of express.js. Yes I am aware of ASP.NET Web API and no I do not think that compares.

This is purely a matter of taste. While express.js is very accessible, I don't think it's any more so than Giraffe (a wrapper around ASP.NET Core).

I'm all-in on F# now - once you're over the (mild) initial learning curve, you see huge dividends from the smaller codebase, static typing, fewer null-checks and drastically less testing required.


Hi, thanks for the open minded response. It's an issue I enjoy discussing.

> You could use Fable to transpile F# to Javascript, keeping a single language

That seems like another explosion of complexity. Source maps, mapping F# to JS constructs when debugging, niche tooling, wrapping other JS libraries in a nicely typed F# package... IME it's best to avoid transpiling anything more exotic than TypeScript.

> FSI is available under Linux, but I admit it's got some hairs on it. (FWIW, FSI on Windows needs a lot more TLC too).

I appear to be mistaken - I remember I was excited when the dotnet tool came out, but then they took away the REPL. FSI is mono only, right?

> This is purely a matter of taste. While express.js is very accessible, I don't think it's any more so than Giraffe (a wrapper around ASP.NET Core).

Yes it's very subjective, we can agree to disagree here.

> I'm all-in on F# now - once you're over the (mild) initial learning curve, you see huge dividends from the smaller codebase, static typing, fewer null-checks and drastically less testing required.

I love F#, I used it professionally and it was a great experience. And you are right, the conciseness is hard to beat. But the static typing/fewer null checks thing is easily solved in JS land with the typescript compiler.


I think these usually end up with two people agreeing to disagree :)

Yes, you do lose source maps by transpiling, but in my (albeit fairly limited experience), the type checking means I have drastically less debugging to do in the first place. You're right, though, the tooling leaves a lot to be desired (and this is also on the assumption you have TS definitions for third party libraries - I agree it becomes a lot more difficult if you don't).

FSI is available in both Mono and .NET Core 3 (which is still in preview).


I'm able to use FSI with the core-sdk 2.2.300, but I believe FSI is still in preview regardless of which SDK you're using.


> You could use Fable to transpile F# to Javascript, keeping a single language

That would only solve your runtime issues. You would still need to write the same code twice.


How do you figure?


What would the concrete implementation of the class that represents the deactivated credit card look like?


It would be an abstract class with an abstract processPayment method. Then two concrete classes that implement it, each with their own implementation of processPayment.

https://en.wikipedia.org/wiki/Expression_problem


  type CardNumber = private CardNumber of string with
      member this.Value = match this with CardNumber s -> s
      static member create str = ...

Haha, no thank you

>OOP languages for modern applications gives you a lot of troubles, because they were designed for a different purposes

For which purposes if not software development exactly?


> For which purposes if not software development exactly?

Well, if you recognize only one category, "software development", I can't help you. C was designed for software development as well, but you wouldn't choose it to do web development, I hope? It's a good tool when you need to develop something small and resource-efficient. OOP fits well when you don't have much concurrency but you manage complex states in memory (which you do with the help of objects and inheritance). And functional programming fits well when you need to build data-flow applications.

And the truth is that in a big, complex system you need decent support for both FP and OOP. The point is that languages like C# decently support only OOP.


Fighting stupidity, laziness and a misplaced love for typography in software developers



