
"Testing does tend to have a negative impact on API design"

I think this sentiment is the single root cause of the current back and forth. In my experience this is completely untrue. For DHH and others there appears to be some sort of culture of dogmatically polluting APIs for testing which I'm not familiar with.

Everything else seems to be talking around the real issue and finger pointing.




Let's take as an example dependency injection. The genesis for DI was to facilitate the testing of objects that are dependent on objects that are not easily constructed, such as DAOs. And the whole reason this was done was so that these classes, the ones that have a DAO dependency, could be more easily and quickly tested.

So, in order to use (constructor) DI you would have a constructor that looks like this:

    public InvoiceController(InvoiceDao invoiceDao) {
        this.invoiceDao = invoiceDao;
    }
The key point to remember here is that this has been done so that in our unit tests we can inject a fake InvoiceDao. Now, let's say that our InvoiceDao class has one and only one constructor:

    public InvoiceDao() {
        // sets up connection parameters, etc.
    }
Then the constructor for InvoiceController could be simplified to:

    public InvoiceController() {
        this.invoiceDao = new InvoiceDao();
    }
This is quite a bit cleaner from an API perspective, and that is the entire point. This is only a simple example. For more complex classes, with multiple dependencies, it really becomes cumbersome. What if InvoiceController also needed access to ReporterDao? Well, then you need to add that as a parameter to the constructor as well. Your API is made more complicated, all in an effort to make testing possible.
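Sketched out, the widened constructor might look like this (a sketch only; the stub DAO classes stand in for the real, connection-backed ones):

```java
// Stub stand-ins for the real, connection-backed classes.
class InvoiceDao {}
class ReporterDao {}

class InvoiceController {
    private final InvoiceDao invoiceDao;
    private final ReporterDao reporterDao;

    // Each new dependency widens the public constructor signature.
    public InvoiceController(InvoiceDao invoiceDao, ReporterDao reporterDao) {
        this.invoiceDao = invoiceDao;
        this.reporterDao = reporterDao;
    }
}
```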

This does not, of course, invalidate the benefits of unit testing, which are many. But it does expose a negative that is not frequently acknowledged, and that's what DHH is talking about, and what Kent has failed to address.
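For reference, the testing payoff being described would look roughly like this; the method names and the fake are hypothetical, and InvoiceDao is reduced to a stub:

```java
import java.util.Map;

// Stub for the real DAO, which would set up a DB connection.
class InvoiceDao {
    double totalFor(String customer) {
        throw new UnsupportedOperationException("requires a database");
    }
}

class InvoiceController {
    private final InvoiceDao invoiceDao;

    public InvoiceController(InvoiceDao invoiceDao) {
        this.invoiceDao = invoiceDao;
    }

    public double invoiceTotal(String customer) {
        return invoiceDao.totalFor(customer);
    }
}

// The fake injected in unit tests: no connection setup, canned data only.
class FakeInvoiceDao extends InvoiceDao {
    private final Map<String, Double> totals;

    FakeInvoiceDao(Map<String, Double> totals) {
        this.totals = totals;
    }

    @Override
    double totalFor(String customer) {
        return totals.getOrDefault(customer, 0.0);
    }
}
```

A test can then construct `new InvoiceController(new FakeInvoiceDao(Map.of("acme", 42.0)))` and exercise the controller without a database, which is the speedup the parent comment is describing.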


"The genesis for DI was to facilitate the testing of objects that are dependent on objects that are not easily constructed, such as DAOs. And the whole reason this was done was so that these classes, the ones that have a DAO dependency, could be more easily and quickly tested."

I disagree with this central premise. If you have introduced DI simply for testing reasons then you have missed the point. The reason for DI is that a very common point of change in software is the interaction between clients and services. Tight coupling at this point of change is usually a design smell. That DI enables automated testing is a byproduct of the code being loosely coupled, not the point of the loose coupling. That TDD drives you to this sort of solution, all other things being equal, is one of the advantages of TDD.

If DI has become cumbersome and is making your code hard to reason about/read/maintain etc then blame DI, not the testing. You also might want to consider the higher level architecture of your solution if you are finding yourself with lots of complicated dependency chains.


"The reason for DI is that a very common point of change with software is at the interaction between clients and services. Tight coupling at this point of change is usually a design smell."

The problem with TDD is that if it is NOT a common point of change, TDD adherents often say that you should introduce DI anyway to make testing easier. "One day you might want to change out the whole database, so you should abstract out the DB into a blah blah blah pattern, with a layer of the rah rah rah pattern between to ..."


> The problem with TDD is that if it is NOT a common point of change, TDD adherents often say that you should introduce DI anyway to make testing easier.

To the extent that's a problem, it's not with TDD, nor with DI, but with the particular people giving the advice. Why do people think that "some of the people who follow X give bad advice Y" is a criticism of X when Y is not something that X requires?



This is a very insidious comment. You have actually committed the logical fallacy yourself, since your position is "All TDD proponents do X." When someone points out, "No, some TDD proponents do X, and many of us TDD proponents think X is bad practice," you accuse them of the No True Scotsman fallacy, when in reality your position is the one claiming a universal property of TDD.


Tell me who is a good representative of TDD that has written a lot about it. Where is the canon of TDD, so to speak?


Not sure if I'm qualified to appoint a canon, but I can tell you about the books that were very influential to me with regard to TDD:

Test-Driven Development: By Example, Kent Beck. This came out in 2002 and I haven't read it since then, but at the time it really influenced a lot of my TDD thinking. I expect if I read it now I'd have some complaints based on my decade of experience with the process.

Growing Object-Oriented Software, Guided by Tests, Freeman & Pryce. Much more recent and includes more modern thinking about TDD with acceptance/integration tests.

Working Effectively with Legacy Code, Michael Feathers. A great book for dealing with TDD when you aren't doing green field development. He is a bigger fan of Mock Objects than I am, but he illustrates some examples when mocking is the most appropriate response to the current requirements.

All that said, TDD is like any other software methodology. It is a set of patterns and principles that each practitioner interprets for themselves. At its core, though, it's pretty simple: automated verification of specifications is as important as implementation of the specifications for any sufficiently long-lived, complex software system. Writing those verifications first provides design, process, and management advantages over the historical process of writing them last, and has a high correlation with well-factored code. That's it; no doctrines about unit vs. integration tests, mocking out DB access, or 100% code coverage.


"and management advantages over the historical process of writing them last and has a high correlation with well factored code."

Where is the data supporting this claim? I don't believe it is true.

"That's it, no doctrines about unit vs integration tests, mocking out DB access or 100% code coverage"

Why do you think so many of us feel that it is an ideology? Are we seeing ghosts and imagining the whole thing?


"Where is the data supporting this claim? I don't believe it is true."

This is a central premise of TDD and has been neither proven nor disproven yet. What we can say is that previous software methodologies were also lacking here; we cannot prove or disprove their superiority over TDD with regard to this statement either. If you distill all debates about TDD down to their essence, they revolve around this premise.

I am fine with someone being skeptical of this claim, but I would prefer if they offered A) a measure we can use to test this hypothesis and B) a contrasting methodology that performs better with regard to the measurements provided in A. The single biggest outstanding problem in software engineering is finding a metric by which we can judge software quality objectively. Because it is such an elusive goal, other less precise proxies for software quality have been proposed to stand in for the more complex metric. TDD hangs its shingle on "testability". Even though this has obvious defects, I've seen no evidence for any other objective measure being a more precise indicator of software quality, and quite a few advantages to "testability". Namely, it is measurable.

"Why do you think so many of us feel that it is an ideology? Are we seeing ghosts and imagining the whole thing?"

No, I'm sure that you have encountered well-meaning but flawed practitioners of TDD. Your skepticism of their process doesn't bother me. What bothers me is your (and DHH's) painting of all TDD folks as cultists. I don't believe this stuff because Uncle Bob told me to believe it. I believe it because my experience shows that, absent an objective measure for software quality, rigorous use of TDD practices trends toward better software than a lack of rigor does.

If you have an alternate rigorous approach that you believe trends (or better yet is provably) better in software quality, by all means outline it. I know that TDD is flawed and am happy to find its successor.

Let me ask you this: prior to the rise of TDD, how prevalent do you think automated testing was? If it was very prevalent, why is it only after the rise of TDD that automated testing became a central part of any build workflow, and that the entire concept of Continuous Integration/Deployment developed?

In my experience, automated testing prior to the rise of TDD was not widespread. But maybe I was seeing ghosts and imagining that.


Neither of us ever said all TDD folks are cultists. We are both tired of being told that we are absolutely doing it wrong if we don't practice TDD. For an example of people telling us this, please see this talk by Bob Martin and jump to the 58 minute mark. http://m.youtube.com/watch?v=WpkDN78P884

Bob is not an obscure figure by any stretch of the imagination. You have admitted to having no real data to back up the claim that writing tests first leads to better code, yet we have Bob here telling us we are absolutely wrong if we don't follow his religion.


> Why do you think so many of us feel that it is an ideology?

Probably because you've run into cargo cult practitioners that treat it that way -- the same as every methodology that's become known outside of a narrow initiating group has -- and human perceptual biases tend to overemphasize and overgeneralize the most extreme examples.


Is Bob Martin a cargo cult practitioner?

jump to the 58 minute mark. http://m.youtube.com/watch?v=WpkDN78P884


That wasn't the central premise. The central premise is that there is a trade-off in class design when doing unit testing.


And I say that if you are making design decisions BECAUSE of testing, you are doing it wrong.

The premise of TDD (and one I've found true) is that there is a natural correlation between easily tested code and well-factored code, not that you should be compromising your design for testing. Thus the original comment about bad API design as a result of TDD rings false with me.

You have apparently encountered cases where people have introduced unnecessary DI in the name of ease of testing when in reality they should have been fixing their central design problems. Software with long dependency chains and complex IoC containers is not easy to test, though looking through the comments it seems that many people have fooled themselves into thinking it is.


> Let's take as an example dependency injection. The genesis for DI was to facilitate the testing of objects that are dependent on objects that are not easily constructed, such as DAOs.

Really? Because I've always seen DI advocated for loose coupling, with testability just coming along for the ride the way it does in general with loose coupling. I've rarely seen it advocated for testability independently of the broader architectural motivation for loose coupling.


But that's not true at all. The genesis for DI was nothing close to that (as I learned it). DI's main purpose isn't testing, it's flexibility. DI isn't there solely for testing, or if it is, you're probably missing the point of it.


Right. However, in many DI-heavy apps I've seen, nobody ever makes use of that flexibility. Otherwise it's just extra, unnecessary complexity, sitting around in verbose XML files and annotations, causing code clutter.

These are "enterprise" Java apps with Spring, etc.


I personally don't see how a couple parameters in the constructor are adding so much complexity that you'd be better off without DI.

A good DI container can make it much easier to follow a pretty good amount of general best practices. Single Responsibility, DRY, and Open-Closed Principle to name a few.


This has been my experience as well. The core of Spring -- IoC/DI -- solves a problem that is largely of its own making.

I've had better luck and cleaner code without it than with it.


You miss the point of DI.

DI is solving a problem common to non-trivial OO apps - where do you create the objects? Every sane approach to this tends to lead to a form of DI.

Spring is a bad example of DI because it encourages an implicit, DI-everything style. Take a look at Guice or other frameworks where bindings are explicit.
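Container aside, the "where do you create the objects?" question can be answered in plain Java with a single, explicit composition root. This is only a sketch with hypothetical names:

```java
class Dao {}

class Service {
    final Dao dao;
    Service(Dao dao) { this.dao = dao; }
}

class Controller {
    final Service service;
    Controller(Service service) { this.service = service; }
}

// The one place that knows concrete types and wiring order;
// everything else just receives its collaborators.
class CompositionRoot {
    static Controller wire() {
        Dao dao = new Dao();
        Service service = new Service(dao);
        return new Controller(service);
    }
}
```

Frameworks with explicit bindings essentially formalize this; the sketch above is the zero-framework version of the same idea.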


"Testability" tends to add unnecessary (by KISS) configurability and indirection to code composed out of sub-units.

Because a testable unit of code must have its inputs and collaborators controlled, all the inputs and collaborators need to be configurable or replaceable, even when such configuration is unnecessary to the business value aimed at.

In statically typed languages like Java, this tends to exhibit as a proliferation of interfaces that have only a single implementation, namespaces with too many identifiers publicly exposed, and lots of open extension points, which hurts versioning. It tends toward requiring complex IoC containers with associated config. Code ends up filled with boilerplate glue and over-abstraction (factory factories, etc.), and becomes much harder to read, because IDEs can no longer follow the link between a method call and its implementation: the method call is all too often dynamically resolved via an interface implementation.
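The single-implementation-interface pattern being criticized looks like this (hypothetical names):

```java
// Exists only so tests can substitute a double...
interface InvoiceRepository {
    void save(String invoice);
}

// ...yet has exactly one production implementation, so every call
// site goes through an indirection that an IDE cannot resolve to
// a unique target without extra tooling.
class JdbcInvoiceRepository implements InvoiceRepository {
    public void save(String invoice) {
        // real database write elided
    }
}
```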


> "Testability" tends to add unnecessary (by KISS) configurability and indirection to code composed out of sub-units.

You mean "White Box Testability". Don't forget Black Box testing is agnostic to the implementation.


To be sure, "negative" is the weak point in the statement/sentiment. It's a value judgement subject to where the person is drawing a line between the goods and the bads (methodology always unstated), so the argument wallows in indeterminacy.

As long as people throw out a simple "the real issue" without explaining what they mean by it, it's inevitable. Pretty soon, every player is talking past one another, someone writes a blog post purporting to lay out the landscape of all sides, someone counters rap-battle style, someone else posts to HN "Ask HN: your favorite reasons to TDD," which provides the foundation for an ebook, after which three people post "Show HN: site to plan an optimal TDD strategy." After that, we wait for someone to posit a successor and the cycle starts over again.


This is exactly an argument I am having at work right now, with someone who hates dependency injection.

If I have a controller that calls a service, and a service that calls a dao, then the service needs the dao injected:

  public Service(Dao dao) {
    this.dao = dao;
  }
Which means (in the absence of autowiring) that the controller needs to be "aware" of the dao:

  ...
  service = new Service(new Dao());
  result = service.serviceMethod(param1, param2);
  ...
And that is objectionable because of putting too much of a burden on people writing controller logic, because of separation of concerns, etc. I am finding it difficult to argue against that point.

I get that this goes away if using a proper inversion of control container so one can autowire, but that is not always practical.


> And that is objectionable because of putting too much of a burden on people writing controller logic, because of separation of concerns, etc.

No, the controller needs to be aware of a method of creating new services. Depending on the specific language and other constraints, that might be a factory function passed into the controller's constructor, a factory object passed into the controller's constructor, a reference to a service locator object, or some other mechanism. Specific direct knowledge of the DAO class is generally neither necessary nor desirable.
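The first option, a factory function handed to the controller, might look like this in Java (a sketch; all names are hypothetical):

```java
import java.util.function.Supplier;

class Dao {}

class Service {
    final Dao dao;
    Service(Dao dao) { this.dao = dao; }
}

class Controller {
    private final Supplier<Service> newService;

    // The controller only knows how to obtain a Service,
    // not what a Service is built from.
    Controller(Supplier<Service> newService) {
        this.newService = newService;
    }

    Service handle() {
        return newService.get();
    }
}
```

Wiring happens at the composition site, e.g. `new Controller(() -> new Service(new Dao()))`; the DAO never appears in the controller's API.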


How is that a burden? The controller has to know about the methods of the DAO no matter what. The debate you're having seems to be "what is the best way to get a concrete instance of the DAO into the controller?" The argument being made for constructor/accessor injection is that it makes testing easier. The argument against it is that it makes the API more complicated than it would otherwise be.


If you don't think that the AngularJS API is Vietnam then I don't think I'd be able to change your mind regarding this.


You're implying that AngularJS's API is Vietnam? I get that the DI system is complicated, but in my mind, it's a very well thought out approach which relies minimally on global objects. I'd take Angular's approach over jQuery's any day.


Yes, I am.

It might be clever but it's unintuitive and as a consequence developers (even otherwise experienced ones) who have not yet mastered AngularJS will write the kind of mess that is painful to clean up later. This is more or less true for any technology, but as a general rule the more complicated or different the concepts are, the less likely it is that people will get things right the first few times. And this gets worse as the application gets larger and / or more people get added to the team. A simpler approach with less cleverness would help.


I don't know the first thing about AngularJS.



