Roy Osherove has been posting a bit about testing, OOP, TDD, and the like. You can go to his post and find tons of comments, links and so forth. Because of all the different interpretations people have put forth, it’s hard to summarize the discussion without prejudicing it.
But what the hell, it’s my blog, so here’s a thumbnail sketch: the adoption of unit testing is hindered by it being tied to TDD, design considerations, and confusing terminology (“a mock? a mock what?”).
A very good post by Udi Dahan takes a pragmatic stance about the whole issue, but contains two things that I want to comment on.
The first is this:
“In a well designed system, most ‘logic’ will be contained in two ‘layers’ - the controllers and the domain model. These classes should be independent of any and all technological concerns. You can and should get high unit test coverage on classes in these layers…Most other layers have some dependence on technology that makes unit tests relatively less valuable. Coverage on these layers is mostly meaningless. My guidance is to take the effort that would have been spent on unit testing these other layers and invest it all in automated integration tests.”
The second is a comment by Casey that Udi agrees with:
“I think, and hope, what you are saying is any code that does not add *business* value is of low value, and tests that have no clear purpose, or that further concrete an already weak design, will ultimately decrease business value.”
I agree with both of these, but in my own special way.
In almost any business environment (I can think of a lot of other systems/environments where the following isn’t true…a health diagnostic system, for instance), software exists primarily to deliver business value. Or at least, it should. One of my strongest gripes with Alt.NET is that while just about everyone would pay lip service to this notion, it is quite often de-emphasized, and the focus is placed on ‘reducing friction’ or ‘increasing maintainability.’ And clearly, if you do those things, you increase business value, right?
Not so fast. Notions like ‘friction’ and ‘maintainability’ are relative, usually to the developer in question. Various people have blogged in great and painful detail about what reduces friction or increases maintainability, but oftentimes it is clear that what they advocate would reduce friction and increase maintainability *for them*, while doing the reverse for most everyone else. Since this post is about TDD, I’ll use that as an example in a minute, but just to throw out another example: anyone advocating ‘deprecating the database.’ It isn’t that there is necessarily a *technical* argument against it (though I think there could be), but there are so many other considerations that go into software development that the technical merits or demerits of a software design are almost always a very minor aspect (I’m betraying my roots in operations/deployment/production support here). There is almost no environment where ‘deprecating the database’ is even a possible solution.
side note: I’ve made the following point in many different ways, and in many different places, but I think it right to make it again. In large part, I ‘follow’ Alt.NET (even helped to create the Chicago Alt.NET user group, not sure how that happened…think there was drinking involved) out of laziness and greed. I am trying to ‘shortcut’ my way out of learning many techniques through experience, because learning through experience is usually painful, and hurts someone else (usually a business/client). You can’t completely do this, obviously, and I know that, but whether it is learning how to implement IValidator, IMapper or other techniques that I’ve ‘stolen’ (if you can’t tell, I just spent 15 seconds looking at one of my code bases), I hope to skip learning through my own mistakes and just learn from the end results of developers who I already know are better than I am. Developers will be developers, so there will always be some numb-nut advocating a technically stupid design under the Alt.NET rubric, but in general, if you want to learn how to be a technically better developer, just read the Alt.NET blogosphere. And if you don’t know what that is or what counts, look it up. Google is your friend.
TDD is one of those techniques that has its fair share of evangelists/advocates, and that can decrease business value if done incorrectly. On one of my code bases, I am forcing myself to use it as stringently as possible. In almost every client situation where I have come across it, it has been implemented poorly (and in the obvious case I can think of where it wasn’t, it was because of the single-minded determination and/or skill of the developer implementing it). Like agile advocates, TDD advocates seem to be painfully addicted to confirmation bias (“I did it myself once and it worked great!!!!”), but that, in and of itself, has more to do with advocacy in general than with TDD. But it is pretty clear that doing TDD ‘the right way’ requires a lot of training, an eye of newt, and a lot of luck and/or skill. In and of itself, this makes me skeptical of it, because any methodology that requires near-perfection in its implementation is essentially doomed to failure in the long run.
BUT, if it provides business value, which it can, you should use it. I like very much what Udi said about layers that have dependencies on technology (I will expand this to include ‘protocols’ in a second) and what Casey said. I’ve long advocated (yes, using that word on purpose) integration tests over unit tests (since there is always a limit on time and effort, if you have to limit what you can test, test the code that is actually in production. Not mocks, not stubs, your production code. If you have to run, e.g., WatiN tests, suck it up and do it), because of the ‘business value’ position. No one in the business will generally give a crap about the latest developer ‘fad’ (since they generally neither know nor care what counts as true progress versus fad, since they can’t judge it), but a set of tests that catch actual bugs in production code, before it actually gets to production, usually gets people’s attention (if you are really good, your non-integration, TDD tests will give you the same, if not better, results…in theory, see side note).
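For what it’s worth, here is a minimal sketch of what such a browser-driven integration test looks like with WatiN and NUnit. The URL, page elements, and order number are made up for illustration; the point is that the test exercises the real pages in a real browser, not a mocked abstraction of them:

```csharp
using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class OrderSearchIntegrationTests
{
    [Test]
    public void Searching_for_an_existing_order_shows_it_in_the_results()
    {
        // The URL and element names are placeholders for whatever your
        // application actually exposes; the test drives the real pages
        // through a real browser rather than a mock.
        // (WatiN needs to run on an STA thread; see the WatiN docs for
        // the NUnit configuration.)
        using (var browser = new IE("http://localhost/orders/search"))
        {
            browser.TextField(Find.ByName("orderNumber")).TypeText("12345");
            browser.Button(Find.ByValue("Search")).Click();

            Assert.IsTrue(browser.ContainsText("Order 12345"));
        }
    }
}
```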
How can you tell if you are providing business value or not? That’s hard to say. But I will offer the following thought experiment (though it is based on a real-life example) as a guide:
Suppose you need to write code that will use FTP to go out to a site and download a file. This is a typical requirement in almost every single business shop in the world. If you immediately thought of creating an IFTPService interface, you have problems, and are probably part of the problem. The FTP protocol is not going to be redesigned, and neither will your FTP code need to be. Once it is built, it is done. “But what about testing how the code handles different response codes from FTP?” Set up a local FTP site that does whatever you want it to do, and create integration tests. If you think an IFTPService interface is a good idea, not only are you wasting people’s time, but you are losing the semantic argument. If you already have a TDD and/or top-heavy unit testing organization in place, then creating stupid interfaces like this is potentially okay, because you can write the ‘extra’ code in a few minutes, but any seasoned developer is going to (rightfully) laugh you out of the arena if you think an IFTPService is a good idea. Which will kill any chance you have of getting TDD in where and when it matters and can supply business value.
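To make that concrete, here is a rough sketch of the integration-test approach in C# with NUnit. The FtpFileDownloader class is just a stand-in for whatever your production downloader actually looks like, and the host, path, and credentials are placeholders for your own local test FTP site:

```csharp
using System.IO;
using System.Net;
using NUnit.Framework;

// A stand-in for the kind of production code under discussion: it downloads
// a file over FTP and hands back the contents. No IFTPService in sight.
public class FtpFileDownloader
{
    public string Download(string uri, string userName, string password)
    {
        var request = (FtpWebRequest)WebRequest.Create(uri);
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential(userName, password);

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}

[TestFixture]
public class FtpFileDownloaderIntegrationTests
{
    [Test]
    public void Downloads_known_file_from_local_test_ftp_site()
    {
        // Host, path, and credentials below are placeholders for whatever
        // the local test FTP site is configured to expose.
        var downloader = new FtpFileDownloader();

        string contents = downloader.Download(
            "ftp://localhost/testdata/daily-report.csv", "testuser", "testpass");

        // The real code path runs end to end against the real protocol.
        Assert.IsTrue(contents.Length > 0);
    }
}
```

Want to see how it handles a missing file or a refused login? Configure the local site to produce that scenario and write another test; no interface extraction required.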
Business value, good. Useless tests written because someone you read somewhere said you needed to have 100% code coverage, bad.
BDD, really bad. But that’s another post.