So, the topic of ‘Maintainability’ has come up again in various forums, sparked by Frans Bouma with his blog post and other comments (Update: though I should add that Maintainability wasn’t the direct topic of Frans’ post, as he correctly points out. However, a lot of the commentary that followed centered on this topic and on what commenters saw as the implications of his post, and that’s the thread I picked up as well). Jeremy ‘counting till 10 till jdn calls me names on the net again’ Miller has this, Ayende has this, Patrick Smacchia has this, Jimmy Bogard has this (okay, maybe this one is slightly off-topic), and I’m sure there are a couple dozen/hundred more that could be listed.
I’ve said various things about the topic here, here, here, as well as (sort of) here and here. Gosh. That’s a lot of ‘here’ here.
A ton of what I’ve read over the past few years about maintainability has focused on developer activities. And what I’ve argued (or at least stated) previously is that maintainability as a concept only really makes sense when tied to the context of those who are maintaining it. For the sake of argument, let’s assume that it is a given that, in some abstract sense, writing a .NET web application using ASP.NET MVC is better than using ASP.NET web forms, if only because MVC better separates concerns by default. It’s easy to caricature things, and since it’s my blog, let’s do that: cramming everything into the code behind of a web form, in some abstract sense, is clearly worse than separating out concerns into separate models, views and controllers. If you have a direct data access call in every web form, then changing the implementation of those calls means (potentially) re-writing them in every one of those places, whereas with the concerns separated you re-write them in far fewer places. This is a given.
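To make the caricature concrete, here’s a minimal sketch of the difference (in Java rather than C#, and with entirely hypothetical class names, purely for illustration): an inline data access call that every page would duplicate, versus the same call pulled behind a single repository so that a change to the data store touches one class.

```java
import java.util.List;

// All names here are hypothetical, for illustration only.

// Caricatured "code-behind" style: the page talks to the data store directly.
// Every page written this way must be edited when the data access changes.
class CustomerListPage {
    List<String> loadCustomers() {
        // imagine an inline SQL call here, duplicated across many pages
        return List.of("alice", "bob");
    }
}

// Separated style: data access lives behind one interface.
interface CustomerRepository {
    List<String> findAll();
}

class SqlCustomerRepository implements CustomerRepository {
    public List<String> findAll() {
        // the single place that changes if the data store changes
        return List.of("alice", "bob");
    }
}

// The controller depends only on the interface, not on the data store.
class CustomerController {
    private final CustomerRepository repo;

    CustomerController(CustomerRepository repo) {
        this.repo = repo;
    }

    List<String> list() {
        return repo.findAll();
    }
}
```

Swapping `SqlCustomerRepository` for some other implementation means one new class and one changed constructor call, not an edit to every page.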
However, if you have a web application using web forms, you have a much larger pool of candidate developers who are familiar with that way of doing things than with MVC. I’m leaving aside the fact that you can use the MVP pattern with web forms, and also leaving aside the fact that people can be taught: with some exceptions, you can teach developers to use MVC even if they are only familiar with web forms.
The point is, when talking about the maintainability of an application, at a developer level, you should take into account who is writing the code now, and who will be maintaining the code in the future.
Many people will argue (or at least state) that any application that follows, e.g., SOLID principles is, in principle, more maintainable than one that doesn’t. I agree with this, as long as I’m allowed to point out the ‘in principle’ part. If you are in charge of maintaining an application of any significant size, you have to take into account who you think will be doing the maintaining.
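For a sense of what the ‘in principle’ part buys you, here’s one SOLID principle (open/closed) in miniature; the names are invented and it’s Java rather than C#, purely as a sketch. A new pricing policy arrives as a new class, so the existing calculator never has to be re-opened:

```java
import java.util.List;

// Hypothetical open/closed example; all names are invented for illustration.
interface DiscountRule {
    double apply(double price);
}

// A new business rule is a new class...
class HolidayDiscount implements DiscountRule {
    public double apply(double price) {
        return price * 0.90; // 10% off
    }
}

// ...while the calculator itself never changes.
class PriceCalculator {
    double price(double base, List<DiscountRule> rules) {
        double p = base;
        for (DiscountRule rule : rules) {
            p = rule.apply(p);
        }
        return p;
    }
}
```

In principle, this is easier to extend; in practice, as argued below, that benefit still has to be weighed against who will actually be reading and supporting the code.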
Regardless of all that, what I think is really more important when it comes to deciding things (is that vague enough? Yes, I think it is) is to talk about the sustainability of an application.
Sustainability involves the entire life-cycle and process chain: how the organization that owns and runs the application gathers requirements, validates functionality, develops code, deploys the application, and manages it in production. This is *always* determined within a context, the context of the organization as a whole.
One of the fun things about being a developer who reads a lot of ‘alt.net’ type stuff is to read about all of the ‘newer’ (not ‘new’ in the sense of invented, usually, but ‘new’ in the sense of learned from other developer communities, for instance) ways of writing code, code that is easier to change and manipulate down the road. But, in my experience, the ‘developer’ side of an application is often a much smaller piece of the puzzle than other factors.
Suppose for the sake of argument that you need to implement a new piece of functionality in an existing application that was built without even the slightest understanding of SOLID principles. Suppose this new piece of functionality takes a month to implement in development. For the sake of argument, suppose that you know that if the application had been built with even a slight understanding of SOLID principles, you could implement it in a week of development. I think that some/many/most people would say that the latter case was more maintainable, in some abstract sense.
But suppose that, regardless of the amount of actual development time involved, it still takes a month to manage the requirements, and then a month to QA the functionality, and then a month to deploy the functionality. The development part of the equation starts to lose importance. If it also requires a significant retraining of production support staff to learn new ways of figuring out production problems, it really starts to lose importance.
Learning how to build a sustainable application involves a hell of a lot more than just having the development team employ the latest tricks and techniques they read in the latest post from their favorite blog. Though this might seem controversial, an application that has less separation of concerns but is easier for the production support staff to understand, so that they can potentially fix production problems without having to call the developers who built it, is in many instances more sustainable. And, as someone who managed 2nd and 3rd shift employees, I want to make it clear that this isn’t a matter of insulting production support staff. But the fact of the matter is that, generically speaking, brilliant developers don’t want to work 2nd and 3rd shifts.
None of this should be taken as a suggestion that you should lobotomize your development team. Far from it. But, it should be taken as a suggestion that when it comes to developing software, what makes it more maintainable for the developers might not be the most important thing.
If it takes 12-18 months for a business requirement to make it into the developer pipeline (and I’ve experienced that), then whether it takes the developer a month or a week to get the job done is rather irrelevant. It would be much more important in that context to cut the time it takes a business requirement to reach the developer pipeline to even 2 months than to spend much time worrying about whether a developer has to cut and paste some code.
Sustainable software development requires focusing on the entire process, not just on what developers do. Even though what developers do is usually the fun stuff.