September 2009 Blog Posts
Adding Tests to a Legacy System through New Requirements

I’m going to try to describe how I would approach adding tests to a system that doesn’t have them, and wasn’t built to support them.  It will be a simple example, and isn’t intended to be anything more than one way I might approach this situation.

When I say that it wasn’t meant to support them, I mean it has certain characteristics:

  • Uber methods that access the file system, do some processing, make a call to a web service, send a new file through FTP, etc.
  • No interfaces
  • No ‘full’ dev environment (more on this below)
  • No CI processes

Obviously, in a situation like this, there are some challenges.  It is tempting to begin to think of ways to re-architect the entire system, but like many temptations, it is one that should be avoided, unless and until there is a directive that will support it.  Instead, I think it is better to begin to put hooks into the system in a simpler fashion: add tests as new requirements come in.  This has a number of advantages: it limits the amount of work that needs to be done, it makes it easier to define and develop the tests themselves, and it can function as a pilot program for introducing testing into an environment and/or for a team that isn’t familiar with it.

A couple of obvious caveats are in order.  You have to have the ability to introduce tests in the first place.  If you are lucky, the other members of the team will be amenable to it.  Also, as I mentioned at the start, I won’t pretend that the approach I will be describing is the best way or the only way or that I will do it perfectly.  In other words, blah blah blah.

The Requirement

Big boss man: “We need to audit the accounts we send externally against the audit file they send back.  No one will read the report, it’s just to get compliance off my back.”

Obviously, this is a somewhat vague requirement, but that is normal in most cases.  Depending on who is generating the requirements, you will get more or less technical detail, more or less justification, and more or less indication of how important or urgent the requirement is.  You will then need to do more or less work to get the technical details you have to have in order to write code.

In this case, the technical details are something like this:

  • The accounts are taken from a database and sent as XML to an FTP site, from which they are processed through a web service.
  • The audit file sent back is a compressed .gz text file, comma-delimited, with a trailer record that gives the number of accounts in the file, and a timestamp.
  • There are clear rules of what counts as a discrepancy.
  • As part of the auditing process, a file listing all accounts with discrepancies must be created and stored in a more or less common place.
  • There is a more or less standard email format for other reports.

How to start

Once you have enough detail, you then need to determine how to start.  As we’ve discussed, we will want to create specifications of the things that need to be tested, before we write the code that implements them.

You may have noticed that I used the word ‘specifications’, not ‘tests’.  Unit testing is often described in terms of TDD (Test-Driven Development), but it seems to me that there are problems with this.  TDD focuses the developer on individual classes, but that might not be the right place to start.  Furthermore, it doesn’t really give any clear guidance on where to start.  Since there are databases and files and FTP and email involved, code will obviously need to be written to manage these things, but it is a mistake to start there.

BDD (Behavior Driven Development) tells you to start with the behavior of the system, but that is also (potentially) vague.  What is least vague (since there is almost always going to be vagueness along the way) are the specifications.  What are these?

Start with the ‘domain’

I’m using scare quotes here because “Domain” is an overloaded term.  It is clear that in a situation like this, you are unlikely to have a fully developed domain model, and may not even have a partially developed one.

But in this case, there are two items that are key: the idea of an account and the idea of a discrepancy.  This is the heart of the requirement.  Given what we might call a source collection of Accounts and a target collection of Accounts, each Account will either produce a match or a discrepancy.  If an account is in the source, but not in the target, that’s a discrepancy.  If an account is in the target, but not in the source, that is a discrepancy.  If an account exists in both the source and the target, but doesn’t meet the rules, that is a discrepancy.  Otherwise, there is a match (one could argue that the idea of a match also exists in the ‘domain’ and that it should also be made explicit…whether to do so or not is often a matter of discretion).

Given this, there are a couple of obvious specifications that can be written out:

  • when_an_account_exists_in_the_source_but_not_in_the_target_there_is_a_discrepancy
  • when_an_account_exists_in_the_target_but_not_in_the_source_there_is_a_discrepancy
  • when_an_account_exists_in_both_the_source_and_target_but_fails_to_meet_the_matching_rules_there_is_a_discrepancy
  • when_an_account_exists_in_both_the_source_and_target_and_meets_the_matching_rules_there_is_no_discrepancy

A couple of things are worth noting here.  The underscore syntax porn is something that has always bothered me, but I’ve learned to live with it for two reasons.  First, when using something like MSpec, the underscores are replaced with spaces in the reports that are generated.  Second, at least for me, I’ve come to accept that it is easier to read specifications written this way than with camel casing.  I still notice the underscores, obviously, but I’m used to them enough by now that it is almost like reading a sentence with spaces.  Almost.

Another thing to notice is that when writing out specifications, it is usually evident that there are many of them.  TDD encourages writing one and only one test, then writing the code, then writing the next one, etc.  This has always bothered me, as I naturally think of multiple specifications at once (and if you are lucky enough to work through the specifications with a business user and/or an end user (sometimes these are the same person, sometimes not), you will naturally work out many of them at a time).  I don’t see any reason why you shouldn’t write out multiple failing specifications at once, and then implement the code one by one.  If the specifications cross many ‘domain’ items, it is probably best to group them, and then work with one group at a time.

Most importantly, especially since there is no full dev environment, by focusing on the ‘domain’ you can avoid dealing with the ‘tough’ things upfront, like file access, FTP work, email notifications, etc.  Even if you could run an end-to-end test in a dev environment, you don’t want to be tied down here.  You assume that, at some point, you will be able to get the source data from a database, access a file, send an email, etc.  But for now, to start, you should only care that, given that you can get source and target data, you can identify all of the possible discrepancies and properly find matches.

Needless to say, you can write this code easily and locally.  Given a source collection and given a target collection, produce the results you want.  Red-green-refactor or what have you, easy money.
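To make that concrete, here is a rough sketch of the kind of ‘domain’ code those specifications would drive out.  None of these names (Account, Discrepancy, AccountAuditor, the matching-rule delegate) come from a real code base; they are stand-ins for illustration:

```csharp
using System;
using System.Collections.Generic;

public class Account
{
    public string AccountNumber { get; private set; }
    public Account(string accountNumber) { AccountNumber = accountNumber; }
}

public enum DiscrepancyType { MissingFromTarget, MissingFromSource, FailsMatchingRules }

public class Discrepancy
{
    public string AccountNumber { get; private set; }
    public DiscrepancyType Type { get; private set; }

    public Discrepancy(string accountNumber, DiscrepancyType type)
    {
        AccountNumber = accountNumber;
        Type = type;
    }
}

public class AccountAuditor
{
    // Encodes the four specifications: in source but not target, in target
    // but not source, fails the matching rules, or (implicitly) a match.
    public IEnumerable<Discrepancy> FindDiscrepancies(
        IDictionary<string, Account> source,
        IDictionary<string, Account> target,
        Func<Account, Account, bool> meetsMatchingRules)
    {
        foreach (KeyValuePair<string, Account> pair in source)
        {
            Account inTarget;
            if (!target.TryGetValue(pair.Key, out inTarget))
                yield return new Discrepancy(pair.Key, DiscrepancyType.MissingFromTarget);
            else if (!meetsMatchingRules(pair.Value, inTarget))
                yield return new Discrepancy(pair.Key, DiscrepancyType.FailsMatchingRules);
        }

        foreach (string accountNumber in target.Keys)
        {
            if (!source.ContainsKey(accountNumber))
                yield return new Discrepancy(accountNumber, DiscrepancyType.MissingFromSource);
        }
    }
}
```

Nothing in this sketch knows about databases, .gz files, FTP, or email, which is exactly the point: specifications against it can run anywhere.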

A crack at a ‘tough’ item, file access

I’m not going to go through all the different ways one might handle database access, email notification, etc.  There are many different ways to approach these.  But I will try to give a general sense of how I might approach all of them by talking about how you might handle file access.

Since it is always good to use services, the immediate idea is to create an IFileService.  Interfaces are good.  Since one needs to get a set of accounts from an audit file, the immediate thought is to create a GetAccountsFromAuditFile method on this interface.

But is this right?  Once you create such a specific method on an interface, then every implementation of that interface has to, well, implement it.  And that doesn’t seem so great.

This is where discretion comes in.  Given the specific example above, I think it is okay to do this.  I don’t know, as I’m adding the specifications/tests for the new requirements, exactly what will come of it.  I could end up with half a dozen methods on this interface.  I could end up with many dozens.  Since I don’t know for sure, start with what is simplest.

More importantly, I think it is best to avoid creating generic GetFile methods on this interface, though it is tempting to do so.  When faced with a non-specific method like GetFile, it is easy to get lost trying to think of all of the various ways you might use it in the future, ending up with multiple parameters to handle all of the possible permutations.  Instead, stick with the actual requirements.  You need to retrieve an audit file for a specific purpose.  Make the code explicit with a GetAccountsFromAuditFile method.  The bad thing about this is that you won’t be able to reuse this method for future requirements.  The good thing is that you won’t be able to reuse this method.  Reuse is good when it is well thought out.  Reuse is bad when it is random and non-specific.
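In code, that discretion might look something like the following sketch.  It reuses the stand-in Account from the earlier sketch, and the assumption that the account number is the first comma-delimited field is mine, not from any real spec:

```csharp
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;

public interface IFileService
{
    // Deliberately specific: named for the one requirement at hand,
    // not a generic GetFile(path, format, ...) catch-all.
    IEnumerable<Account> GetAccountsFromAuditFile();
}

// A concrete implementation owns the 'tough' details (the .gz compression,
// the comma-delimited format, the trailer record); the domain code and its
// specifications only ever see the interface.
public class GzipAuditFileService : IFileService
{
    private readonly string path;

    public GzipAuditFileService(string path) { this.path = path; }

    public IEnumerable<Account> GetAccountsFromAuditFile()
    {
        List<Account> accounts = new List<Account>();
        using (FileStream file = File.OpenRead(path))
        using (GZipStream gzip = new GZipStream(file, CompressionMode.Decompress))
        using (StreamReader reader = new StreamReader(gzip))
        {
            string line;
            string previous = null;
            while ((line = reader.ReadLine()) != null)
            {
                if (previous != null)
                    accounts.Add(new Account(previous.Split(',')[0]));
                previous = line;  // the final line is the trailer record, not an account
            }
        }
        return accounts;
    }
}
```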

Summary

When you can, start with the ‘domain’ logic.  Test to the requirements.  When you need to start writing code to implementation details, try to find a way to limit the specificity of those details.

posted @ Sunday, September 27, 2009 10:54 PM | Feedback (0)
On Laziness, Cowboys and Duct Tape

So, Joel has posted something that is causing a bit of discussion (if you don’t need substance and want to kill half an hour, click through).  Though there are many points of interest in his post, I’m guessing that one of the passages that caused the most consternation was this part:

Zawinski didn’t do many unit tests. They “sound great in principle. Given a leisurely development pace, that’s certainly the way to go. But when you’re looking at, ‘We’ve got to go from zero to done in six weeks,’ well, I can’t do that unless I cut something out. And what I’m going to cut out is the stuff that’s not absolutely critical. And unit tests are not critical. If there’s no unit test the customer isn’t going to complain about that.”

In a previous kerfuffle, Joel seemed to be denigrating unit testing in general and TDD in particular, and this naturally got the Craftsmanship Fascists’ panties in a bunch.  A large part of the difficulty is that people are talking past each other.  This is normal, by the way.  Communication is possible, but it is also hard.  Joel seems to equate TDD advocates with architecture astronauts, which is obviously wrong.  The Craftsmanship Fascists (CF from here on out) seem to think that the ‘duct tape programmer’ is just someone who throws some crap code together and slams it into production and hopes for the best, which is also obviously wrong.  Luckily, I’m here to help.

You aren’t a duct tape programmer

Joel explicitly makes the point that the people whom he would call duct tape programmers are few and far between:

One thing you have to be careful about, though, is that duct tape programmers are the software world equivalent of pretty boys... those breathtakingly good-looking young men who can roll out of bed, without shaving, without combing their hair, and without brushing their teeth, and get on the subway in yesterday’s dirty clothes and look beautiful, because that’s who they are. You, my friend, cannot go out in public without combing your hair. It will frighten the children. Because you’re just not that pretty. Duct tape programmers have to have a lot of talent to pull off this shtick.

The average schmuck writing code isn’t one of these.  Part of the issue here, I think, is that the CF group denigrates duct tape.  Duct tape saves lives.  Don’t believe me?  Check here and look at the section on Apollo 13.  Case closed.  Regardless, it is clear that what Joel is talking about is not randomly slinging shit together and hoping for the best.  It is instead that the end goal is what matters, and it doesn’t matter whether anyone thinks the processes that lead to that end goal are pretty or shiny enough.  This is why he says that shipping software is what matters.  He is completely right about this.

Cowboy Programming

During the dot.com phase of my career, our company purchased a site that (for whatever reason lost in time) needed to be converted to something that was on our platform.  For various reasons (lost in time), the item on the task list (or whatever it was) sat there for quite a while, until a cowboy programmer decided to rectify the situation.

I’m using the phrase ‘cowboy programmer’ here in the (normally understood as pejorative from an agile perspective) sense of someone who doesn’t follow TDD/XP/Agile/Whatever and just codes what he feels is right to get the job done.  In many, if not most, situations, cowboy programming is looked down upon, and with good reason.

The thing is, this guy was good (Hey Keith!).  He, more or less, locked himself into a conference room over a weekend and ported the site to our platform.  He certainly didn’t do TDD.  He definitely didn’t have detailed requirements.  The requirement was “port site to our platform.”  And because he was so talented, he did meet that requirement.

When you have the talent, being a duct tape cowboy programmer lets you accomplish a lot of things while seeming to bypass all of the CF standards that are supposedly so important.  But, the kicker is that you have to have the talent.

Laziness

One of my high school Physics teachers used to say something like “Intelligent people are inherently lazy.  They don’t want to spend time doing more work than necessary to get a job done.”  I’m pretty sure this was not a vocational message approved by the Administration.

Anyway, I can relate.  When I was doing the whole dot.com thing, I was motivated.  Really motivated.  Like many of my co-workers, I had visions of retirement at age 30 from all of those stock options.  Regardless of the visions, I was hot shit.  I could work 80 hours a week, I could be assigned tasks that I’d never ever worked on before and fulfill those tasks as if I was an expert.  We rocked as a team and as individuals.  And I was a perfectionist.  I hated when things didn’t work right.  Our team took over the websites for the NHL at one point, and I remember being angry when the CEO talked about how happy he was about the flawless launch.  It wasn’t flawless.  The individual team sites didn’t work for at least 12 hours (can you imagine!) because of some bugs we didn’t anticipate.  Flawless!  Please.

Fast forward a bit.  I’m still a perfectionist but understand a little bit more about what ‘flawless’ means.  It means providing the expected business value within a reasonable timeframe.  And what counts as ‘reasonable’ and ‘expected’ is defined by the business, regardless of what my perfectionist nature prefers.

More importantly, I’m lazy (or perhaps it is just that I’m tired).  I don’t want to work 80 hours a week, I don’t want to work on oddball tasks, and I don’t want to have to fix things.  So, I write tests because I know well enough by now that if I try to code something without them, I am much more likely to make some mistakes that won’t be found until I run the application, launch the web site, etc.  I want the flexibility to be able to change some code (because the requirements changed two days before production migration) and click a frickin’ button and see green, or (perhaps) better, see red and know I have some things to fix.  Actually, I want to be able to click a button and see gray, but that’s another story.

So, is Joel correct?

The answer is, of course, yes…but.  The ‘yes’ is that software is there to fulfill some requirements.  The nature of those requirements is determined by the nature of the business.  Unless the business requires the shiniest of software as imagined by the CF group, quality is tangential.  Ship what is required.

Then there is the ‘but’.  Things like unit testing are preferable to the typical developer like me because they allow me to fulfill the requirements of the business in the safest, easiest manner.  I know, from personal experience, that the time I don’t spend using SOLID principles, unit testing, etc. will probably be spent, an order of magnitude over, in a debugger trying to figure out what went wrong.  I’m experienced enough to know when a certain code path is most likely to cause me pain down the road, and because I know this, I know it is best to create tests around those areas.  Do I need to have 100% code coverage?  Of course not.  Only the CF group thinks that.  But don’t think you can ignore setting up tests where they are required, unless you are a duct tape cowboy.

posted @ Friday, September 25, 2009 8:28 PM | Feedback (1)
Porcupine Tree, 09/22/2009, Chicago IL

Overcoming a less-than-clear sound mix, Porcupine Tree put on another solid show at the Vic Theatre, mixing up the second part of their set from the previous night, as I had hoped.

It could have been where I was standing for their set, but the sound mix for King’s X was muddy, to say the least.  For a number of songs, Frontman dude would play some opening bass riffs, and you couldn’t tell what the hell it was (which sort of made the 8-string bass number a miss), with only the lowest notes discernible.

Frontman dude did a variation of his ‘first church of rock’ rap, and since I knew it was coming, I paid a little more attention to it.  He talked about knowing that he didn’t want to be a lawyer or a doctor or a preacher or a factory worker (he was from Joliet, IL and referenced a particular factory, but I couldn’t tell which exactly, as they had trouble bringing his voice up in the mix, which was surprising, given how strong his singing voice is), and that he knew he wanted to make music.  And he added a nice touch: just because you have a dream, it doesn’t mean you will get rich or famous or that ‘you will be the shit’, but you shouldn’t spend your life doing things you don’t want to do.

Which is all well and good, except for the fact that being a responsible human being often, unfortunately, involves doing things that you don’t necessarily want to do.  I think of all the working parents (especially immigrants) I worked with back in my restaurant years, parents who ‘sacrificed’ themselves, often working multiple jobs, because they wanted to make a better life for their kids, not because they loved manual labor (Lord knows I didn’t enjoy it much, and I was just a single twerp working through college).

But, it’s only rock and roll.

One thing that they did again (though less successfully than the previous night in Cleveland, in part perhaps because of the mix) was their extended version of Over My Head.  I can’t say for sure, but it looked like the stagehands would signal Frontman dude how long they wanted them to stretch it out, and they made it a good 15 minutes, including about 4 minutes of once again solid guitar work.  The impressive thing to me was the ‘sing-along’ part.

There’s a chorus that goes something like:

Music, music, I hear music
Music, I hear, music, music
Music oh oh oh Lord
Music over my head

Now, the meter of it isn’t straight 4/4 time, and it carries over a few bars.  Maybe I’m just underestimating the ability of a half-blitzed rock crowd to carry a non-basic rhythm for multiple rounds or maybe I’m just a non-rhythmic white boy (these aren’t mutually exclusive), but it was impressive to me that it went on for longer than a minute, without instrumental backing.

Anyway…..

The mix for PT wasn’t anywhere near as bad as it was for King’s X, though not quite as clear as it was in Cleveland.  Given that a lot of The Incident involves slightly intricate acoustic guitar, this was a little annoying, but not bad.

Set list:

  • The Incident (all 55 minutes of it)
  • (10 minute break)
  • Start of Something Beautiful
  • Russia on Ice
  • Anesthetize (partial, the same minute-5-to-minute-12 section as the previous night)
  • Remember Me Lover
  • Strip the Soul / .3
  • Way Out of Here
  • (encore)
  • The Sound of Muzak
  • Trains

Remember Me Lover was played live for the first time on the current tour.  I was particularly pleased to get to hear Way Out of Here and The Sound of Muzak played back to back, as they contain my two favorite SW guitar solos. 

Whenever you go to a live concert, you always have to deal with the people around you, and usually that isn’t a problem.  A couple of minor twists tonight.  One involved a couple standing behind me; apparently, she was incredibly unhappy with the view of the stage (it wasn’t clear if they had been in a better section earlier and made a move or what) and made it clear, quite vocally.  So, for The Incident, there was a lot of “Please don’t be angry with me.”  “I’m angry with you.”  “Please forgive me, I fucked up.”  I imagined him imagining singing I Drive The Hearse to her when it was playing.  Bad imagination.  Bad.  Bad.  Then there was the muscle guy who moved up with his buddy towards the end of The Incident right in front of me, and was really intent on carrying on a conversation with his buddy, which was hard to do since there was a concert going on, and so required him to essentially shout through various songs.  Brilliant.

But all that aside, it was another solid performance by the band.  SW mentioned that they planned to be back early next year for the next leg of the tour, so since I tend to see one group a year it seems, I have a mental note to look forward to that.

posted @ Wednesday, September 23, 2009 12:03 AM | Feedback (0)
Porcupine Tree, 09/21/2009, Cleveland OH

Thwarting late-arriving equipment (making it unfortunate they didn’t play Arriving Somewhere But Not Here), Porcupine Tree pulled off an impressive show at the House of Blues in Cleveland, OH.

Right before the opening act came on, the tour manager for Porcupine Tree came out and announced that the equipment from LA was arriving late, but would make it there and the show would go on.  There would just be a longer break after the opening act.

Said act being King’s X, who have been around since the late ’80s, and whose only song you’ve probably heard of is Over My Head.  They used to play songs with religious themes (and may or may not have been an overtly Christian band), and I guess they still do, but the themes are now (apparently) mostly or entirely negative.  Frontman even had the cliché Satanic belt buckle.  Nice.  The two best songs were an extended version of Over My Head, including something like a four-minute guitar solo that was quite good, and a song called Pray, with the theologically correct lyrics:

if you think that jesus has saved you,
mother mary is waiting there for you,
if you think that god has spoke to you then,
don't forget to pray for me, hear me now
pray for me,
pray for me,
if you really do believe,
then don't forget to pray for me,
hear me now

Quite right.  Anyway, he ended with some babble about the ‘First Church of Rock’ and how there are so many gods to believe in that he doesn’t believe in any of them, and how you should do what makes you happy and fuck everyone who tells you not to.  Not quite Spinal Tap territory, since he was sincere, but there you go.  Thus endeth the lesson.  Oh, he also got Porcupine Tree’s name wrong at first, though he covered quickly.

So, then the delay.  Not only did PT’s equipment arrive late, but something at the soundboard (which I was standing next to) broke, and required replacing.  It ended up being an hour long break between sets.

The set list was similar to the other shows, and something like this:

  • The Incident, all 55 minutes of it
  • (10 minute break)
  • Start of Something Beautiful
  • Buy New Soul
  • Anesthetize (partial, from after the beginning solo, and without the ending part, so from about minute 5 to 12)
  • Lazarus
  • Strip the Soul / .3
  • Bonnie the Cat
  • (Encore)
  • Mother and Child Divided
  • Trains

The Incident played out rather well.  They did stop after Blind House for a minute so that SW could thank the crowd for their patience and give a shout out to the crew, but other than that and an occasional ‘thank you’, they played right through it and it was quite solid.  Drawing the Line was particularly well done, and Time Flies, the centerpiece of the new release, is a great homage to Pink Floyd, and a great work in its own right.  I particularly like the closing I Drive the Hearse, a beautiful song of despair and regret:

And pride is just another way
Of trying to live with my mistakes
Denial is a better way
Of getting through another day
And silence is another way
Of saying what I want to say
And lying is another way
Of hoping it will go away
And you were always my mistake

Hey, the release was born from SW coming across a fatal traffic accident, you were expecting Mary Poppins?

After the break (with a visual timer so that the smokers in the crowd knew they had time to run for the exits and what not), PT came back and played a strong second half.  I know other sets on the tour so far have included The Sound of Muzak, Normal, and Way Out of Here, so I’m hoping they play those in Chicago tomorrow.  The truncated Anesthetize is a little awkward, but I guess they didn’t feel like taking up 60+% of the concert with ‘two’ songs.  I wish they wouldn’t play Mother and Child Divided, it’s not one of my favorites (okay, I think it’s boring), but then again, they didn’t play either Gravity Eyelids or Halo, so I guess it evens out.

Overall, an excellent show, and I expect the same tomorrow night in Chicago (hoping they play the songs I noted from other sets).

posted @ Tuesday, September 22, 2009 12:24 AM | Feedback (0)
Porcupine Tree – The Incident (First Listen Thoughts)

So since I have no life, I paid the extra bucks to be able to download the newest Porcupine Tree release a little earlier than others.  Having sat through the first listen, here’s a couple of thoughts.

Disclaimer: there have been times in the past when my first listen thoughts ended up being radically different from what I thought later on.  I didn’t like Marillion’s Clutching At Straws at first, then came to think of it as the best release of the Fish era.  In a totally different area, I thought Jane Siberry’s Since I Was A Boy was, well, boring, the first time I listened to it, but now think of it as one of my favorite releases of anyone at anytime.  So, this is subject to change.

It sounds as much like a Steven Wilson solo project as a Porcupine Tree release, although exactly how or why is difficult to describe in specifics.  I’m not entirely sure how the 55-minute song cycle fits together, other than that SW thought it was a song cycle.  Since I don’t have any lyric sheets yet, it’s hard to know if there is something there to see (though, as I’ve made clear on numerous occasions, I don’t look to rock music lyrics for anything of significance…if it ain’t in the Bible or Shakespeare, it probably isn’t that important…but I digress).  There is one minor recurring musical theme, but it is pretty minor.  ‘Time Flies’ is obviously supposed to be the main song, and it is good.  It does invite an immediate Pink Floyd comparison, as a part of it sounds like the end of ‘Dogs’ from Animals, but then again, the ending of ‘Anesthetize’ sounded like ‘Echoes’, and that didn’t hurt.  I think I like ‘I Drive the Hearse’ (the closing track) the best, but it’s only a first listen.

The four ‘other’ songs that aren’t part of the song cycle seem to me to be pretty throwaway.  ‘Black Dahlia’ is nice, but other than that, nothing really stands out.

Although I don’t think anyone has ever confused SW with Eddie Van Halen, I do look forward to a nice solo from him.  There isn’t really one, except maybe on ‘Time Flies’, so that’s probably a negative.

All in all, it is hard to compare this release to other PT releases.  This could be a good thing or a bad thing, or both.  It’ll take a few more listens, at least, to think about.  I like it, which is good, especially considering that I am driving out to Cleveland on the 21st to see them (even though they will be in Chicago on the 22nd, which I’m also seeing). 

There isn’t a single band or artist that I’ve liked that hasn’t released ‘that horrible album’ at some point or another, except maybe PT (I can’t say I have a strong opinion on their earliest stuff).  Luckily, this one isn’t it.  Whether it is more than good or just good (in the way that Marbles is brilliant while Happiness is the Road is just really good) I’m not sure.

posted @ Monday, September 14, 2009 8:03 PM | Feedback (0)
Technology Driven ‘Convention over Configuration’

In a recent post, Jeremy Miller talks about two different types of developers (those that say there are two different types of developers, and those that don’t….no, I kid) and describes one type:

This is what I have.  The other fellow started his solution by asking himself “how do I use the existing infrastructure in Prism to solve this problem?”  He restricted himself to the patterns and mechanisms in Prism and used a pretty idiomatic WPF type of solution that would be pretty familiar to most WPF developers (a major point in its favor).  Actually, “restrict” is the wrong word.  What he did was simply to take the tool he had and figured out how to solve the problem using that tool.  Quite reasonably.

If you think about it, one way to look at this perspective is as a technology driven ‘convention over configuration.’  Given the convention of, e.g., using Prism, how do you get the job done?

posted @ Friday, September 11, 2009 8:02 PM | Feedback (0)
Some things I think I like about ‘Specification-style’ development

I specifically say ‘Specification-style’ to make it clear that I don’t pretend to be doing “full-blown” Context/Specification BDD as described by Scott Bellware here, or even that I can use MSpec as developed by Aaron Jensen (typical ‘non-approved software’ issue…whatever).  Instead, what I am doing is taking what I take to be an important aspect of it and running with it.  Of course, just because I take it to be an important aspect doesn’t mean it actually is, blah blah blah.

I’m working on a system that does stuff in a very ‘legacy’ way.  Keeping in mind that ‘legacy’ often means simply “code that I myself didn’t originally create or code that I myself did create but a while ago before I knew what the hell I was doing or code that I myself did create but long enough ago that I don’t remember what the hell I was doing”, this code is very ‘legacy’ in that it has those wonderful methods that do 17 things, like pull a file from some file system somewhere, import it somewhere, work on the data somehow, write out the now modified data somewhere, ftp it somewhere, then if necessary send an email to someone somewhere telling them that the data was modified and somehow incorrect.

Great stuff.  Easy to deal with.

Anyhoo, I had to modify some of the code in one (or more) of these wonderful methods, and needless to say, the code isn’t easily modifiable and obviously has all these dependencies (one or more files, ftp, email, etc.).

But I had a requirements document.  And some access to the business users to ask questions.  Not perfect, but okay.

The actual requirement was basically along the lines of “Given a collection of data A, if any item B within A fails validation based on condition C, send an email to D.”

Digression

Now, BDD has at times been described as “TDD done right.”  I’ve always thought this description was flawed because I think TDD itself is flawed.  But, since I think that TDD is flawed in the sense that Churchill viewed Democracy as flawed, e.g. “Democracy sucks, it just happens to suck less than any other system of government,” I could work with it.

TDD is flawed in many ways, but one way in which it is flawed is that it is focused at the class level.  If I’ve got a DoStuffClass, then I need to have a DoStuffTestClass that has 100% code coverage of DoStuffClass.  I’ve never liked this because I don’t like the concept of 100% code coverage (sort of, read on) and because I don’t know why you should test at the class level.  Classes exist to allow you to do stuff.  If the ‘stuff’ that you are doing doesn’t require you to test every possible permutation of what one might possibly do with the class, then why do it?

Should you test the constructor of your class?  Should you test that each getter/setter on the class works?  I think it depends on what the consumers of your classes are doing.  This is actually one of the reasons why I now see why you might want to get rid of setters (and maybe getters) on your classes.  If you allow random consumer code to change an instance of your class through its setters, then you run the risk that the consumer might do something that invalidates it.  If, instead, you only allow your class to be modified through commands, in a way that guarantees your class can’t be invalid, then you only need to test those paths.
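As a tiny illustration of that last point (the Order class and its quantity rule are invented for the example, not taken from anything real):

```csharp
using System;

public class Order
{
    // No public setter: random consumer code can't put the
    // instance into an invalid state behind our backs.
    public int Quantity { get; private set; }

    public Order(int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException("quantity");
        Quantity = quantity;
    }

    // The only way to change the quantity, so the invariant
    // (quantity stays positive) holds on every path, and these
    // commands are the only paths that need specifications.
    public void IncreaseQuantityBy(int amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        Quantity += amount;
    }
}
```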

The tie-in to ‘specification-style’ development is this:  your code should only do what it needs to do to fulfill the specifications that you know *right now* it needs to fulfill.   To go back to the (pseudo)-requirement, I need to validate item B based on condition C.  I can imagine conditions E, F, Q, etc. and may be tempted to code that in now based on the imagined conditions.  But no one has asked me to do that validation, and when they do, they will probably come up with requirements that don’t actually match the E, F, Q, etc. that I imagined.  So, be lazy.  Don’t do it.  It can be comforting to have more test cases that end up green (or grey, if using MSpec), but if you haven’t actually been asked to do what those test cases are testing, you aren’t really accomplishing much.

Back to the main point

When working with legacy code, you obviously want to abstract away from particular implementations.  There are many ways of doing this, but in almost every instance that I’ve come up with, it is pretty easy.

If your data is coming to you from a flat-file, it is almost always easy to come up with some interface or entity that represents the data that exists in the flat-file.  Write that code, and test against it.  If you have to test that an email is sent when some validation fails, create a wrapper class or an interface and test against that.  It isn’t strictly necessary to create services and repositories and whatnot, but that’s what works for me most of the time.

Most importantly, code to the specifications.  There is nothing in “Given a collection of data A, if any item B within A fails validation based on condition C, send an email to D” that specifies any particular implementation, other than the email part.  You could probably pretty easily abstract that part behind some INotification interface anyway.

To lay it out:

Collection of Data A: though I know it comes from a flat file in production, the data has a structure I can define, and I can stub this out.

Item B failing Condition C: I can write code that does this pretty easily.

Send email to D: I can write a wrapper class or interface that has a ‘send’ method and verify in tests that this was called.

Most importantly, I can create a service call that passes in the data and then either calls or doesn’t call the send method.  This is easy to mock/stub out.  I don’t need to know about files or emails or whatnot.  I just need to know, in the heart of my code, does it fulfill the specification(s) or not?
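Sketched out, it might look something like this (every name here, IEmailNotifier, ValidationService, and so on, is invented for illustration, not taken from the actual system):

```csharp
using System;
using System.Collections.Generic;

// The only implementation detail the requirement forces on us is the
// email, and even that hides behind an interface.
public interface IEmailNotifier
{
    void SendFailureReport(string recipient, string body);
}

// Item is a stand-in for whatever structure the flat-file data has.
public class Item
{
    public string Id { get; private set; }
    public Item(string id) { Id = id; }
}

public class ValidationService
{
    private readonly IEmailNotifier notifier;

    public ValidationService(IEmailNotifier notifier) { this.notifier = notifier; }

    // "Given a collection of data A, if any item B within A fails
    // validation based on condition C, send an email to D."
    public void Process(IEnumerable<Item> dataA, Predicate<Item> conditionC, string recipientD)
    {
        List<Item> failures = new List<Item>();
        foreach (Item itemB in dataA)
            if (!conditionC(itemB))
                failures.Add(itemB);

        if (failures.Count > 0)
            notifier.SendFailureReport(recipientD, failures.Count + " item(s) failed validation.");
    }
}

// In a specification, a hand-rolled fake records whether the send
// happened; no SMTP server, no flat file, no FTP required.
public class FakeNotifier : IEmailNotifier
{
    public bool SendWasCalled;
    public void SendFailureReport(string recipient, string body) { SendWasCalled = true; }
}
```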

None of this, obviously, means your code will work in production.  Getting code to work in production means dealing with real emails, real ftp sites, etc. etc. etc.  This is why you should expand the notion of ‘specifications’ to include production specs.  But that’s another topic for another time.

posted @ Wednesday, September 09, 2009 8:21 PM | Feedback (0)
The beginning of College Football season….

means a new round of stupid songs that we get to hear ad nauseam on ESPN/ABC, and this year’s losers are:

“This is our moment” by Kenny Chesney.  Nothing like an insipid country ‘thriller’ to get one fired up for a contest, especially written to suck, I mean, written specially for this.  You can read a rousing endorsement of it here.

Ah, but there’s more!  The Dave Matthews Band has been named the Official Band of College Football Coverage on ESPN!!!  Because, really, you have to have an official band for that, and also, who doesn’t, when thinking of college football, immediately think of Dave Matthews?  That’s what I thought.  More about it here.

Oh, how I hate ESPN.  Too bad they now broadcast just about everything. 

posted @ Saturday, September 05, 2009 3:23 PM | Feedback (0)
Myths of Software Development: Open vs Closed Source

The thought occurred to me recently that there are some myths about software development that involve misperceptions of the similarities and differences between OSS (Open Source Software) and, well, everything that isn’t OSS (we’ll call it Closed Source Software (CSS) for the hell of it). 

I’m making no claims about how widespread these myths and misperceptions are, or even how accurate they are.  This is my blog, I can make things up if I want to.  But, I think it might be helpful to consider them.

The Stereotype: Machine.Specifications vs Visual Studio

In many of the places that I’ve worked at as a contractor or consultant (the difference being that, as a contractor, the client tells you what you have to do, while as a consultant, they ask you for your opinion and then, generally, tell you what you have to do anyway), there is something like an ‘Approved Software’ list, which, surprisingly, tells you what software applications, frameworks and libraries are approved for use (and which versions, typically).  More often than not, it includes both OSS and CSS.  There are pros and cons to be considered about the value of such lists, but that’s beside the point here.

Something that I think is probably not on a majority of such lists is an OSS project called Machine.Specifications, a library/framework/whatever ‘headed’ (by which I mean, I think he has a leadership role related to it, but not sure exactly what it is…inventor?  main contributor?  grand poobah?  all?  none?…again, doesn’t really matter) by Aaron Jensen.  What is it?  Well, it is a library/framework/whatever that assists .NET developers to practice BDD.  What is BDD?  Well, you should really just Google the term, but it is a practice/methodology/whatever of software development that revolves around creating specifications (surprise), which are usually created in tandem with business types, and which are executable in code.  And with Machine.Specifications, it means you get to produce ‘pretty’ HTML reports that, well, report on which specifications pass during development.  You can find more information about all of this by, well, using Google.

Anyway, what sort of misperceptions about Machine.Specifications and Aaron can we come up with, as a stereotype here? 

The first might be that there are probably only 17 or 18 people in the wild that actually practice BDD (or something like it), and as such, any library/framework/whatever that exists to assist in this has a user base of basically squat.  I mean, I use it, but that sort of proves the point.  It was created to ‘scratch the itch’ of a particular user or very small group of users, and so is largely irrelevant to almost everyone.

There’s next to no documentation available (well, there are all those test cases, sorry, specs in the actual source code, but of course that doesn’t count).  The source code itself moves from place to place, which is really annoying.  You used to be able to get it through Subversion, but now it appears that the only way to get the most current version is through git, the latest fad source control tool of the kool kids, and no doubt within the next year or so, when the next latest fad source control tool of the kool kids comes out (mercurial?), you’ll have to get the source that way.  Which also means that whatever version of the code that I am currently using is almost definitely not the most current, and so I won’t be able to do what I need to do.  I actually came up against this recently.  Some blogger had some post about how to get the Resharper tool to execute the specifications, but since I don’t have the latest version, I couldn’t set it up.  As anyone will tell you, I’m not a kool kid, still working with Subversion, and so now I have to pretend to be one and set up git?  Great.

Also, I happened to meet Aaron at the Alt.NET Open Space conference in Seattle during April 2008 through Adam Tybor (whom I know from the Chicago Alt.NET group), and, well, he made me feel old.  Young guy.  One of the ‘Eleutian Guys.’

So, there’s a nice stereotype for us.  OSS tool created by some young guy to fulfill the need of an incredibly small user base and you have to be a kool kid to be able to get the latest source.  Oh, and I’m guessing there’s no paid support for it either.

On the other hand, there’s Visual Studio.  Now, I’ve never worked for Microsoft (despite what some people think) and so don’t actually know for sure how the team that develops the IDE works, other than what I’ve read, or what I think I’ve read, about it.  Which is fine, since we are talking about misperceptions here.

Obviously, Visual Studio has a *huge* user base.  It is definitely CSS and has a team of God knows how many people working on it.  Because it has such a huge user base, and because the team that develops it works within an environment with a pretty significant regression test suite, there is a really good chance that if and when I use the latest version of it (which is whatever the current release is), it will behave as I expect it to. 

There’s also an incredible amount of documentation available for it, both online and offline.  If I have questions about how to use it, I can search through it and usually find what I need.  Even if I get ‘crazy’ and choose to use a CTP or Beta or RC version of it, I can usually find out how it works.  There are almost always QuickStarts to, well, help me start quickly on common usage scenarios.

And there is both paid and unpaid support for it, if and when I need it.

So, there’s another nice stereotype for us.  CSS tool created by a large team to fulfill a very large user base, with a solid source you can get, and which can provide you with good support if and when you need it, which is less likely, since there is comprehensive documentation for it.

Stereotypes are fun, but….

An obvious misperception of OSS is that it has to be produced by the young guy with the small user base.  The obvious counter-example is Linux (or, at least, the Linux kernel).  As far as I understand it, the vast majority of the people who work on Linux are paid to do so, whether by IBM or other companies.

But there is another contrast that I think is just as important.

The CSS you should be comparing OSS to is your company’s code base

Most of the clients that I have consulted or contracted for have a significantly sized code base that is internal to the client and which drives their business (to some extent).  And some of these clients have a significant user base, i.e., the internal users.  Despite this, there is nothing about a significantly sized code base with a significant base of internal users that guarantees the documentation is well kept, or that the development standards follow anything like decent practices/methodologies/whatever.

Within the .NET OSS space, it is easy to point to certain projects that are almost undoubtedly better developed and better documented.  NHibernate, Rhino Mocks (‘headed’ by Ayende), and StructureMap (‘headed’ by Jeremy Miller) are obvious examples.  Are the coding standards and/or documentation levels of your company’s code base equal to these?  I’m willing to bet, probably not.

In fact, I’m beginning to think that, in terms of quality, one should think of one’s company’s code base as an OSS project that just doesn’t happen to have an OSS license. 

Does this mean that OSS is inherently better?  No, I don’t think so.  But, I do think that the quality of the code that someone produces within the context of the development practices/methodologies/whatever that produces it is largely independent of whether it is OSS or CSS.

posted @ Friday, September 04, 2009 8:08 PM | Feedback (4)
The new VisiCalc

Though I sometimes feel like it, I’m not actually old enough to know what VisiCalc was, but Joe Stagner wrote a post in which he made an analogy between it and ASP.NET Web Forms:

VisiCalc was THE spreadsheet of the day !  It might have done 2% of what Excel 2010 does, but when it was released it was an AMAZING innovation…ASP.NET/Web Forms was a perfect match for the skill set of the day (2001/2002) and was exactly what the industry needed at the time.

I don’t know that I would say it was a *perfect* match, but I generally agree with the sentiment.  It was SO much better than ASP, and made programming for most situations so much easier.  I remember feeling a little jealous that one of our sister companies at the time (HSN.com) re-wrote their search functionality using Beta 3 (I think) of ASP.NET 1.0, and got a go-live license, and got immediate benefits in terms of performance and scalability.

Not everyone shares the same view.  Jimmy Bogard wrote a counter-point post that takes a decidedly different view:

When I moved from ASP classic to WebForms, the whole postback business was just completely baffling…But in the end, I could really care less about the WebForms stalwarts.  I’ll never work with it again by choice…the WebForms architecture wasn’t really needed in the first place.  But hey, if you’re the type of guy that enjoys getting poked in the eye with a sharp stick, you can have your ViewState.

I think this is wrong on a number of levels.  One of the canards that gets thrown about a lot is that postback and ViewState are inherently confusing.  They aren’t.  In almost every instance, you can pretty easily work with them, and you can get a lot of benefits.  Plus, I’ve never really understood the unadulterated love that some people (maybe not Jimmy, hard to tell) seem to have for the HTTP protocol.  As a programming paradigm, I’d say that HTTP actually really kinda sucks.  There’s nothing really all that cool or fun about POST and GET (PUT is kinda sexy though…LOL), and request/response blows.  It just happens to be that you can do a lot with it, despite its suckitude.

And you can have testable apps with WebForms, it’s just harder.

In any event, MVC (whether it is the ASP.NET version or not) does have a number of benefits over WebForms.  I wouldn’t go so far as to say that you *must* use it for sophisticated apps (there are tons of very sophisticated apps using WebForms…somehow they figured out that ViewState thing-y), but it certainly should be considered.

Regardless, there is nothing ‘revisionist’ about how ASP.NET could make web programming significantly better when it was released.

posted @ Tuesday, September 01, 2009 10:21 PM | Feedback (0)
Marillion – Easter (Live)

This was previously posted on YouTube then pulled and has now appeared again.  No idea why it was pulled in the first place.

This was the best song on the first album after Fish left, and I liked it when I first heard it, but forgot about it.

Years and years later, I rediscovered it.  Steve Rothery rarely solos extensively (for whatever reason), but this is just about his best.

posted @ Tuesday, September 01, 2009 1:18 AM | Feedback (0)
I hate fantasy football

There, I’ve said it.

I hate it.  Really.  A lot.

There are many reasons why I hate it.  I hate tuning into a sports radio station and finding people discussing it.  I hate watching ESPN (which I hate in general anyway) and finding some blowhard talking about some 2nd receiver on the Bengals (or whatever) and why everyone should pick him up for the week.  I hate hearing that Donald Brown is at worst going to get 900 yards and 8 TDs for the Colts this year (um, no, at worst, he’s going to blow out a knee in Week 2 and be out for the year).  I hate wanting to know the scores of other games and having to sit through the horrible scroll of fantasy stats (T. Owens, 2 catches, 42 yards, 1 TD) to get to them.

More to prove my point….good friend of mine.  Soccer dude.  Played collegiately for IU.  Hates football (not futbol).  Went to IU, so understandable, since they haven’t had a good team….ever.  Like I said, hates football, college or pro.  No interest in it.  Member of two fantasy leagues during the NFL season.

I rest my case.

posted @ Tuesday, September 01, 2009 12:29 AM | Feedback (2)