August 2010 Blog Posts
cqrs for dummies – an interlude – Greg Young’s sample app

Greg has posted that he has created a sample app for CQRS, which you can find here.

It is indeed called “SimplestPossibleThing”, and the name is pretty accurate, but it is good code to review.  I especially like “InfrastructureCrap.DontBotherReadingItsNotImportant.cs”.  I mean, come on, what else would you look at first?

Seriously though, it’s important to see a simple implementation of all of the main components.  Check it out.

posted @ Tuesday, August 31, 2010 1:33 PM | Feedback (0)
I Like Windows Home Server

I took my computer into the place I bought it, so that I could get them to figure out why it won’t turn on.  I’m hoping it’s just the power supply that’s crapped out.  Turns out there’s a six-day waiting list till they even get to it, but out of stubborn principle, I’ll wait.

There are just certain things I don’t want to do anymore, damn it.  I used to be a line cook (though a friend of mine is/was a chef, I never got that bug, which is probably a good thing, because being in the food industry, for most people, for the most part, largely sucks.  I have nothing but respect for the people who get up and work in front of 500-degree ovens on the line, often at two places, so 8-16 hours a day sometimes, day in and day out, week in and week out, in many cases immigrants making a better life for their kids.  They are fantastic people.  I just don’t want to do that sort of thing anymore), and amongst the many tedious jobs was peeling and deveining shrimp.  Brings down food cost to have manual labor do it.  But I’m an adult now and if it costs me an extra buck or two to buy it already stripped, I’m doing it damn it.  Same thing with computers.  I’ve opened waaaaayyyy too many computer cases in my time, and unless it is something like adding a video card, I don’t want to.  So if I have to pay a stupid $70 diagnostic fee and wait six days because I don’t want to replace a power supply, I know it is a waste of time and money, but damn it, I don’t care.  But I digress.

The likelihood that I will actually wait for six days without just buying a new computer and copying over any data not backed up is pretty close to nil (I’m guessing in about 4 hours I’ll surrender), so one of the things I’ve been doing is checking to see what exactly gets backed up automagically every night.

As it turns out, apparently just about everything.  It’s sweet.  I’m not sure how they do it either.  I keep Outlook open all the time, and normally that locks the .pst files.  Nope, backed up.  SQL Server data and log files?  They’re in the backup. 

Of course, I won’t know for sure what’s up till I attempt a restore, so I’m copying over my entire ‘My Documents’ folder to another machine to see if the mail files load, but it looks like, short of hibernation files and the like, it copies the entire damn hard drive.  Sweet.

Now, to try to resist dropping a grand on a new rig……..

posted @ Tuesday, August 31, 2010 1:11 PM | Feedback (0)
An effective test strategy

Jimmy Bogard has a good post up on LosTechies about the different types of tests you might run on a code base.

As he points out, categorizing different types of tests can be annoying, but he separates them into full-system, subcutaneous, and unit tests.

I’m long on record bitching about unit tests.  Unless you are building a framework, where you don’t know up-front how your framework might be used (users are fun that way), TDD is terrible.  Some form of ‘godawful syntax’ BDD type spec stuff is much better, since you know what the inputs will be from the end users (as fun as they are) and so that is all you really need to test, and it drives what your tests will look like (as opposed to the random “let’s create 8 vars and see what happens” method of testing TDD encourages).

I’m interested in his distinction between full-system and subcutaneous testing.  When building apps (like the web apps I’m typically working on), I prefer the full-system tests.  Those are what your users are actually doing and you can actually work with them if a spec fails.  When dealing with batch files, I also prefer full-system tests (there’s almost nothing more useless than a TDD-style test of a batch file process).  At the same time, I have also used (what I didn’t know were called) subcutaneous tests, where you are, more or less, testing your system end-to-end but without UI interaction.  To use a simple example, if you are testing tax rate calculations, it’s often really important to know that the ‘behind the scenes’ processing is giving you the right tax information for a given order, and there’s no need to drive this from a UI.
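To make that concrete, here is a rough sketch of what such a spec might look like (NUnit syntax; the Order, TaxCalculator, and the tax rate are all made up for illustration, not from any real project):

using NUnit.Framework;

public class Order
{
    public decimal Subtotal { get; set; }
    public string ShipToState { get; set; }
}

public class TaxCalculator
{
    // Stand-in for the real 'behind the scenes' processing.
    public decimal CalculateTaxFor(Order order)
    {
        return order.ShipToState == "IL" ? order.Subtotal * 0.1025m : 0m;
    }
}

[TestFixture]
public class TaxCalculationSpecs
{
    [Test]
    public void Gives_the_right_tax_for_a_taxable_order_without_touching_the_UI()
    {
        var order = new Order { Subtotal = 100.00m, ShipToState = "IL" };

        var tax = new TaxCalculator().CalculateTaxFor(order);

        Assert.AreEqual(10.25m, tax);
    }
}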

In any event, I think it’s a good read.  Check it out.

posted @ Monday, August 30, 2010 8:35 PM | Feedback (0)
Reminder: Check your backup strategy

Backups are great, but only as great as the last time they were tested.

I run multiple machines on my home network, but one is always designated as the main one, and sure enough, it bit the bullet Sunday night.  I think it is only the power supply.  At least I hope so, because if it is the motherboard, pain is in my future (well, more pain than getting the power supply changed, something I used to be able to do in my sleep but which I now pass off to the store I bought it from because I am old and slow and stupid).

Because I use Windows Home Server, most everything is backed up automatically each night.  At least I think it is; that is one of the things I’m verifying now.  The code that I am working on is distributed on multiple machines (albeit manually most of the time), so I’m confident of that part.

Especially if you aren’t running a multiple machine home network, are you confident your important data is being backed up?

posted @ Monday, August 30, 2010 8:19 PM | Feedback (0)
Creative Zen X-Fi 2 64 GB Review

I wanted to like it.

I have used various music players over the years, but mainly the Creative Zen.  I don’t use iPods because I have too many things that are in .wma format (yeah, yeah, I know) and I hate iTunes.  I’ve used Sansa products before, but after returning the 3rd one with a white screen of death, I decided they didn’t quite have the most robust quality control.

My current player (which I’ve had for a few years now) is a 32 GB MX and it is close to full.  You can put things on an extra SD card, but you have to access it differently from the other content.  And it’s a few years old.  And a contract was renewed.  And I got a raise.

Anyway, for the longest time, 32 GB was about the high limit, so when I noticed that Creative had a 64 GB unit, I decided to give it a try.  I was pretty disappointed, which is why I’m returning it.

There were two types of problems with it: technical and usability.  Since I didn’t try it for very long once the technical problems popped up, perhaps I would have figured out the usability problems eventually.  The immediate one is that on my current player, if a song pops up, you can pretty quickly find a song you like better if it happens to be on the same playlist or loaded close to it.  Perhaps because the X-Fi is touch-enabled, I couldn’t figure out how to do that.

The technical problems were the deciding factor.  When you add content to a Zen, when you disconnect it from your computer, it has to rebuild indexes (or something), and is unusable while it is doing it.  On the MX, it takes maybe 20 seconds to do it.  With the X-Fi, it took 20 *minutes*.  Even if you didn’t add content, it would rebuild indexes after disconnecting (at least it did once).  So that really wasn’t going to work out.

The other issue is that the firmware has a hard-coded 8000 file limit on what you can load into the X-Fi.  Well, I already have more than that many files on my MX, so I had the luxury of a 64 GB player that I couldn’t add another file to.  Well, I guess that might not be technically accurate.  It did load the files, but you could only ‘see’ 8000 of them.  After the fact, I saw some other online comments about this.

Needless to say, I decided it wasn’t worth the time to screw around with it.  Back to Amazon it goes.  Maybe a Zune, or maybe when the new iPods come out in the coming weeks, I’ll give them a look.

posted @ Saturday, August 28, 2010 5:02 PM | Feedback (0)
Initial Impressions of UberProf, Part 2

In a previous post, I talked about the first 30 minutes or so of working with the various parts of UberProf.  It’s been a bit longer, so I wanted to update some of that.

EFProf

Previously, I noted the following:

“Since it is long before I care about profiling this app (since it is a test database used to satisfy my specs), I won’t go into much of the details, but one interesting thing I noticed is that, in a routine where I am doing naughty N+1 things in a loop, it only flags the last three selects as N+1, even though all six are identical (except for the values in the where clause).  I’m going to have to keep an eye on this.”

I can update that by noting that this is per design.  Let me digress as I am wont to do.

What counts as ‘bad’ when it comes to profiling SQL statements is, to a certain extent, context-dependent.  Ayende (and whoever else) designed the profilers to note things as alerts based on their experience.  But, he also made a number of them configurable.  The Select N+1 alert is one of those configurable items.  The ‘naughty’ things I was doing happened to be in a loop where there were six of them.  The first three did not get alerts in the UI, the last three did.  Well, as it turns out, the configurable piece of this was very clearly set to only start to mark an alert on the fourth item, and so the UI did exactly as it was configured to do.

IF (and this is a big if) the end user (me) thought that this setting should trigger on the first Select N+1 or the 97th, I could have configured it that way.  Once I set it to alert on the 1st Select N+1 (as a test), it did so.

What I like about this is that the profiler will always (well, it should, see below for more) profile every call, and allow me to set some settings to the levels that I prefer.  It’s up to me to decide those levels.  In other words, the tool allows me to put in my own expectations.

In my mind, this is a very good thing.

L2SProf

Things get a little more rocky here.

As the previous post noted, the profiler didn’t work on my L2S web app.  It either threw an error (using the v4 dll) or didn’t profile (using the non-v4 dll), neither of which was a good thing.

After mailing the issue to the Google support list and not getting a response in a day or so (I think), I did email Ayende directly, after which all communication has been through the support list.

This should be taken with a large grain of salt, but it was a bit troubling that the initial message made it through proper moderation but didn’t get action until I emailed Ayende.  I believe he happened to be in the midst of travelling as he always does, so I understand it, but it was troubling, not so much to me personally, since I knew it would be addressed, but more to the business partners I’m working with, who don’t know Ayende, don’t know Alt.NET, etc.  This is something that is endemic to ‘smallish’ support, and something I am very painfully aware of.  Unless I am personally working on a support issue, various external groups don’t necessarily know that their questions have been acknowledged.  To be perfectly clear, with large support groups, you often have the reverse issue, where you might get some immediate acknowledgement that your support question has been noticed, but you don’t know if there is an actual human being working on it till it works through whatever internal ticketing/queuing system they happen to have.

The long and the short of the whole shooting match is that my code was creating L2S data contexts in a way that was different from the standard way of doing it (I am overriding the standard constructor), and once I made that clear, Ayende was able to give clear and explicit instructions about something I needed to change in my code base, while also fixing, with a new build on his end, the “security transparent method” error I was getting.  (Note: As I’m writing this, there may be a lingering issue, but I’ll leave that to another post if it really is an issue).

Having said that, there were some ‘broken window’ problems.  To use an analogy, in NYC, at some point, there was a crackdown on addressing broken windows.  Though I completely leave aside the question of the efficacy of the thing, the idea was that if the authorities in Times Square started to address issues like broken windows, real serious crime would also go down because of the added focus.  An individual broken window isn’t a serious crime issue, but if it is addressed as one, then there’s an expectation that serious crimes would be addressed.

All of the profiler tools have a notification when a new build is available.  Within L2SProf, when you clicked on the notification, the app crashed.  This is not a confidence builder.  It’s a broken window.  “If the new build notification process doesn’t work, what else doesn’t work?”  It’s a simple thing, one might say a trivial thing, but it does make a difference.

Now that I’ve said all that, let’s get to the good stuff. 

Once the profiler started working, it immediately raised very clear and targeted alerts about what could be improved.  Having only worked with it for an hour, I can say these alerts seem to be spot on.  There are very clear improvements that can be made to my app and how I coded various methods.  This is exactly why I bought the thing.

Quick Summary

The process by which I got the L2SProf app to work was a bit less smooth than I would have liked.  If I had not bought the thing and just done the trial, I probably would not have made the purchase but put it on the back-burner.  Since I had bought the thing, having worked through the initial difficulties, it seems to be worth the price.

I plan on posting a before and after comparison post once I analyze some data further.  I am encouraged by what I see so far.

posted @ Friday, August 27, 2010 9:06 PM | Feedback (0)
cqrs for dummies – an interlude – notes from Greg Young’s USA Weekend Class

Greg Young has been doing a number of EventBrite Live Meeting type sessions over the last month.

I haven’t been able to attend any of them, but Jeff Doolittle posted a summary (if you can call a 17-page PDF a summary) of the latest one.

Good stuff, check it out.

posted @ Thursday, August 26, 2010 12:24 PM | Feedback (2)
Public Image Limited–Ease (not even remotely live)

Normally, I wouldn’t post a ‘video’ where there isn’t a video, really, just a shot of the cover.  The oddness of the lyrics of the song, combined with the brilliance of the guitar solo and the fact that there was never a live version of this that could do it justice, makes this an exception.

Recorded arguably before Steve Vai was Steve Vai, this song was on a release by PIL that was, well, odd, in many ways.  If you got a cassette (“Daddy, what’s a cassette?”…”Shut up”), the title of the release was “Cassette.”  If it was an album…you get the picture.  In one of the weirder combinations of all time, you get John Lydon singing a song with an unbelievable guitar solo from Vai.  I haven’t the slightest idea how this was thought up, but it works.  Supposedly, Vai thinks this is one of his best solos ever, but I read that on the Interweb, so who knows how accurate that is.

What I remember from the review in whatever guitar magazine I was reading at the time when the release came out was the description of the solo as “pinballing.”  No, I don’t know exactly what that means either, but it seems to fit.  Enjoy.

Procreation
Have a nice day
These things in ease

What makes you happy
Your misery
These things in ease

Susan and Norman
You're so normal
Susan and Norman
You're so normal

 

posted @ Friday, August 20, 2010 10:02 PM | Feedback (0)
Can you help me find the song that matches this lyric, #1

Google search is an awe inspiring thing (well, sort of).  I’ve mentioned before that I sometimes wonder how developers wrote code before Google existed.  I simply cannot seem to memorize syntax well.  Bouncing between css, html, c#, t-sql, vbscript, perl, foxpro, and other things these last couple of years, without some combination of Google and intellisense, I would be lost.

Ironically, I also have, at times, near photographic memory.  Now, I think just about everyone has near photographic memory when it comes to significant events in their lives.  For instance, I remember where I was when the Challenger exploded, or what was happening the day of 9/11, and also for more personally significant events, and that isn’t something that I think is that unusual. 

But I also have near photographic memory of all sorts of things that, one would think, wouldn’t qualify as being significant enough.  One area appears to revolve around song lyrics.

To give an example:

Back in what must have been 1997, I was driving to Boca Raton, FL from South Beach to teach one of my classes at Florida Atlantic University.  A song came on the radio that I really liked, but the station didn’t identify who it was by at the time.  I think I knew at the time that it sounded a lot like Peter Gabriel.  8 years later, I remembered the experience, and remembered the line “what was it we were thinking of.”  Through the power of Google, I was able to identify it as “Secret World” off of Gabriel’s Us album, and it remains one of my favorite songs of all time (you can guess a YouTube post will be coming at some point soon if there’s a decent version of it).

Why is that something I would remember 8 years later?  Beats the heck out of me.

Anyway, this leads to the point of the post.

Given the client I was working at, I know it was around 2004.  The client was in an industrial warehouse type place, and so for lunch, I would go drive around for half an hour and listen to the radio, then pick something up and come back.  The radio station was WXRT, which at the time had a terrible web site that wouldn’t give you the playlist for the day, so you couldn’t just go look up what you heard.

The singer was someone I didn’t recognize (a male), and the two lyrics that I remember are:

“I haven’t said grace at dinner/Since the day my father died” and in the chorus “Sing a simple song” (which I think was sung by a female or female lead chorus, but that I don’t remember as strongly)

My Google-fu is strong, but I’m drawing a blank.  “Sing a simple song” is a lyric that is fairly common (I think I used it back in the day when I wrote songs), so you get a lot of hits on Sly and the Family Stone or the Temptations, but the other would be a straight hit on Google, if anyone had ever posted it.

I thought that Google recorded everything that had ever happened in the course of human history, but perhaps not.

Or I could be “mis-remembering” it, but I don’t think so.

posted @ Friday, August 20, 2010 8:52 PM | Feedback (0)
Initial impressions of UberProf

After being convinced that Ayende wouldn’t steal my data and sell it to the Russian mafia, I finally bought UberProf, and here are my initial impressions of working with it today (keep in mind this is a total of about 30 minutes of real work).

UberProf isn’t actually a single application, it is a license to use any of the four (with more on the way) existing profilers, which target NHibernate, Hibernate, Linq to SQL, and Entity Framework.

This made the obligatory “thanks for giving us your money” email more confusing, since it explained that to use the license key, I should “Open UberProfiler” (after having downloaded it from nhprof.com) and browse for the license key.  I’m guessing that he simply re-used a stock “thanks for giving us your money” email that really applies to the individual applications, and didn’t think to clean it up with better instructions.

The next problem I had was that I couldn’t download the Hibernate profiler.  All others downloaded just fine, but the Hibernate profiler download would either never initialize (no ‘save as’ dialog box) or hang.  I mailed Ayende about it (since I had an email thread active about UberProf anyway), and his 3G chip was working, so he responded with some confusion as, of course, the download worked on his machine.  The downloads are through Amazon AWS and neither IE nor Firefox would let me get the file.

After 30 minutes, it just started working.  The Hibernate profiler is, for me, the least important profiler of the bunch, but I paid for it, so wanted to make sure I downloaded it and at least launched it so I could register it (not that I ever lose license keys *cough*).

There are two projects that I am working on for which I purchased UberProf.  The ‘really, really close to launch if the business partner ever gets around to final steps’ project uses Linq to SQL, while the ‘very much in development’ project uses EF, so I started with the EF one.

If I ever get around to it, I might post my current use of SpecFlow, but essentially I create the UI (these are web apps) and ‘hook it up’ to my bastardized mix of quasi-cqrs and DDD-lite infrastructure, and write my specs without having any ‘real’ data access at all (I just new up whatever objects I need at the time).  Then, when I need more ‘real’ data to get the UI to look and behave more reasonably like it will in production, I create a test database, with no real consideration to schema or anything, and put something in front of it.  In this case, the ‘something’ was EFv4.
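To give a flavor of what those specs look like, here is a rough sketch (the Cart class and the step wording are invented for this post, and I’m leaving out the .feature file, which is just the plain-English Given/When/Then text that these bindings match):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

// No 'real' data access anywhere; the steps just new up whatever they need.
public class Cart
{
    private readonly List<decimal> _items = new List<decimal>();
    public void Add(decimal price) { _items.Add(price); }
    public decimal Total { get { return _items.Sum(); } }
}

[Binding]
public class CartSteps
{
    private Cart _cart;

    [Given(@"an empty cart")]
    public void GivenAnEmptyCart()
    {
        _cart = new Cart();
    }

    [When(@"I add an item costing (.*)")]
    public void WhenIAddAnItemCosting(decimal price)
    {
        _cart.Add(price);
    }

    [Then(@"the cart total should be (.*)")]
    public void ThenTheCartTotalShouldBe(decimal expected)
    {
        Assert.AreEqual(expected, _cart.Total);
    }
}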

Hooking up the application to allow the EF Profiler to profile it took about 2 minutes.  You register a DLL, you add an entry to your Application_Start routine, you launch the profiler, you start using the app, and voila, the profiler be profiling. 
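For reference, the Application_Start entry is just a single call along these lines (this is from memory, so treat the exact namespace as an approximation and check the docs that come with the download):

// In Global.asax.cs, before anything touches the ObjectContext.
protected void Application_Start(object sender, EventArgs e)
{
    HibernatingRhinos.Profiler.Appender.EntityFramework.EntityFrameworkProfiler.Initialize();
}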

Since it is long before I care about profiling this app (since it is a test database used to satisfy my specs), I won’t go into much of the details, but one interesting thing I noticed is that, in a routine where I am doing naughty N+1 things in a loop, it only flags the last three selects as N+1, even though all six are identical (except for the values in the where clause).  I’m going to have to keep an eye on this.

Since the L2S project is more important and closer to production, I was encouraged enough with the ‘plug and play’ nature of the thing that I decided to plug in L2S in the same way.  2 minutes.  Register a DLL, add an entry to App_Start, launch the profiler, start using the app, and voi…..oh, damn.

Yellow screen o’ Death:

Inheritance security rules violated by type: 'HibernatingRhinos.Profiler.Appender.LinqToSql.SqlProviderProxy.InitializeProxyDefinitionAttribute'. Derived types must either match the security accessibility of the base type or be less accessible.

I wonder what the hell that means (I mean, I can read….).  So, I sent it off to the Google Support Group, since I would like to have some idea of turnaround time of using it, rather than blindly mailing Ayende.

After some more dicking around (technical term), I’ve gotten this different Yellow screen o’ Death:

Attempt by security transparent method 'HibernatingRhinos.Profiler.Appender.LinqToSql.LinqToSqlProfiler.SetupLinqToSqlIntegration()' to access security critical method 'System.AppDomain.add_AssemblyLoad(System.AssemblyLoadEventHandler)' failed.

Could be how I have IIS7 set up.  I’ll have to play around with it some more.

So, overall, the initial impression is a bit mixed (keeping in mind, again, that this is about 30 minutes into it).  Not exactly frictionless.  Since I haven’t actually been able to use it as a profiler (though that is part of the point), I have nothing to say about that part (other than the initially strange looking select N+1 alert pattern).  I’m really looking forward to using it for what it was designed for, as I know for a fact that there are parts of my data access code in the L2S project that blow.

posted @ Friday, August 20, 2010 7:24 PM | Feedback (0)
Comments Re-enabled

Thanks to the three people who mentioned this recently and reminded me to look at it.

Turns out that the mail provider that I was using with SubText 1.9.ancient no longer worked with Subtext 2.5.blah, and there was one other config change that was required.

I guess the current John Gruber period is over.  For now.

posted @ Wednesday, August 18, 2010 8:56 PM | Feedback (2)
Firefox 4 Beta: Is this a bug or a feature (right-click behavior change)?

It took me a bit to figure out why Firefox was acting so weird.

I tend to have multiple browser windows open, each with multiple tabs (currently 8 windows, probably 70+ tabs overall).  If I go to, say, ESPN’s home page, I will rapid-fire right-click all the links that look interesting to me, then read them over time.

In Firefox 3.3.6, if you right-click a link, the first option is “Open link in a new window”, the second is “Open link in a new tab.”

In Firefox 4 Beta, the order is reversed.

I’d ‘internalized’ the order, and since I don’t normally pay attention to the options in the right-click menu, all of a sudden new windows were opening up instead of tabs.

posted @ Friday, August 13, 2010 11:05 AM | Feedback (3)
The Risk of Using OSS

Jeremy Miller has posted that he is taking a break from supporting StoryTeller and StructureMap for a bit.  His description of why he is doing so is reasonable and understandable (in my eyes at least).

He differentiates between StoryTeller and StructureMap, and in a way that (by total chance) happens to be beneficial to me.  As I’d mentioned in a previous post, I was comparing StoryTeller with SpecFlow, and though I have started with SpecFlow, I was planning on doing a phase with StoryTeller as a comparison, but given what he’s said about it, it’s probably okay to lay off of that for a while.

What I’m interested in seeing is how, if at all, it affects the usage of StructureMap.  I’m ‘whole hog’ into using it.  There’s nothing that I am aware of that can do what it does, the way that I know how to use it.  I know that there are other tools that are in the same space, and I respect the people who believe in them, but there’s too much cost for me right now to even consider switching.

The ‘risk’ that I mention in the post title is whether the fact that Jeremy is taking a deserved break (and quite frankly, the question is valid even if it wasn’t deserved) is a legit reason to avoid using it. 

If you look at the truly successful OSS offerings like Linux, you have corporate sponsorship and support through companies like IBM and RedHat, support which has eliminated any question of viability.

When it comes to non-sponsored projects (from what I’m aware of) like Jeremy’s StructureMap or Ayende’s NHProf and RavenDB, is there in fact a community that can carry on the work of those projects if the originators of those projects are not active from moment to moment?

People like to complain when, for instance, Microsoft stops active support for something like Linq to SQL, and (usually) rightfully so, especially since they don’t then open source it to allow others to continue on with it (which I don’t personally believe would actually happen if they did open source it, but I digress).

Jeremy isn’t renouncing StoryTeller and StructureMap, he’s taking a f*&king break.  How will the .NET OSS community respond and step up to the plate?  Or will people wait for him to come back?

posted @ Thursday, August 12, 2010 10:30 PM | Feedback (0)
One other point about LightSwitch that actually isn’t really about LightSwitch

I honestly don’t care that much about it (I mean, I guess I’ll try it, at some point, maybe…after Blend and a couple of zillion other things), but some random newsletter I guess I subscribe to came into my Inbox that had one paragraph that I particularly agreed with.

Tim Huckaby is some dude who has had knowledge of LightSwitch (codenamed “Kitty Hawk”) for quite some time, as the tool has apparently been in development ‘seemingly for years’ (with “seemingly” in there, it’s hard to know what that really means).  He likes it, but is well aware that other people don’t:

“Not everyone is happy with the LightSwitch announcement. What I find so interesting on LightSwitch is the reaction it continues to get in the technology elite community. No one seems to be “wishy-washy.” It’s been that way since KittyHawk plan was released internally to Microsoft insiders. LightSwitch is either vehemently loved and applauded or violently dissed.”

Call me wishy-washy on this one, but you get the point.  He goes further:

“there’s a legitimate point from the developers whose job it is to inherit these homegrown departmental applications built without governance or guidance, and then turn them into “real” applications.”

But, he adds:

“I’d want to “green-field” every application too and never inherit code and never have to do maintenance on any code. However, that is just not a realistic view of the world we live in.”

And here’s the paragraph I really like:

“So, for you developers out there who have to inherit the code of non-professional developers and absolutely abhor it: Would you rather inherit an Access application, a Visual Basic 6 application, a Fox application, a “fill in the blank” application—or Silverlight code generated by Visual Studio? Because as long as there are technically savvy non-developers, departmental applications that go viral and then have to be turned into production ready apps are going to continue to spawn.”

It is very commonplace (at least at many of the clients I’ve worked at) for ‘non-developers’ (those low-level Fisher Price people who, by the way, are often running the business) to develop tools that help them do their jobs.  This isn’t going to stop (Huckaby forgot to mention the obvious candidate, Excel).  Given that fact (and it is a fact), LightSwitch at least offers the possibility of being better than Access.  Not that you can’t, say, run a shipping department on Access (which I’ve seen done), but it isn’t something I’d recommend.

But that’s just it.  The ‘technically savvy non-developers’ (those low-level Fisher Price people who, by the way, are often running the business) don’t want to hear me even mention SpecFlow, much less try to use it.  They also don’t care if I have to take that ‘homegrown’ app and turn it into a ‘real’ application.  You know why?  They consider it part of my JOB.  Sure, like everyone else, I don’t necessarily like to do it, but the people writing the checks to cover my billing rate *rightly* think that it is part of what I do. 

I can’t really argue against that fact.

posted @ Wednesday, August 11, 2010 9:53 PM | Feedback (0)
Java Developers also suck, or why there’s nothing that bad with LightSwitch

Davy Brion has posted a cri de coeur about how he feels about .NET development, and it’s sort of par for the course in certain circles.  It’s also a bit silly.

One of the things he says is this:

“A lot of people simply don’t know that the quality of software that we, the .NET community, produce on average is really low compared to other development communities.”

He doesn’t specifically talk about Java, but I’m going to go ahead and assume that.

I’m a pure C# developer.  I can read Java code, but I can’t produce it.  Regardless, I’ve worked in numerous environments where I’ve worked with real Java developers.  And what I can say from that experience is that Java developers are just as capable of producing shit code as .NET developers.

When I was learning software development practices, because Microsoft wasn’t exactly in the business of promoting software patterns, I learned a lot by visiting sites like .  I think it is fair to say that, generally speaking, any random Java developer might know more about these sorts of patterns than any random .NET developer.  Some people think this is a big deal.  I don’t, but that’s okay.

However, my experience is that Java code sucks as much as .NET code as much as any other code because there is nothing about learning development patterns that ensures those patterns will be used properly.

I’ve never exactly worked with the mythical person who says “We implemented 21 of the 24 GOF patterns in our current application, we hope to get to all of them in the next one” but I do know of how people misuse them, in abundance.

Enter LightSwitch.  It appears to be Microsoft’s latest entry into application tooling that allows business users to develop software quickly in the sort of drag and drop way that makes Alt.NETers gag, because you know you won’t get all of the maintainability love (and whatever else) that we enjoy.  Though I haven’t seen or used the tooling (I guess it’s being released in some form in a few weeks), I’m going to go ahead and accept that it sucks from that perspective.

What Alt.NETers continually don’t get is that businesses don’t give a shit about maintainability (unless they have to).  They care about generating revenue.

It might be hard for people to accept, but millions, if not billions, of dollars of revenue have been generated building applications using VB6 or Access 2002 or Excel whatever.  This is a fact.  I’ve seen such applications.  I’ve seen the fuckers built using VBScript for God’s sake.

As a ‘Craftsman’, I would never recommend that an application be built using such things, but you know what?  If a business can gain time to market building an application that runs on crappy Excel macros, they not only do not care if I complain that I can’t use IoC, they *shouldn’t*.

LightSwitch, to me, looks like a less sucky version of tooling that doesn’t require Access.  In that regard, it’s an improvement.  As someone who wants to use BDD-style development on every project, is this something that I think I would like using?  Hell no.  When confronted with a real world business scenario where something like this builds revenue quicker and in a more sustainable way than what I would like, do I really have something to complain about?  Hell no.  You can call it ‘Fisher Price’ development all you want, but no one who is in a position to make decisions is going to give a shit.

If you’re a .NET developer, build your software to the ‘highest’ levels of quality/craftsmanship/whatever you like.  The existence of LightSwitch should have *no* effect on you whatsoever.  If that existence bothers you so much, go write Ruby or Java or whatever.  No one will miss you.

posted @ Tuesday, August 10, 2010 9:58 PM | Feedback (3)
cqrs for dummies – 3 of n – the event store layer
Series link here.

As a reminder of what I’m talking about, here’s the picture from Mark:

[Image: DDDDivision_big]

What I’m going to be talking about is section #3 from Mark’s picture, and in particular, I’m going to be going over a number of concepts including:

  • Domain events
  • Write-only event store, or “Accountants don’t use erasers”
  • Compensating actions
  • Automatic audit log
  • Replaying Events
  • Data mining
  • Event Sourcing

The general nature of Domain Events

As we saw from the previous discussion about the command layer, task specific commands are created and validated (in the sense of passing input validation, e.g., email address must actually be an email address, last name cannot be longer than 25 characters and so forth) which are then passed through the command handler and enter the domain.  To use the previous example, the relevant domain entity will receive a ReduceCustomerCreditRatingDueToLatePayment command through the appropriate method (note that the relevant domain entity will depend on the domain, but it is easy enough to imagine a domain model where it is the Customer entity.  Note also that I am not talking about where the entity comes from, which I will touch on when I talk about Event Sourcing).

One thing that is important to keep in mind is that the domain entity can either accept or reject a command (depending on the context), but that a Domain Event is raised either way, and stored in the Event Store.  Commands are generally accepted or rejected depending on whether they pass what you might call ‘business’ validation.  What are the rules of the domain?  Let us suppose that customers can have a status level of the stereotypical Platinum/Gold/Silver, etc. type, and that our domain has a rule that a customer’s credit rating will not be reduced due to one late payment if they are of gold status or above.  In this instance, the command would be rejected if it was received by a Customer that was of at least Gold status and had no previous late payments; otherwise, it would be accepted.

In either case, a Domain Event would be produced (CustomerCreditRatingReducedEvent and CustomerCreditRatingReductionRejectedEvent could be names for these events; in general, you want your Domain Events to be named such that they are understandable by your business users, so they should be involved in naming them) and would, among other things, be stored in the Event Store.
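Here is a rough C# sketch of the idea (all of the names, the penalty amount, and the in-memory event list are invented for illustration; a real implementation would likely route events through some dispatcher or straight to the Event Store):

using System;
using System.Collections.Generic;

public enum CustomerStatus { Silver, Gold, Platinum }

public class ReduceCustomerCreditRatingDueToLatePayment
{
    public Guid CustomerId { get; set; }
    public int Penalty { get; set; }
}

public class CustomerCreditRatingReducedEvent
{
    public Guid CustomerId { get; set; }
    public int Penalty { get; set; }
}

public class CustomerCreditRatingReductionRejectedEvent
{
    public Guid CustomerId { get; set; }
    public string Reason { get; set; }
}

public class Customer
{
    private readonly List<object> _uncommittedEvents = new List<object>();

    public Guid Id { get; private set; }
    public CustomerStatus Status { get; private set; }
    public int LatePaymentCount { get; private set; }
    public int CreditRating { get; private set; }

    public Customer(Guid id, CustomerStatus status, int creditRating)
    {
        Id = id;
        Status = status;
        CreditRating = creditRating;
    }

    // The entity decides whether to accept or reject the command, but a
    // Domain Event is produced either way.
    public void ReduceCreditRatingDueToLatePayment(ReduceCustomerCreditRatingDueToLatePayment command)
    {
        var isFirstLatePayment = LatePaymentCount == 0;
        LatePaymentCount++; // the late payment itself still happened

        if (Status >= CustomerStatus.Gold && isFirstLatePayment)
        {
            _uncommittedEvents.Add(new CustomerCreditRatingReductionRejectedEvent
            {
                CustomerId = Id,
                Reason = "One late payment does not reduce the rating of a Gold+ customer"
            });
            return;
        }

        CreditRating -= command.Penalty;
        _uncommittedEvents.Add(new CustomerCreditRatingReducedEvent { CustomerId = Id, Penalty = command.Penalty });
    }

    // Whatever persists to the Event Store pulls these off and saves them.
    public IEnumerable<object> UncommittedEvents { get { return _uncommittedEvents; } }
}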

Why would you want to store events produced by a rejected command?  Your domain isn’t changed by a rejected command, the data isn’t affected, so why go through the trouble?  There are a number of reasons why it is important to do this, but think of even rejected commands as events that have significance to the business.  It’s a tricky concept, but try to picture it this way: when moving to cqrs and moving to the use of commands, one of the things you are trying to focus on is the importance or the significance of things that are happening and the specific tasks that people are trying to accomplish.  Thought of in this light, it can be very important to know that these things are being rejected.

To keep with the example, it might very well be important to the business to know that a certain percentage of late payments are not affecting credit ratings.  If you are in an expensive niche market, it might be very valuable business knowledge to know that the rate of late payments amongst your elite customers is increasing.  The business might want to take action if it knows this.  You might want to know if your standards for Gold status are too lenient.  There are all sorts of scenarios that you might want to know about, and knowing that commands are being rejected can be the key to this knowledge.

I will touch on this again when I talk about Auditing and Data Mining, but for now, let’s talk about the Event Store.

The Event Store, or why accountants don’t use erasers

The Event Store, as you might have guessed, records the history of the events that occur in your domain.  There is no technical restriction on how these events are persisted or with what technology.  You could use an RDBMS such as SQL Server, you could use a Document Database like RavenDB, you could use an ODBMS like db4o, you could use just about anything, as long as you can store the events in a clear fashion (when talking about Event Sourcing, it may turn out that some technologies seem better than others, but we’ll get to that).  You want to be able to trace the history of events as they occur to any entity in your domain, so somehow recording the order is important (timestamps are generally good enough), but how this is implemented is, well, an implementation detail.

There’s one other very important aspect of the Event Store, and that is that it is write only.  Events come in and are recorded, but once they are recorded, they aren’t changed, and they aren’t deleted either.  The idea, as very well explained by Pat Helland in his post “Accountants Don’t Use Erasers”, is that you continually record events as they happen, and you want to keep this record intact.  You might need to make a correction to a previous event, but you don’t do that by changing the record of the previous event, but by producing a new event altogether.
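In code, the shape of the thing can be as simple as this (my own names, not from any particular framework; a real store would also worry about concurrency, serialization, and so on):

using System;
using System.Collections.Generic;
using System.Linq;

// Append and read back, in order.  No update, no delete.
public interface IEventStore
{
    void Append(Guid streamId, object @event, DateTime occurredOnUtc);
    IEnumerable<object> GetStream(Guid streamId);
}

public class InMemoryEventStore : IEventStore
{
    private class StoredEvent
    {
        public object Event { get; set; }
        public DateTime OccurredOnUtc { get; set; }
    }

    private readonly Dictionary<Guid, List<StoredEvent>> _streams = new Dictionary<Guid, List<StoredEvent>>();

    public void Append(Guid streamId, object @event, DateTime occurredOnUtc)
    {
        if (!_streams.ContainsKey(streamId))
            _streams[streamId] = new List<StoredEvent>();

        _streams[streamId].Add(new StoredEvent { Event = @event, OccurredOnUtc = occurredOnUtc });
    }

    public IEnumerable<object> GetStream(Guid streamId)
    {
        return _streams.ContainsKey(streamId)
            ? _streams[streamId].OrderBy(e => e.OccurredOnUtc).Select(e => e.Event)
            : Enumerable.Empty<object>();
    }
}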

Again, there are many reasons why you would want to do this.  A very basic one is that by doing so, you have a historical picture of the state of your domain as it appeared at any given time.  You know what happened in your domain and when.  Another good reason involves the notion of Compensating Actions.

Compensating Actions

I find it easier to understand the notion of Compensating Actions when using a standard example involving inventory allocation and order placement.

Suppose I’m a customer on an e-Commerce site and I order the limited edition Penguins Winter Classic Jersey, of which there are only 300.  A command is issued that enters the domain, where it is processed and produces an event that is stored in the event store.

Due to a tragic fire at the vendor warehouse, 200 of the jerseys are lost.  Because of this, up to 200 orders will no longer be able to be fulfilled (for the sake of argument, let’s pretend you can’t source them from another vendor) and so will need to be handled.

A bad company will just cancel the orders, while a better company will try to mollify the miffed customers, but in either case, you don’t want to delete the orders.  You don’t want to delete the original event, or change it.  Instead, you will need to perform some other action, a Compensating Action, that produces a new event and might have the end result of your domain being in the same state as if you just deleted or changed the original event.
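Using the toy event store sketched earlier, the idea looks something like this (all of the event names and values are made up for the example):

using System;

public class OrderPlacedEvent
{
    public Guid OrderId { get; set; }
    public string Sku { get; set; }
}

public class OrderCancelledDueToInventoryLossEvent
{
    public Guid OrderId { get; set; }
    public string Reason { get; set; }
}

public static class CompensatingActionExample
{
    public static void Run(IEventStore store)
    {
        var orderId = Guid.NewGuid();
        store.Append(orderId, new OrderPlacedEvent { OrderId = orderId, Sku = "PENS-WINTER-CLASSIC" }, DateTime.UtcNow);

        // Later, after the warehouse fire: compensate, don't erase.  Both
        // events stay in the Event Store forever; the read side simply ends
        // up showing the order as cancelled.
        store.Append(orderId, new OrderCancelledDueToInventoryLossEvent
        {
            OrderId = orderId,
            Reason = "Vendor warehouse fire; jersey cannot be fulfilled"
        }, DateTime.UtcNow);
    }
}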

You want to do this with everything in your domain.

It might be helpful to think of the contrast between traditional ACID semantics in a relational database with the idea of long-running transactions.  When performing a typical unit of work in a good old RDBMS, there’s the idea that all of the actions succeed or fail as a unit.  If one of them fails, then they all should fail, and the end result is that the data within the database looks as if the attempt never took place (it will actually be recorded in the transaction log normally).  While this is going on, tables are locked (among other things). 

Obviously, when you have a unit of work that might take a number of days (placing and fulfilling an order, approving a mortgage, etc.), you don’t want to lock tables during that time, so in such long running transactions you manage failures differently, and what you do in cqrs with the Event Store is to treat every unit of work as if it were a long running transaction, no matter the time span involved.  This is how businesses typically work.

An advantage of this way of processing and storing events is that you get an automatic audit log.

Auditing

Anyone who has worked as a DBA has probably had to deal with trying to determine the cause of transaction failures.  An ACID style unit of work was attempted, and something failed, so the transaction rolled back.  With certain tools, you can read the transaction log of your RDBMS, but transaction logs are transient to a certain extent, as they eventually get truncated.  What happens when you need to know why a transaction failed a month ago? 

Your application can or will typically log a lot of information as it is happening (LogHelper.LogInformation anyone?), but this is additional code that you need to write, and doesn’t necessarily cover every method.  Just last week, I needed to add additional logging code to a certain area of an application because I knew something was failing, but I didn’t know why (I still don’t know why as a matter of fact, it’s under investigation). 

If *every* important action in your application is logged as an event, then you automatically have an audit log that can tell you what, when and why things happened (obviously, if you have any sort of system failure that prevents events from being produced or logged, having an Event Store can’t help you here, but you can’t solve everything). 

Even better, if you design your applications correctly, you can replay them.

Replaying Events

Suppose that you start from scratch with an Event Store.  The Event Store will log the creation of your domain entities, and then every event that either directly affected them and caused them to change, or logged the rejection of a command that wanted to change them.  With this, you have (conceptually at least) the ability to relive the life of your entities at every step.

Outside of this series, I talked about a bug that I was trying to troubleshoot, where I made a fundamental mistake of misreading what the logging system was telling me.  Having an Event Store doesn’t automatically prevent an end user (me, in this case) from misreading information, PEBKAC still rules.  But, having the entire history of the life of your domain entities can help tremendously in this area.  Having task specific commands that produce explicit events of their acceptance or rejection gives you a trail that you can examine step by step when you need to.

Over and above the diagnostic benefit of having this trail when trying to troubleshoot an issue, you also have the positive benefit of having a source for data mining.

Data Mining

As I mentioned previously, if a command is issued against your domain to reduce a customer’s credit rating due to a late payment, that command could be accepted or rejected, and the appropriate event is recorded.

Because you have this permanent record of the events that were produced from commands being processed by your domain, you have a rich source of information that exists for present and future business analysis.  You can never know for sure in advance what information will be of value to the business at any given time.  It might not occur to you that knowing credit rating reducing commands are being rejected is that important.  In hindsight, once something is important, you often wish you had more information to analyze.  The Event Store gives you this.  You know what was processed and when and whether it was accepted or not, and why it was rejected. 

No system or architecture is perfect, and cqrs isn’t.  You still have to do a lot of work, and you still need to design it wisely.  But, by tracking the history of your domain, step by step, as an architectural feature, you automatically have a wealth of information that you might not have otherwise.

Event Sourcing

Event Sourcing is a big topic, and as always, a good source of information is available from reviewing Martin Fowler’s discussion of the topic.  But, if you take a look at Mark’s picture at the top of the post, you should notice that not only are there arrows that go from the domain to the Event Store, but also some that come from the Event Store to the domain.  I’m going to try and explain why this is, but I am far from an expert here (the reason why this series is called ‘cqrs for dummies’ is that I’m one of the dummies), so you should examine what actual experts like Greg Young have to say as well.

Suppose you want to know what the current state of a customer is in your system.  In a typical RDBMS, you query the Customer table based on their ID, and it gives you that information straight away.

With Event Sourcing and an Event Store, it is a little different.  You query the Event Store for any events for the customer with their ID, from the end (most recent) of it, and work your way backwards until you get to the creation event for the customer, and then replay those events forward from that event, applying all of the events that happened after, and you end up with the current state.

On the face of it, this seems like it would be pretty inefficient in comparison, and, to a certain extent, it is.  One way of improving the efficiency of the process is to build in the notion of a snapshot.  Given a certain number of events for a domain entity, you store a snapshot of the current state of that entity in the Event Store, and then when you need to get the current state of it, you only need to work your way backwards in the Event Store to the most recent snapshot (or the creation event) before replaying forward. 
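A sketch of the replay-with-snapshot idea, reusing the toy store and the credit rating events from earlier (CustomerSnapshot and the rebuild logic are invented for illustration; in a real system the creation event would establish the starting values):

using System;
using System.Linq;

public class CustomerSnapshot
{
    public Guid CustomerId { get; set; }
    public int CreditRating { get; set; }
}

public static class CustomerReplayer
{
    public static int CurrentCreditRating(Guid customerId, IEventStore store)
    {
        var events = store.GetStream(customerId).ToList();

        // Work backwards only as far as the most recent snapshot (or the
        // beginning of the stream if there isn't one)...
        var startIndex = events.FindLastIndex(e => e is CustomerSnapshot);
        var rating = startIndex >= 0 ? ((CustomerSnapshot)events[startIndex]).CreditRating : 0;

        // ...then replay forward, applying everything that happened after it.
        foreach (var @event in events.Skip(startIndex + 1))
        {
            var reduced = @event as CustomerCreditRatingReducedEvent;
            if (reduced != null)
                rating -= reduced.Penalty;
            // rejected events are in the store too, but they don't change state
        }

        return rating;
    }
}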

However, one thing to keep in mind is that for a lot of the systems that use cqrs, the current state of an entity is kept in memory once it is needed.  Trading systems that use cqrs might create the entity on demand using the replay model and then keep it in memory during trading hours.  So, while a first call might be technically inefficient, it doesn’t really affect the system negatively during active operation.

One could, of course, keep a separate set of tables of the current state of an entity which is continually updated by each event that enters the Event Store, but there are inefficiencies involved there as well.

There is some indication that using non-RDBMS for an event store when going with the replay model may have significant performance benefits, but I can’t say for sure.

Summation

Building a cqrs-based system that uses an Event Store offers a number of benefits, such as having a perpetual record of the history of the entities in your domain, the ability to replay these events for diagnostic and data mining purposes, and the ability to know when, how, and why your domain has changed.

As always, this isn’t to suggest it is the best way to design a system for every situation.  The tradeoffs involved with designing a system that is different in many ways from traditional RDBMS ACID applications have to be weighed in every case.  But I think there are clear benefits to considering this path.

In the next post, I will talk about the External Event layer, where we go full circle and figure out how the Reporting Store handles the events that allow the query layer to do its job successfully.

posted @ Tuesday, August 10, 2010 8:46 PM | Feedback (2)
The Dark Side of Software Craftsmanship

To get you in the mood, head on over to ElegantCode and take a look at this post.  It is highly likely that you agree with the sentiment of it, and that’s okay.  Generally speaking, you should (generally) agree with the sentiment.  I’ll wait.

Now that you’ve taken a look at it, if you’ve learned anything about ‘advanced’ software development (and here the software craftsmen will cringe that learning certain things should even be considered ‘advanced’ and they have a point), you’ll feel the pain of this:

One thing I have heard expressed in those places is, “software development is not our core competency, therefore recommendations around professional development practices don’t apply to us.”

And all of that is well and good.  But, the post goes on from that, and that’s where it goes terribly wrong.  Consider what follows:

“I don’t know where this absurd line of reasoning comes from, but I know why it is tolerated. It is tolerated because we don’t have the formal structures to hold professionals accountable the way we might hold an electrician, plumber, or physician accountable to being merely competent.”

No, no, and again, no.

To begin with, let’s ask a rhetorical question: have you ever met an incompetent plumber?  Exactly.  There’s nothing about ‘formal structures’ that guarantees, or even enables, competence. 

One only needs to look around at the bazillion certification processes, be they of the Microsoft variety or the Scrum variety or any other variety.  Have any of those solved the issue?  Nope.  Signing a craftsman manifesto won’t make anything better either.  Creating another formal structure won’t really get anywhere (well, it *might* have beneficial side effects, more on this in a minute) either.

It gets worse when you read this:

“We can hold ourselves accountable for professionalism no matter where we work.”

Here we get into danger territory.  “If you aren’t a software craftsman, you aren’t a professional.”  It isn’t stated explicitly here, but definitely implied.

No, no, and again, no.

This is very easily seen when you consider the list of things that (supposedly) imply or signify competence:

“In summary, no matter what type of organization you work for:

  • Yes, you need to use source control.
  • Yes, you need to automate the build.
  • No, you shouldn’t be releasing the assemblies compiled on your machine.
  • Yes, you need to stop writing long methods and pay attention to code complexity.
  • Yes, you need to buy your developers the best tools available.
  • No, you don’t need to write your own logging framework.
  • Yes, you should be practicing test first development.
  • No, continuing to ship known defects is not acceptable.
  • Yes, you should understand who your customer is.”

 

Let’s run through these.

“Yes, you need to use source control”

Okay, it’s hard to argue much about this one.  Let’s go with it.

“Yes, you need to automate the build.”

Maybe, maybe not.  Is automating the build something that will really provide a big benefit for the customer?  The frank answer is that it might not.  It *usually* will but it might not.  Especially when you consider:

“No, you shouldn’t be releasing the assemblies compiled on your machine.”

Again, it depends.  If you come across a situation where you are working to re-design an important system, it might be of almost no importance to do this.  If you are using source control in such a fashion where anyone with access to it can build successfully locally, deploying from your machine isn’t necessarily that important at all.  Given everything else, how important is this?

“Yes, you need to stop writing long methods and pay attention to code complexity.”

As a general rule, it is hard to argue against this, depending on how you enforce it.  In an ideal world, working software with low code complexity is a good thing.  However, working software with high code complexity trumps unwritten and possibly non-working software with low code complexity.  How important is it, given everything else, to do this, especially if it involves re-writing working software?  You should begin to notice the importance of the phrase “given everything else.”

“Yes, you need to buy your developers the best tools available.”

Define ‘best.’  Given that your typical ‘advanced’ developers don’t often agree, for instance, on whether ReSharper is better or worse than CodeRush, it’s hard to say what is being advised here.  Real developers can develop using Notepad anyway (okay, I don’t actually believe that, but you’d be surprised at who might).

“No, you don’t need to write your own logging framework.”

Okay, I’m going to whole-heartedly accept this one.  True story: working at a client where the ‘architect’ has decided to write their own logging framework, and not only to handle immediate needs, but to handle all possible future needs.  I ask him, “Have you heard of YAGNI?”  “What’s that?”  “You aren’t going to need it.”  “Yeah, I do that.”  “So, then you agree that we don’t need to design a custom framework to handle all possible future needs when you don’t know what those needs are, specifically.”  “No, that’s different.”  Right.  Have at it then.

“Yes, you should be practicing test first development.”

Notice the vagueness of ‘test first.’  If by this it means you should either be practicing TDD or some BDD-style thingy (disclaimer: I use SpecFlow as my first option every time I can), no, no, no, no, no and again, no.  There are multitudes of software projects where the only testing you should be doing is integration testing, if even that.  This is especially true in my experience with ETL projects.  Obviously, you want to know that given a particular piece of data, if a transformation from the source to the destination is required, the transformation should be successful.  Duh.  But in general, you should be testing the actual data, using statistical analysis.  Wasting your client’s time doing test-first development when it is inappropriate is wasting your client’s time.

“No, continuing to ship known defects is not acceptable.”

It is perfectly acceptable if that is what your customer wants, which leads to the one thing I whole-heartedly accept above all else:

“Yes, you should understand who your customer is.”

There’s a myth of software development in some circles that you cannot develop successful software unless you follow practice X or Y, where X might mean ‘agile’ or Y might mean ‘SOLID’, and this simply isn’t true, especially when ‘successful’ means ‘profitable.’  I have worked with clients that, for instance, profitably used software that used, forbid the thought, Access.  In the abstract, no one in their right mind would ever *recommend* this as a design.  But given a circumstance where such a system actually existed, thinking this is something that needs to be re-designed is almost as crazy.  Millions and millions of dollars of revenue have been generated using, for instance, big balls of mud built on VB (not to pick on VB specifically, but…yeah, let’s pick on it).  Having worked with one of those ‘too big to fail’ financial institutions, close to billions of dollars of revenue were generated doing BDUF and all other sorts of things that I would not only not personally recommend as a starting point, but which I really didn’t enjoy.  But, close to billions of dollars of revenue were, in fact, generated.

David Starr is the author of the post in question, and a (the?) leader of “The Software Craftsman Cooperative.”  As a capitalist, I applaud the creation of this group (I’m pretty sure I’ve never met David, but I have met Jarod and Jason, and, for what it’s worth, I thought they were outstanding guys) and as a developer, I hope they are successful (in the hopes that there is a rising tide in chargeable rates). But, none of these people are qualified to tell me, or anyone else, what counts as professional software development. 

Yegge said it best:

“As far as I'm concerned, Agile Methodologies are fine for you to use, but if you try to push them on a coworker, it should be treated as an HR violation, just as if you were discriminating against them because they don't believe in Feng Shui.

If it's a manager pushing Agile on a subordinate, it should be treated with extra severity. Reprimand, goes on the employee record, mandatory training, and 3 strikes you're out.”

Replace “Agile” with “Craftsmanship” and you hopefully get the point.  It could very well be beneficial to have HR hiring people think “we should have some of that Craftsman stuff” when making hiring decisions, but it could also be very, very bad.

Later this week (or maybe next week), I will be deploying assemblies from my machine into production.  And I’m fine with that.  Anyone who (without knowing why I, as a professional, made that decision) wants to say I’m not a professional can bite me.

posted @ Monday, August 09, 2010 10:23 PM | Feedback (0)
Please don’t separate Deployment from Development

I didn’t highlight this point, but will do so now.

Something that appears to be typical of a number of organizations is to strictly separate the deployment of software from its development, putting them in different teams.  Different rationales are offered for why this is important: some are ‘organizational’ in nature (‘Legal’ says we need separation of duties), others are ‘conceptual.’  Since there’s usually not much one can do when ‘Legal’ says something, I’ll focus on the latter.

One useful thing that I learned from my Philosophy background is that it is best in the long run to understand a position you disagree with as well as you can, for a number of reasons.  One, once you try to think through why someone would believe the position, you might change your mind.  Two, it allows you to build the best criticism of the position that you can.

It’s a useful thing, but it’s also difficult to do.  It’s a lot easier (and more fun) to just call someone an idiot.

Anyways, as best as I can tell, the ‘conceptual’ rationale for why you would separate the individuals/teams when it comes to development and deployment is something like the following: (making something up) suppose Billy is in charge of the Account Automation process.  He wrote the code, he knows what it does, when it does it, and he knows how to deploy it.

You immediately run into the ‘hit by a bus’ problem: if Billy gets hit by a bus, then you are going to have problems when it comes to changing the Account Automation process.  Additionally, Billy is the one who is aware of any ‘manual steps’ (also often known as ‘hacks’) that are required to get it deployed.

By requiring Billy to put together a detailed description of how the software needs to be deployed so that it can be followed by someone else, he’s (conceptually) more likely to make sure it is as detailed and automated as possible.  This effort (conceptually) allows more people to be involved, thus limiting the ‘bus’ problem.

All of this is well and good.  But here are some of the reasons why the idea is conceptually flawed:

Suppose a deployment fails (and let’s face it, they will from time to time).  Who’s responsible for fixing it?  With a strict separation of duties, it isn’t clear.  Deployments can fail for all sorts of reasons, and it can quickly devolve into a lot of finger pointing, where “who’s responsible” turns into “who’s at fault” and then into “who’s to blame.”  Even if it doesn’t devolve that way, you no longer have a center of responsibility, a clear directive of who should take the lead in resolving the issue.  Ideally, everyone vaguely related gets involved as a team, but if you’ve ever worked in a large organization, you know this ideal situation is rarely achievable (for reasons good and bad).

Because Billy knows the system and how it should be deployed, he knows what to look for if the document he produced ends up leading to a deployment failure.  The deployment team doesn’t necessarily have the expertise to know what to look for (though one would hope their expertise would grow), which leads to a focus on documentation as opposed to problem solving.  (I once had a support team request that I ‘list every possible scenario in which the software could fail, and describe the solution.’  Really?  Every possible scenario?  Like if terrorists nuke the data center?)

As Facebook hit their 500 millionth user, a lot of posts came out, one of which I’ve linked to, and from which I’ll quote extensively:

“None of the previous principles work without operations and engineering teams that work together seamlessly, and can handle problems together as a team. This is much easier said than done, but we have one general principle that's enormously helpful:
The person responsible for something must have control over it.

This seems terribly obvious but it's often not the case. The classic example is someone deploying code that someone else wrote. The person deploying the code feels responsible for it but the person who wrote it actually has control over it. This puts the person deploying the code in a very tough spot - their only options are to refuse to release the code or risk being held responsible for a problem, so they have a strong incentive to say no. On the other side, if the person who wrote the code doesn't feel responsible for whether or not it works, it's pretty likely to not work.

At Facebook we push code to the site every day, and the person who wrote that code is there to take responsibility for it. Seeing something you created be used by half a billion people is awe inspiring and humbling. Seeing it break is even more humbling. The best way we know of to get great software to these 500 million people is to have a person who understands the importance of what they're doing make a good decision about something they understand and control.”

This, of course, isn’t a formal proof of anything, but I tend to agree with it.  Good teams will find a way to produce good results more often than not, but certain structures will help or hinder those efforts, and arbitrarily separating Deployment from Development will more likely hinder them.

posted @ Monday, August 09, 2010 12:22 PM | Feedback (0)
System.Linq.Lookup rocks

Well, I think it’s pretty cool, anyway.

Lookup<TKey, TElement> allows you to have a collection of key-value pairs, but what makes it special is that you can have duplicate keys, which your normal Dictionary or SortedList won’t allow.  This lets you do all sorts of grouping and whatnot, which is very useful.

The only downside is that you can’t new one up; you have to do something like myList.ToLookup(blah, blah, blah) to get one.
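For example, here’s a minimal sketch of what that looks like (the Order class and the sample data are made up for illustration; ToLookup and the indexer are the real API):

using System;
using System.Linq;

class Order
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

class Program
{
    static void Main()
    {
        var orders = new[]
        {
            new Order { Customer = "Alice", Total = 10m },
            new Order { Customer = "Bob",   Total = 25m },
            new Order { Customer = "Alice", Total = 5m }   // duplicate key, and that's fine
        };

        // You can't new up a Lookup; ToLookup builds one for you.
        var totalsByCustomer = orders.ToLookup(o => o.Customer, o => o.Total);

        foreach (var group in totalsByCustomer)
        {
            // Each key maps to every value it was given, e.g. Alice -> 10, 5
            Console.WriteLine(group.Key + ": " + string.Join(", ", group.Select(t => t.ToString()).ToArray()));
        }
    }
}

The other nice bit is that indexing into it (totalsByCustomer["Alice"]) hands back all of the values for that key, and asking for a key that isn’t there just gives you an empty sequence rather than an exception.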

Check it out.

posted @ Friday, August 06, 2010 3:18 PM | Feedback (0)
MEF almost makes sense now

Keeping in mind the fundamental rule about analogies (all analogies partially work, because everything is similar to everything else in some respect, and all analogies partially fail, because everything is dissimilar to everything else in some respect), this one makes sense to me.

By way of Kevin Hoffman and his nameless friend and co-worker, MEF and the Zen of Lego.

Of course, it could be almost completely wrong.
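For whatever it’s worth, here’s a minimal sketch of the Lego-snapping in MEF’s attributed model (the IMessageSender interface and the classes are invented for illustration; [Export], [Import], AssemblyCatalog, and CompositionContainer are the actual MEF pieces):

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IMessageSender
{
    void Send(string message);
}

// An exported part: a brick the container knows how to hand out.
[Export(typeof(IMessageSender))]
public class ConsoleSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine(message);
    }
}

public class Notifier
{
    // An import: a slot the container fills with a matching export.
    [Import]
    public IMessageSender Sender { get; set; }
}

class Program
{
    static void Main()
    {
        var catalog = new AssemblyCatalog(typeof(Program).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            var notifier = new Notifier();
            container.ComposeParts(notifier);   // snaps the pieces together
            notifier.Sender.Send("MEF almost makes sense now");
        }
    }
}

The container plays the role of the kid with the bucket of bricks: it finds every [Export] in the catalog and snaps it into any matching [Import], without either class knowing about the other.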

posted @ Friday, August 06, 2010 11:13 AM | Feedback (0)
Stereophonics – Dakota (Live)

Another pleasant ditty; this is from a concert that was the basis for a live album, so the sound quality is pretty decent.

Wake up call, coffee and juice
Remembering you
What happened to you

I wonder if we'll meet again
Talk about life since then
Talk about why did it end

You made me feel like the one
You made me feel like the one, the one

posted @ Thursday, August 05, 2010 4:47 PM | Feedback (0)
I think this is related to why people don’t like Microsoft Connect

I got an email, related to a Microsoft Connect item, saying it was fixed.

I haven’t the slightest idea if I submitted this.  If I did, it was in August 2006, so it’s a bit late.

posted @ Wednesday, August 04, 2010 6:24 PM | Feedback (0)
The Really Interesting Things Only Show Up in Production

“It's important that the statistics are from the real production machines that are having the problems, when they're having the problems - the really interesting things only show up in production.”


posted @ Tuesday, August 03, 2010 9:37 AM | Feedback (0)