February 2010 Blog Posts
O Canada

[Photo: Team Canada celebrating]

It was like rooting for the Penguins last year.  They couldn’t make it easy.  They stumbled out of the qualifying round, barely beating the Swiss and losing to the US, putting them into a potential run against Russia, Sweden and the US again.

After a somewhat surprisingly easy victory against the Russians, they ended up with a lucky break when Slovakia knocked off Sweden.  And then they nearly blew a three-goal lead, winning only thanks to a save in the final seconds.

Up by one against the US going into the third period, the hockey gods appeared to favor the Americans.  Two posts hit, multiple chances denied, and then with victory in sight, Canada’s continually shaky goaltending looked fatal, as the US tied the game with less than 30 seconds left.

A friend of mine texted me before the OT started: “Sidney Crosby better show up in OT.”  Right before the end, the Canadians gave up a turnover in their end, and with an American poised to shoot, I thought “That’s it.”  So glad to be wrong.  And then, whether it was luck or skill, Crosby showed up.  Big time.

Whether this has any impact on the rest of the NHL season is questionable, but if it does a couple of things seem evident.

The Blackhawks are good.  Real good.  Jonathan Toews was named the top forward in the tournament.  Seabrook was okay, but Duncan Keith was rock solid.  And Patrick Kane was a monster on both ends, with a couple of standout back-checking efforts, including stopping Crosby on a breakaway in the 3rd period of the final that would have put the game away.  The best thing for the rest of the league is that they still don’t have proven goaltending.

Roberto Luongo didn’t lose the gold, but it looked like he might, appearing shaky in every important game.  But he made the saves that counted.  It’s hard to say that a Gold-medal-winning goaltender is still unproven, but I think he is.

For the Penguins, it’s hard to say what will come out of it.  Malkin played well, except against Canada.  Gonchar was solid, but he’s still 57 years old (roughly).  Will they come back with extra fire to make up for the disappointment or suffer a hangover?  Fleury didn’t even play.  Orpik did drill a couple of people nicely, as is his wont.

And then there’s Ovechkin.  Just as in last year’s Game 7 against the Pens, he failed to perform in the biggest game.  My texting buddy and I have a continual ‘debate’ over who is better, Ovechkin or Crosby.  Ovechkin is more exciting, more dynamic.  He’s a powerhouse.  The Caps lead the Eastern Conference largely because of him.  When he’s at his best, there isn’t a player in the world that can match him.

But Crosby wins.  Crosby didn’t dominate, didn’t carry the team.  But when it mattered, Crosby performed, and Canada won.  I’m glad he’s on my team.  He’s 22, and he now has a Stanley Cup championship and a Gold-medal-clinching goal.  If he wasn’t on my team, I’d probably hate the f*(ker.

Back to the regular season of the NHL, and then the playoffs.

Go Pens.

posted @ Sunday, February 28, 2010 8:38 PM | Feedback (1)
cqrs for dummies - an interlude - eventual consistency at Amazon

I'm sure I've posted this before, but there's a really good presentation available on the theoretical underpinnings of Eventual Consistency and how Amazon tackled it.

Some teases:

" Given the worldwide scope of these systems, we use replication techniques ubiquitously to guarantee consistent performance and high availability. Although replication brings us closer to our goals, it cannot achieve them in a perfectly transparent manner."

"This means that there are two choices on what to drop: relaxing consistency will allow the system to remain highly available under the partitionable conditions, whereas making consistency a priority means that under certain conditions the system will not be available. Both options require the client developer to be aware of what the system is offering. If the system emphasizes consistency, the developer has to deal with the fact that the system may not be available to take, for example, a write. If this write fails because of system unavailability, then the developer will have to deal with what to do with the data to be written. If the system emphasizes availability, it may always accept the write, but under certain conditions a read will not reflect the result of a recently completed write."

"Whether or not inconsistencies are acceptable depends on the client application. In all cases the developer needs to be aware that consistency guarantees are provided by the storage systems and need to be taken into account when developing applications."

Good stuff.

posted @ Friday, February 19, 2010 10:39 AM | Feedback (0)
cqrs for dummies – an interlude – cqrs is not an architecture

Papa Greg has returned from his walkabout to post a number of items about cqrs, architecture, event sourcing, and whatnot.  If you find anything in my discussion of this stuff remotely interesting, you really should be reading his stuff first, since, although he didn’t exactly invent any of it (it’s all just things people have been doing before), he did name it and has done a lot of real-world work in making it a lot more sophisticated (not to mention presenting on it around the world), etc. etc. etc.

Anyhoo, in one of them, he emphasizes that cqrs, in and of itself, is not an architecture:

Many people have been getting confused over what CQRS is. They look at CQRS as being an architecture; it is not. CQRS is a very simple pattern that enables many opportunities for architecture that may otherwise not exist. CQRS is not eventual consistency, it is not eventing, it is not messaging, it is not having separated models for reading and writing, nor is it using event sourcing. I want to take a few paragraphs to describe first exactly what CQRS is and then how it relates to other patterns.

Since I’ve been talking about cqrs as it applies to architecture, I suppose a comment or two is in order.

First, it is correct that cqrs is really just a way of writing code that, well, codifies CQS.  In his current post, he talks about services, but cqrs doesn’t really require services either.  If you have separate objects/messages/commands/whatever for reads versus writes, you have cqrs.
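To make that concrete, here’s a minimal C# sketch of what that separation looks like (all of the names here are mine, for illustration; nothing about them is canonical):

using System;
using System.Collections.Generic;

// The write side: a command is its own type, with its own handler.
public class DeactivateInventoryItem
{
    public Guid ItemId;
    public string Reason;
}

public interface IHandleCommand<TCommand>
{
    void Handle(TCommand command);
}

// The read side: queries return flat DTOs and never touch the write model.
public class InventoryItemListDto
{
    public Guid Id;
    public string Name;
}

public interface IInventoryQueries
{
    IList<InventoryItemListDto> GetActiveItems();
}

That’s it: no buses, no events, no separate stores.  Everything past that is opportunity, not requirement.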

But, more interestingly, I can just quote Greg from another recent post of his:

Udi put up a good post (and long) about Command and Query Responsibility Segregation. One point that Udi brought up is that people have tied together Event Sourcing and Command and Query Separation. They are certainly two different patterns…is CQRS itself better off by using Event Sourcing…The answer is a resounding yes…I hope this goes a way to explain why Event Sourcing and CQRS really go hand in hand.

I agree with this as well.  While cqrs is not eventual consistency, eventing, messaging, etc. etc. etc., they all work together well.

So, keep in mind that this series should probably be named “cqrs-based architecture using messaging, eventing, and eventual consistency for dummies”, but marketing doesn’t like that.

In the next substantive post, I hope to talk about the command layer.

posted @ Thursday, February 18, 2010 7:33 AM | Feedback (0)
cqrs for dummies – 1 of N – the query layer

Back to the fun stuff.  Series link here.

As a reminder of what I’m talking about, here’s the picture from Mark:

[Image: Mark’s picture of the CQRS divisions, with numbered sections]

What I’m going to be talking about is section #1 from Mark’s picture, and in particular, I’m going to go over a number of concepts including:

  • The Reporting Store
  • Eventual Consistency
  • You don’t need your Domain
  • Once you have one, why not more?

“Reporting” doesn’t just mean Reporting

One of the first things that I found difficult when learning about CQRS was the use of the term “Reporting.”  Because I come from a SQL background, when I hear the term “Reporting” in IT contexts, I think about reports, e.g. last month’s sales report.  Because of how the term is normally used, I’m not sure if there is a different word that should be used here.  However, especially with all of the stuff I just wrote about traditional reporting, there are a couple of concepts that are starting to make more sense to me, since it turns out that some of the concepts of CQRS are simply things that we’ve been doing already, but refined.

The first thing to keep in mind is this:

  • The Reporting Store (as separated from the Event Store, which is up in section #3 of the picture) is a logical concept, and as such does not have to be physically separated.  Having said that, however, it probably will be.

Most Microsoft shops are already familiar with the idea of a Reporting Store, as they probably already have one in one form or another, be it a replicated version of their main database, or an OLAP store using SSAS (or perhaps some other tool).  In a traditional shop, this is how traditional reporting tends to be done.  You generate/run your reports off of the Reporting Store, which means you don’t tax your main database to do so.

There is nothing particularly cqrs-y about this, but once you accept that you have a separate Reporting Store, with enough ingenuity sparked by genuine need, you think about different ways of using it.  Back in the dot.com heyday, a common problem involved how to generate/cache the storefront, so that you didn’t have to hit the database on every page.  This is, obviously, still a common task, but there are a lot more ways that you can implement this now than there were then.

We chose to generate our site, and generate it off of the replica database.  Basically, we would create the HTML once on special generation servers for every page, and then use MSMQ to push them out to the web farm (there were products that implemented this and caching, but some of them ran six figures IIRC).  There’s nothing magic or special about this, of course, but what I’m hoping to convey is that the notion of a Reporting Store within CQRS isn’t magical or special either.  In fact, if I had been able to tie my previous experience with replication with the idea of what a Reporting Store was, I think it would have been easier to learn.
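For flavor, the push side of that was about this simple (a from-memory sketch using System.Messaging; the server names, queue path, and GeneratedPage type are all made up):

using System.Messaging;

// Sketch: push a freshly generated HTML page out to each web server's queue.
public class GeneratedPage
{
    public string Path;
    public string Html;
}

public class PagePusher
{
    private readonly string[] _webServers = { "web01", "web02", "web03" };

    public void Push(GeneratedPage page)
    {
        foreach (var server in _webServers)
        {
            using (var queue = new MessageQueue(
                string.Format(@"FormatName:DIRECT=OS:{0}\Private$\pages", server)))
            {
                // The default XML formatter serializes the page object.
                queue.Send(page, page.Path);
            }
        }
    }
}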

Does this mean that using SQL Server Replication is the same thing as implementing CQRS?  Of course not.  For one thing, it doesn’t really matter whether you set up a litmus test of what counts as ‘really’ implementing CQRS, but if there were one, there would be differences.  As I’ll try to explain in a bit, the theory behind CQRS provides a general benefit, a theoretical construct, that is of value in itself and goes beyond a particular technical implementation like Replication.

Let me emphasize:

  • I am *not* saying that SQL Server Replication is the same thing as CQRS.  It is a technology that *could* be used as part of an implementation of CQRS, but it is a separate thing.
  • Some of the *concepts* of CQRS are similar to concepts we have been using for quite a while, such as pulling data from a Reporting Store that is separate from the main database/Event Store.

When is it okay to use something like Replication?  Finding an answer to that can provide us with the beneficial theoretical construct I just mentioned, and answering that question depends on understanding and applying the notion of Eventual Consistency.

It will get there eventually

If you go to a business user who works with, e.g., the rolling last 30 day sales report, and ask them if it is okay to use stale data, they probably won’t give you an affirmative answer.  But if you ask them essentially the same question in a different way, they probably will.

Suppose they have a morning status meeting with the head of the Marketing Department to go over the rolling last 30 day sales report, and, as always happens, they print this out in multiple copies (the myth of the paperless office is why I don’t believe in any ‘revolutionary’ movements like NoSQL, but I digress) and take it to the meeting.  Suppose replication failed at 10 PM EST the previous night, and so whatever sales might have occurred in that small timeframe are therefore missed.  Does this invalidate the report?

The answer you will get is something along the lines of “Not really.”  In most cases, you don’t have enough sales in that small timeframe overnight so it doesn’t matter that much, but “let me know when replication has caught up so that I can regenerate the report, just to be sure.”

That is the gist of Eventual Consistency.  It is acceptable for there to be a gap between the ‘real time’ data, and the data that is viewed in some other context.  Once you find out that it is acceptable for there to be a gap, then the next step is to find out how big of a gap is acceptable.

Suppose the business user is looking at the current day sales report.  If he is looking at it at 2 PM EST, and replication has been down for 4 hours, that might then be unacceptable.  ‘Eventual’ doesn’t mean that next week is acceptable.  But suppose the business user prints out his report for his daily afternoon status meeting.  Between the time he prints out his report and when it is viewed by the head of Marketing, there may have been additional sales.  That is acceptable.  It is accepted that between the time the report is printed and the time it is looked at 15 minutes later, it might be slightly outdated.
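If you want to automate the “let me know when replication has caught up” part, it can be as small as this sketch (the dbo.ReplicationStatus table and the 15-minute window are assumptions; substitute whatever latency measure your setup actually exposes):

using System.Data.SqlClient;

public static class StalenessCheck
{
    // Sketch: is the replica within the gap the business agreed to?
    // The dbo.ReplicationStatus table and the 15-minute window are
    // assumptions.  Assumes an already-open connection to the replica.
    public static bool ReplicaIsFreshEnough(SqlConnection replicaConnection)
    {
        using (var cmd = new SqlCommand(
            "SELECT DATEDIFF(MINUTE, MAX(LastSyncUtc), GETUTCDATE()) " +
            "FROM dbo.ReplicationStatus", replicaConnection))
        {
            return (int)cmd.ExecuteScalar() <= 15;
        }
    }
}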

Once you have established that it is okay for there to be a gap between the actual “this is the value at this exact moment” data and what is viewed by the end ‘user’ (the ‘user’ could actually be another system), then you can start to think of ways of using your Reporting Store for other things.

Whether these ways are valid will depend on the context.  What does your business do, who are your end users, what data will they commonly be seeing, and how will they be acting on it?

In my mind, CQRS offers a theoretical construct to help us here:

  • Anything that doesn’t involve a command is a prime candidate for acceptable Eventual Consistency.  Anything that involves a command may be a candidate, if the result of the command doesn’t need to be immediately evident.

Now, this should be considered only as a starting point, and it certainly doesn’t answer the contextual questions that it needs to answer, but it can help.

For instance, contrast a desired difference between an order review page and a product list page.  When a customer presses submit on the order review page, it is probably the case that you don’t want to immediately show an order confirmation page without knowing the order went through (though you *might* want to).  On the other hand, when a customer goes to a product list page, it probably doesn’t matter if the page is a few minutes old (though it *might* matter).

The points I want to emphasize here include:

  • The particular examples I’ve given don’t really matter; instead, examine the context.
  • You don’t absolutely *have* to implement Eventual Consistency to consider having a separate Reporting Store to be beneficial.

This second point will become more evident when talking about section #4 of Mark’s picture (though I will touch on some of it below), but a brief note is important here.  In a thread on the DDD mailing list, Greg has emphasized that you should start off without Eventual Consistency, and then work your way towards it as the need arises.  This is common sense, and could be considered a simple application of YAGNI (though YAGNI is unfortunately too often just used as an excuse).  Once you appreciate the concept of Eventual Consistency, it’s an easy temptation to think of all the places where you could possibly implement it without a clear understanding of the drawbacks (and there are always drawbacks).

When you query your Reporting Store, you can ignore your domain

More specifically, if you need to query your reporting store, you don’t need to go through your domain model, and as a matter of fact, it would probably be a bad idea to do so.  To paraphrase a comment from Udi, why should data come across 5 layers through 3 model transformations when all you need to do is populate a screen?

Typically, our end user screens will contain information from multiple entities.  A typical pattern is to find the parent entity you need, load all of the relevant child entities, and then pass that entity into a mapper, which then produces a DTO with a flattened representation, which is then passed back to your screen and bound to it somehow.

Skip it.  Query your reporting store and get a DTO with that flattened representation immediately.
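In code, that ‘layer’ can be about this thin (a sketch; the dbo.ProductListView table is an assumption, shaped like the ‘table per ViewModel’ idea listed just below):

using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch of a thin query layer: straight from the Reporting Store to a
// flattened DTO.  No domain model, no mappers.
public class ProductListDto
{
    public int ProductId;
    public string Name;
    public decimal Price;
}

public class ProductQueries
{
    private readonly string _reportingStoreConnectionString;

    public ProductQueries(string connectionString)
    {
        _reportingStoreConnectionString = connectionString;
    }

    public IList<ProductListDto> GetProductList()
    {
        var results = new List<ProductListDto>();
        using (var conn = new SqlConnection(_reportingStoreConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT ProductId, Name, Price FROM dbo.ProductListView", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    results.Add(new ProductListDto
                    {
                        ProductId = reader.GetInt32(0),
                        Name = reader.GetString(1),
                        Price = reader.GetDecimal(2)
                    });
                }
            }
        }
        return results;
    }
}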

Does this mean you should start rooting around in your code and eliminate any reference you have to AutoMapper?  Of course not.  But once you start to think of how you can skip going through your Domain Model for queries, some other options open up:

  • Put a stored procedure on top of your Reporting Store to return a ViewModel per query.
  • Transform the data from your main database/Event Store to your Reporting Store so that you have a table per ViewModel that you can do a simple select from.
  • Query off of an OLAP store to do the same.

And so on and so forth.  The possibilities aren’t endless, and none of them should be pursued without thought, but this does open up a different avenue for you.

Can you have more than one Reporting Store?

Once you start to think about how to use a Reporting Store with an eye towards Eventual Consistency, even more possibilities open up.

To go back to the dot.com example I gave previously, we used MSMQ to push individual page updates across to our entire web farm.  It was, given the day and our abilities, a bit crude.  At times, a particular server might process individual pages more slowly than others.  From an operational perspective, it worked well enough that we lived with it.  A monitoring server could notice that a particular web server was slow, and pull it out of active duty.  But for the most part, on most days, almost any updated page would hit each server at about the same time.

To think of a possible CQRS implementation of the same idea, why not have a Reporting Store on each web server that subscribes to events being published out of your domain?  Going back to the simplistic product list page example I mentioned previously, imagine having a SQL Server Express instance on each web server which could process those events.  If it is acceptable in the context of your environment to have Eventual Consistency here, and if you have a robust enough environment to be able to process these events evenly (and if an environment was robust enough for 1998, it surely can be today with more advanced service bus technology), then this opens up an avenue for immediate horizontal scalability of your ‘query-facing’ infrastructure.  As your traffic increases, add another web server with its own Reporting Store.  If you have a limited number of processes that utilize Eventual Consistency (think back to Greg’s emphasis on starting slow here), then you have a limited number of events that are subscribed to by larger and larger numbers of machines.
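A sketch of what one of those subscribers might look like on each web server (the ProductPriceChanged event and dbo.ProductListView table are illustrative names; the wiring that delivers the event is whatever your service bus of choice provides):

using System.Data.SqlClient;

// Sketch: each web server runs a handler like this against its own
// local SQL Server Express read store.
public class ProductPriceChanged
{
    public int ProductId;
    public decimal NewPrice;
}

public class ProductPriceChangedHandler
{
    private readonly string _localReadStoreConnectionString;

    public ProductPriceChangedHandler(string connectionString)
    {
        _localReadStoreConnectionString = connectionString;
    }

    public void Handle(ProductPriceChanged evt)
    {
        using (var conn = new SqlConnection(_localReadStoreConnectionString))
        using (var cmd = new SqlCommand(
            "UPDATE dbo.ProductListView SET Price = @price WHERE ProductId = @id",
            conn))
        {
            cmd.Parameters.AddWithValue("@price", evt.NewPrice);
            cmd.Parameters.AddWithValue("@id", evt.ProductId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}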

If you think about it, this is a different way of achieving the caching that you might already be doing today, but from a central architectural perspective.  Once again, I don’t think anyone who has either used or considered CQRS is suggesting you start to rip out and replace memcached or Velocity here.  But you might think about ways to fit memcached or Velocity into CQRS, because it offers a general, scalable set of architectural patterns.

When and why might you not want to do any of this?

I personally find the notion of Eventual Consistency, and that of a separate Query layer that skips your Domain model, to be compelling.  It ties together concepts that I have already been familiar with into a general architectural model.  Having said that, those concepts that I was already familiar with had drawbacks, and CQRS doesn’t magically solve them.

From previous posts that I’ve made, a familiar reader will know that I have pretty extensive experience with Operations, and all that might entail.  In particular, I believe pretty strongly in what I’m going to vaguely call here ‘planning for expected catastrophe.’

Start with SQL Server Replication as a base technology: it sometimes fails.  Sometimes it fails easily (an agent stops processing transactions, which merely requires a ‘right-click restart’, and might only take a few minutes to fix if your monitoring is good), and sometimes it fails hard (the entire Replica has become invalid, and must be recreated from scratch, which can take hours to accomplish).

Even though the technology of today is light-years ahead of what we had even 10 years ago, planning for ‘failing hard’ is still something that I think has to be central to planning software.  If your Reporting Store is suddenly unavailable, what can you do?  What we did with our relatively crude system was build in a switch (more or less) that let us immediately go back to processing off of our main database.  We would still generate the site if we could, but even there, we had an emergency ‘oh, good Lord’ switch that would allow our site to skip generation altogether, and hope we had enough hardware to weather the load until we could fix the Reporting Store.  Obviously, if both the Reporting Store and the main database went down and our off-site log shipping failed… well, at that point we might be polishing up resumes anyway.  Some catastrophes can’t be recovered from.

Another, more basic, reason why you might not want to do any of this is that it does require a certain amount of sophistication, and probably a larger amount of faith that it will work.  I don’t think a lot of advanced developers will be turned off here, and a good case study of how this works out in actual practice is the story of MySpace.  The architecture there was built under certain assumptions, and then once the limit was hit, the architecture was rebuilt.  Something like CQRS, in my opinion, gives you built-in scalability potential, but it isn’t a panacea.

Even if you choose to embrace Eventual Consistency and building a Query layer, there is another thing to keep in mind.  Look at the picture again:

[Image: Mark’s picture of the CQRS divisions, with numbered sections]

Pay attention to that little line from the thin data layer in section #1 that points back to the services box in section #4.  When push comes to shove, sometimes it is okay to default back to calling into your Domain.  If you took Greg’s cautionary message to heart, you could start off by building a Query layer that does almost the opposite of what I’ve been describing.  All queries ignore the Reporting Store unless and until the Reporting Store ‘proves itself’ within your context, and then you start pointing them there accordingly.  Given my experience, you should probably never need to go to this extreme, but it is there for you if you need it.
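In code, that ‘prove itself’ approach can be as blunt as a switch between two implementations of the same query interface (a sketch; all the names are mine, and how the flag gets flipped, via config or an ops console, is up to you):

using System.Collections.Generic;

// One query interface, two implementations: one backed by the Reporting
// Store, one that falls back to calling into the Domain.
public interface IProductQueries
{
    IList<string> GetProductNames();
}

public class QueryRouter
{
    private readonly IProductQueries _reportingStore;
    private readonly IProductQueries _domainBacked;
    private volatile bool _trustReportingStore; // the 'oh, good Lord' switch

    public QueryRouter(IProductQueries reportingStore, IProductQueries domainBacked)
    {
        _reportingStore = reportingStore;
        _domainBacked = domainBacked;
    }

    // Flip as the Reporting Store proves itself (or fails hard).
    public void TrustReportingStore(bool trusted)
    {
        _trustReportingStore = trusted;
    }

    public IProductQueries Current
    {
        get { return _trustReportingStore ? _reportingStore : _domainBacked; }
    }
}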

Why you should consider doing at least some of this anyway

Suppose you have an application that you hope will need to scale, but you don’t know that you need it today.  What we did in the past, and what, e.g., MySpace did, was build to the scalability you knew you needed today, and then when you hit that limit, you punted.  Though I don’t think I’ve done as good a job as I could have, by any stretch of the imagination, I think that CQRS offers an architecture that lets you build to match the scalability you have today, and then easily expand it.  Your query layer can hit a Reporting Store that, as implemented, simply is your main database/Event Store.  You can code your code and architect your architecture as if all of these pieces were physically separate, since you only need to worry about it at the logical level.
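Concretely, that logical/physical distinction can start as nothing more than two connection strings that happen to point at the same place (a sketch; server and database names made up):

// Sketch: the Reporting Store is a logical concept.  Today both point
// at the same physical database; later, repoint ReportingStore at a
// replica without touching the query layer code.
public static class ConnectionStrings
{
    public const string MainStore =
        "Server=db01;Database=Shop;Integrated Security=SSPI;";

    public const string ReportingStore =
        "Server=db01;Database=Shop;Integrated Security=SSPI;";
}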

At a fundamental level, building a CQRS-style query layer allows you to logically segment your code between queries and commands.  Which leads to the next topic, the command layer, the topic of the next post in this series.

posted @ Tuesday, February 09, 2010 11:09 PM | Feedback (4)
Blog Comments or “Saint jdn – fighting Those Who Cared”

I have to explain the new tag line.

Updated: the new tag line was fine for a day.  Back to what it has always been.

In the meantime, don’t forget the first rule of the Blogosphere: opinions are like assholes.  Everyone that has one, is one.

Because of the weird phenomenon of people in Eastern Europe posting spam links that advertise porn or poker or whatever, I have moderation turned on for the site.  It’s annoying because almost no one reads this thing and almost no one comments on it (except maybe to say “DOH!  I did that too!”), but if I don’t have moderation turned on, once one test spam comment gets through, hundreds a day come in, and it’s just a pain in the ass to deal with.  Right now, the big thing appears to be selling term papers to college students who don’t want to actually do the work themselves.  One gets submitted every couple of days, which I then kill.

Anyway, other than spam, I’ve never deleted a comment for any other reason, and except for something that was just blatantly racist or something, I don’t know what I would delete.  Maybe completely off-topic comments about politics or something.

I have this general policy for a reason, and one that ties back to the ‘birth’ of this piece of crap blog.

Back when Scott Bellware was blogging on CodeBetter, he was going on in his general humble () way about something he said or did, and I submitted a generic snarky response commenting on how nice it must be to be superior to everyone else.  Which, of course, he deleted.  And then he wrote a post about it (I think it was titled ‘BlogCoward’).

Anyone who’s read this blog or, well, ever met me, knows that I can have somewhat of, uh…an aggressive personality.  (Translation: “I’m kind of a dick”….what do you mean, kind of?  “Fine.”).

Naturally then, I commented about it.  I don’t remember if he deleted the first one, but eventually, one stayed and general frivolity ensued.  “Who are you jdn?  Are you just a troll?”  Which eventually led to my “My name is John Nuechterlein, I got a Ph.D. at the age of 25, I’m a good cook, I play a mean guitar, and I’m a snazzy dresser” comment, and there we go.  (I’m nowhere near being a snazzy dresser; the rest is fairly accurate… but I digress).

From all of that, I have what I might call the ‘pot meet kettle’ moderation policy.  It would be rather cheezy for me to just delete comments if I didn’t agree with what was being said, or if the comments were somewhat… ’aggressive’.  This was what was so funny about Bellware’s old blog: he’d post really rude and obnoxious comments about everything and everyone, but if anyone posted the slightest thing critical of him, he’d get all offended and wussy.  I’d link to examples (especially ones that didn’t involve me) but he deleted all of his old posts when he split from CodeBetter.

If you are going to have a blog and allow comments and say provocative things, don’t be a loser and delete stuff without a reason.  That’s true blog cowardliness…or something like that (i guess my attitude also stems from being on USENET back in the day, as the things that people call trolling today ain’t nothing like then.  There’s nary an H. West amongst them.  Not even a Plain and Simple Cronan.  Moment of silence for Cronan, rest his soul with God………….thank you.  The world misses that guy, and doesn’t even know it.  but i digress).

So, the new tagline came from a comment that Rob Conery made on his blog to a comment I made before he deleted the whole exchange and banned me from the commenting system altogether.  Burning bridge?  What’s that?

true story digression: after I graduated from the University of Miami with a Ph.D. in Philosophy at the age of 25 (Hi Jeremy, Hi Rob!), I stayed around for a year or two before finally leaving the hellhole that is South Beach (City motto: It’s a great place to visit, but you wouldn’t want to live here), and so during that time I was still around the Department for meetings, colloquiums, parties, etc.  Anyway, for a few months I dated one of the graduate students in the program, and some of my former classmates and colleagues asked her what it was like to date this jdn guy.  When she told them that I was very sweet and kind, the unanimous reaction was pretty much that she couldn’t have possibly understood the question properly, or she was actually dating someone else.  I consider the time when she told me about this one of the highpoints of my life, if only because it was so funny…lol, sorry, I digress.

Rob’s been on a kick about getting rid of relational databases and using NoSQL type databases (which are entirely different topics, which he doesn’t get), and supplying a lot of useful code, all of which is good.  He seems to think that if he does something good in one area, it means he’s excused for harmful actions in other places.

The ultimate problem is that advocating getting rid of relational databases would make this industry worse on orders of magnitude.  He doesn’t like to hear that, so he deletes comments.

This has always been the problem with Alt.NET.  Really smart people advocating really stupid business practices, e.g. all of the good that could come from examining NoSQL possibilities drowned out by dumb ideas that you have to get rid of relational databases. 

Anyway, Rob was making some generally ignorant comments and so I posted the following:

"In the 24 years that I’ve been doing this, I’ve changed a column name on a DB precisely twice"

So, in other words, you don't have a lot of experience in this area.

Rob really didn’t like this, I guess.  What I said was accurate (if you’ve ever worked on a DB system with hundreds of tables worked on by dozens of people over 5+ years, changing column names is pretty common.  Not every day common.  But common.).

Oh, he didn’t like this at all.  Because he’s too lazy, err, because he uses Disqus to handle comments to his blog, I got the full response in the email that gets sent out.  It was a brilliant rant.  I wish he had had the guts to keep it online, but it included the following:

“You, my friend, are the smartest of them all. You see me for what I am - a sham. And when I spend 5 hours of my Saturday trying to concoct yet another Lame Blog Post to try and answer the Good People of the world (whom I've completely fooled) - you're there to call me out. You should be commended. No you should be Sainted. Saint JDN - savior of the Geeks. The guy who understood what no one else did and saved the masses from the tyranny of Those Who Cared.”

I love this.  I really do.  The sarcasm is awesome.  Rob has never been able to handle challenges to his positions, so he resorts to this sort of thing.   Brilliant.

He didn’t actually get around to banning me from his site until my follow up comment:

“Why do you insist on things like:

- copying the points made by Udi and Greg and others, but without attributing them at any point, as if you were an original source on any of this (which you aren't, you are simply repeating what they have said, for the most part).

- thinking that your criticisms about relational databases actually relate to them, since your critiques of them seem irrelevant to how they are actually used in the real world”

That did it for him.  As the ultimate Gloryhound, he likes to post stuff where he rips off material from other people and pretend that he was the source.  When I post about cqrs, I make it clear that I am building on the work of others.  Rob thinks it is okay to plagiarize.   Good for him.  Derik has been doing with Dimecasts what Rob is doing with Tekpub, except Derik doesn’t charge for it (Dimecast official motto: “learn something new in 10 minutes or less, average running time of episodes is 12 minutes”).  Good for him.  As I said to him in an email, Tekpub is as popular as it is because people like Ayende are part of it.  It’s not like anyone thinks his blog series is real world code.

As I mentioned to him in my response to his email, I have an open invitation to have a Skype session to go over all of this.  I’d be fine with recording it so that everyone could hear it and come to their own conclusions.  He’s too scared to do that.

From his last email to me, he seems to think that I hate him or that I have a lot of anger towards him.  He’s a blogger guy.  So was Bellware.  If I don’t hate Scott (who has actually blogged a bunch of good stuff in the last few days since abandoning Twitter), why would I hate anyone?  This is all just talking about code.  I think Rob is killing the message of the advantages of using NoSQL stuff with his silly and ignorant comments about relational databases.  Not surprisingly, he has a different opinion about it.  Okay, so what?  We differ about that.  I’m willing to talk about it in any open forum he wants.  Like SB, he runs away from that.  No problem.

Summation

If you have a blog, and you allow comments, be a man about it and allow comments that disagree with you.  You aren’t as smart as the people who disagree with you, and the people who disagree with you aren’t as smart as you either.  I think that makes sense.  Besides deleting whatever comments I had on his blog, he’s deleted a bunch of other things as well (there’s some guy named Eric that really riles him up…anybody know who this guy is?), and left in all the stuff where people thank him for what a great guy he is.  Which is okay.  It’s his blog, he can do what he wants with it.

And remember the first rule about the Blogosphere: opinions are like assholes.  Everyone who has one, is one. 

Deal with it.

posted @ Sunday, February 07, 2010 9:28 PM | Feedback (5)
It’s OK to do Reporting off of a RDBMS

Well, that’s a little misleading.  It’s OK to do Reporting off of a RDBMS as long as you do it right, and you should consider other options before committing to it. 

note: I’m using “Reporting” here in the traditional sense, not in the cqrs sense where pretty much anything that doesn’t involve a command is called “Reporting.”  Also, since I mostly know SQL Server, that’s what I’m going to be discussing here.  Also, yes, I know I’m glossing over a hell of a lot of stuff here.

The ‘Problem’

Suppose you have your traditional transactional system (it could be an eCommerce store, trading system, whatever), designed and optimized to handle inserts into it.  Indexes are aimed at preventing locking, data files (especially the transaction log) are located in different places to minimize hot spots and maximize I/O (SAN technology is pretty amazing these days), code is written correctly so that minimal numbers of query plans are generated which are then maximally used, yada yada yada.

Then along comes Sally Business User, who wants to write a report that gets back whatever data she’s all hot to get information on, and happens to construct the query in such a way that it joins ten tables, all of which get locked, and which unfortunately returns the Cartesian Product involving whatever table you have with the most rows.  Needless to say, the DB locks up and becomes unavailable, requiring a reboot, much gnashing of teeth, yada yada yada.

Of course, if your users are idiots, bad things can happen

But, of course, this is a straw man presentation.  Anybody can come up with stupid scenarios that don’t really address the pros and cons of using an RDBMS for reporting (or using one at all).  Instead, let’s take a closer look at why reporting off of an RDBMS can be problematic and how these problems can be addressed.

Joins can be costly

As a general rule, relational theory says that normalization is good (though this can be taken to an extreme…I once worked on a system where it took something like 8 joins to get a person’s cell phone number, but I digress).  This tends to lead to a proliferation of tables.  This means that when you want to read back related data, you have to join between a larger number of tables than in a denormalized system, and this can be a bad thing for a number of reasons.

A surprisingly large number of developers don’t really understand as much about transaction isolation levels as they should, and oftentimes don’t even know how to write their queries with “(nolock)” properly applied, which can lead to quite a lot of table locking.

The overhead of the join conditions themselves (which, BTW, can also use “(nolock)”) isn’t that much, assuming you have proper indexes.  But additional conditions in the where clause add up, and can lead to very inefficient and costly execution plans if there aren’t proper indexes, if they are in the wrong order, or if they aren’t sargable in the first place.
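For what it’s worth, here’s roughly what that looks like from ADO.NET (a sketch; the table and column names are made up, and remember that (nolock) means READ UNCOMMITTED, so only use it where the business has accepted reading uncommitted rows):

using System.Data.SqlClient;

public static class ReportQueries
{
    // Sketch: a 30-day sales query that reads without taking shared locks.
    public static SqlCommand BuildSalesReportCommand(SqlConnection conn)
    {
        return new SqlCommand(
            @"SELECT o.OrderId, o.OrderDate, oi.Quantity, oi.Price
              FROM dbo.Orders o WITH (NOLOCK)
              JOIN dbo.OrderItems oi WITH (NOLOCK) ON oi.OrderId = o.OrderId
              WHERE o.OrderDate >= DATEADD(DAY, -30, GETDATE())", conn);
    }
}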

Aggregation can be costly

One of the greatest managers I ever worked with didn’t really like relational theory or SQL (which was somewhat ironic since data services was his department), and would at times dismissively wave his hand and say “blah blah blah, group by, order by, whatever.”  He knew it was important but didn’t really care much about the details (that’s what I was being paid for).

Well, all that ‘group by, order by, whatever’ can also greatly impact your execution plans for obvious reasons.  Simply selecting a group of rows is much different from selecting a group of rows while also finding your sums, maxes, mins that typically show up in a report.

Functions in where clauses are bad

Taking a piece of code and putting it in a UDF seems like a good idea.  The problem is that putting functions in where clauses makes the where clause non-sargable in most cases.  Even worse, if the function is doing any sort of complicated logic itself, it gets called for *every* row in the result set, not just once.
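A quick illustration of the difference (table, column, and function names are made up):

// Sketch: the same filter written both ways.
public static class SargabilityExample
{
    // Bad: the UDF runs for every row, and the index on OrderDate
    // can't be used, so this scans the whole table.
    public const string NonSargable =
        "SELECT OrderId FROM dbo.Orders " +
        "WHERE dbo.ToReportDate(OrderDate) = '2010-02-01'";

    // Good: a bare column compared against a range lets the index seek.
    public const string Sargable =
        "SELECT OrderId FROM dbo.Orders " +
        "WHERE OrderDate >= '20100201' AND OrderDate < '20100202'";
}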

All this is getting in the way of the business anyway

Back in the dot.com heyday when working on eCommerce systems, the general principle was that our DB should ideally only be used when a customer was trying to give us their credit card number.  Obviously, this was an impossible ideal, but it was still a guiding principle (just as “eliminating crime” is an impossible ideal, but still a guiding principle). 

Well, obviously, reporting goes completely against that principle, which means that your scalability is limited by the amount of resources that are used for it, and as we’ve seen, that amount can be disproportionate to what you really want to be doing.

So, what to do?

Stop using an RDBMS for anything

One tactic to take is to “stop the madness” and not use an RDBMS at all.  Learning set theory, query optimization, etc. seems to require a lot of work, and can be never-ending.  An index that is good today might be bad tomorrow.  Statistics get outdated.  And Joe the Developer is going to forget some table hint and lock the order table anyway.

And the experience of the Internet has shown that there seem to be hard limits in just how much data can be stored/processed/managed in an RDBMS.  That’s why Google and Amazon (the obvious examples) don’t center their businesses around them (the white papers on their internal architectures are fascinating to read). 

However, throwing the baby out with the bathwater isn’t generally a good option.  Most of us aren’t going to be building systems that scale to the size of Google or Amazon.  It’s good to dream, but deciding on how to architect a system should be based on realistic expectations as to the needs of the business it is supporting, and for almost all businesses (except maybe at the highest of high-ends and lowest of low-ends), an RDBMS hits the sweet spot.

Moreover, while you certainly don’t have to be a Google or an Amazon to use something other than an RDBMS, making that choice requires learning different ways of doing things, with less supporting literature.  SQL has been around for a long time and in many different environments.  While ‘NoSQL’ style databases have been around for ages, there simply isn’t the high level of ‘Google it’ knowledge that you can rely on to solve any particular practical issue you might be facing.  If you don’t know much about SQL, but know that your system is experiencing blocking, you can pretty quickly learn how to identify the causes of it and devise a solution.

Don’t use an RDBMS for reporting, that’s why God invented OLAP

If you want to limit the resources that are hitting your database for non-‘credit card submission’ purposes, then obviously, you should move those ‘non-critical’ resources somewhere else, and an obvious solution is another database.  In fact, if you really want to do it ‘correctly’, put in place an OLAP system.  Take your highly normalized transactions and ship them off to another system that denormalizes and aggregates them, precisely for data mining and reporting purposes.  The output of such a system will be familiar to anyone who, for instance, uses the Pivot function in Excel.  That’s right, your business users, the ones who want the reports in the first place.

With SQL Server, it comes with the product (well, not the Express version, I guess) as Analysis Services, so if you can already afford the cost of the license, you get it at no additional cost.  So, since it was designed precisely to do the reporting that needs to be done, and it comes with the product, why wouldn’t you use it?  You do have to pay for additional hardware (you could run it on the same instance, but that kind of defeats the purpose), but that’s going to be true just about no matter what.  Seems like the obvious answer.

The problem is cost.  Not the cost of the product, but opportunity cost.  Learning OLAP concepts and technologies appears to be something like an order of magnitude more difficult than learning OLTP concepts.  Normalization is easier to understand than star schemas.  T-SQL is an easier language to work with than MDX.  Because of this, it is harder to find people to staff a business that uses it.  Significantly harder.

Is this a definitive argument against OLAP?  Of course not.  In fact, if you are the sort of person who likes data, and who would like a fairly secure, fairly well-paid profession so that you can support and raise a family (in other words, the truly important stuff in life other than this geek crap), I would encourage you to learn this stuff.  Action Pack subscriptions or MSDN subscriptions aren’t free, but they can get you licenses of SQL Server versions that include SSAS.

But, the main reason why OLAP hasn’t taken off as much as one might think (though I guess that is changing over time) is that there is another, cheaper option.

God also invented Replication

Included in every version of SQL Server (though with some limitations), replication essentially allows you to take your database and copy its data, in near real-time and in as close to its existing form as you want, to another database, more or less automatically.

You still need the additional hardware (since replicating to the same server kind of defeats the purpose), but you can keep your better understood schema, write your reports in T-SQL, and offload almost all of those resources from your main database.  There is a slight overhead in setting up and running replication, but it isn’t much, and it improves with each version (with SQL 7, setting up replication caused table locks, so you had to do it at 3 AM, whereas in 2000, it only caused row locks, etc. etc. etc.).  If a bad query hits your replica and locks it up, it won’t affect the main DB (as long as it was set up that way).

Is it perfect?  Nope.  You still have to do things right, you still are querying against a highly normalized database (leading to the common “why is my report timing out” problem), there is overhead involved both in terms of actual CPU as well as human overhead in terms of additional monitoring and support, etc. etc. etc.

But many businesses find that it is a perfectly acceptable solution in many situations.

Relational Databases aren’t going anywhere, neither is reporting off of them

It really is okay to do reporting off an RDBMS.  Despite what people think, they are going to be around for a long, long time, and that’s okay too.  Will non-relational databases grow in the market?  That’s hard to predict, but because of the experience of running systems on the Web, I think it will.  And that’s okay too.  The idea that there’s only one way to architect a system is generally nutty anyway.

posted @ Sunday, February 07, 2010 4:20 PM | Feedback (0)
Melissa McClelland – Brake (live)

A snippet of this song was played on Hockey Night in Canada, related to the death of Maple Leafs’ GM Brian Burke’s son.

Obviously, I don’t know Mr. Burke at all, but condolences and prayers to him and his family.  I’ve never understood how people recover from the death of a child.  It’s something I wish on no one.

Anyway, here’s a beautiful rendition of a beautiful (albeit depressing) song.  Resolution kind of sucks, but there you go.

posted @ Saturday, February 06, 2010 10:17 PM | Feedback (0)
How Toyota Does Software


"The company changed braking system software in January as part of what it called 'constant quality improvements,' but did not say what it would do about vehicles manufactured before then."

I'm waiting for the first post from someone about how the problem is that Toyota didn't follow the Toyota Production System.

posted @ Thursday, February 04, 2010 10:58 AM | Feedback (2)