May 2010 Blog Posts
The Kings of Convenience – Winning a Battle, Losing the War

I used to be able to play fingerstyle guitar pretty well.  Well, at least that’s what I tell myself.

Another gem, with the usual uplifting lyrics I like:

Even though I'll never need her,
even though she's only giving me pain,
I'll be on my knees to feed her,
spend a day to make her smile again

posted @ Monday, May 31, 2010 5:19 PM | Feedback (0)
Without having to write code

I’m going to be zinging Microsoft here, as the most obvious target, but they aren’t the only one.  It’s a general point.

In the most recent MSDN Magazine, there’s an article about how to work with WCF and WF4 (they can’t call it WWF4 for legal reasons), and I have no opinion about the content in general.  But I want to focus on this one thing:

“In this article, I will explain how to combine several features of WCF and WF that were introduced in the .NET Framework 4 to model a long-running, durable and instrumented mortgage-approval process for a real estate company, without having to write code.”

Anyone with any sense will know where I’m going with this.

There is no such thing as ‘without having to write code.’  Code will be written, somewhere, somehow.

I understand the ideal, at least on some level.  Business end users want to be able to get things done.  IT is usually required to get those things done.  If the business end users can get those things done without having to go through IT, the ideal says this is a good thing.

End of trying to understand the ideal.

If something goes wrong with the business end users trying to get things done without having to go through IT, it will ALWAYS be IT that has to fix it.  Indeed, whether it is WF4 or anything else, it is always some IT process that actually gets the things done.  WF4 is a pretty interface on top of code.  Code is always required.  No pretty interface (at least in my lifetime) will ever change this.

While soapbox ranting is always fun, here’s a very real world case of what can happen:

“A pricing error costs Zappos.com sister site 6pm.com more than $1.6 million”

Here’s the fun quote:

“We have a pricing engine that runs and sets prices according to the rules it is given by business owners,” wrote Hsieh. “Unfortunately, the way to input new rules into the current version of our pricing engine requires near-programmer skills to manipulate, and a few symbols were missed in the coding of a new rule, which resulted in items that were sold exclusively on 6pm.com to have a maximum price of $49.95.”

The ‘without having to write code’ folks will argue that the system was flawed, that they shouldn’t have to have ‘near-programmer skills’ (digression: what does that mean exactly?  That they don’t shower a lot?), but what is really flawed is the idea that you can build a system that lets business owners do this sort of thing. 

I mean, I’m not against building systems that allow non-IT people to set rules.  IT people often tend not to understand enough about the business they are supporting (digression: this is a large topic, so I’m avoiding discussing it here), and you want to allow the business users to set fundamental rules, including about pricing.

But you don’t let them do it straight into production.  This is similar to the constant desire to give business end users the ability to query production data through some magical query tool, and then you have situations where your entire production system grinds to a halt because some business end user managed to query the Cartesian product of the transaction table against itself.
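To make that concrete, here’s a contrived C# sketch (made-up data, obviously) of what that runaway query amounts to:

    using System;
    using System.Linq;

    class CartesianDemo
    {
        static void Main()
        {
            // Stand-in for a production transaction table.
            var transactions = Enumerable.Range(1, 100000).ToArray();

            // No join condition, so every row pairs with every row:
            // 100,000 x 100,000 = 10,000,000,000 combinations.
            var runaway = from t1 in transactions
                          from t2 in transactions
                          select new { First = t1, Second = t2 };

            // Merely counting the pairs means enumerating all ten billion of them.
            Console.WriteLine(runaway.LongCount());
        }
    }

Point that at a real transaction table through some magical query tool, and your production system stops being a production system.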

How to avoid these sorts of things

Here’s where I punt a bit, because this is an actual real-world problem with no obvious answer.  How do you free up business end users to implement the real-world needs that they have, without also ‘allowing’ them to completely f%^k things up?

A $1.6 million mistake suggests that this is a question that needs answering.

posted @ Wednesday, May 26, 2010 8:13 PM | Feedback (0)
FeedBurner stats gone wacky

I’ve always felt that SEO was, to use highly technical terminology, a lot of crapola (which is highly annoying to a business partner of mine who knows a lot about the subject), in no small part due to the, to use highly technical terminology, random flakiness of web analytics in general.  It’s not a science, and I’m not sure it’s an art.  But, for better or (probably largely) worse, if you build web sites, you have to deal with it.

For this piece of crap blog, I have some sort of web analytics being recorded in a number of places:

- I use a hideously old version of Subtext, and so it tracks web views and agg views (digression: I have had it on my to-do list to try to upgrade to the latest version, but I just have this feeling that the first attempt (and subsequent x attempts, where x > 3) will go hideously wrong, so I keep putting it off).
- Google Analytics: since they are the major player, I have the JavaScript thing on various pages (though apparently not the RSS page).  However, it looks like Google has released a tool to block some of its own data, so who knows what will happen there.
- FeedBurner: I forget how this is implemented, but for the last year or so, RSS tracking has been through this.
- Clustrmaps: this is the map thingy (technical terminology) on the right nav side.  Forgot how it is implemented.
- SmarterStats: once I moved the blog to DiscountAsp.NET, I got this Silverlight enabled thing with a whole heck of a lot of canned reports and the ability to create a host of others.

The main reason why I have all of these things is to see how they work and how they compare to each other.  Smart people like Ayende use their blog for marketing purposes, but I’m not that smart, and the target audience for my piece of crap blog is some weird combination of people who might be interested in some combination of: general technical posts, cqrs for dummies’ fans, hockey fans, and fans of well written songs that tend to center around the desolation of looking back at failed relationships (if I could fit in left-handed Albanian albinos, it would be the ultimate in niche web sites, but I digress).

An odd thing about comparing all of these sources of web analytics data is that, at times, it is as if they were tracking different sites.  There aren’t always obvious correlations between them.  There are some obvious things I am aware of (like apparently not having the Google Analytics JavaScript on my RSS page), but some other things I don’t know what to make of.

Within the last week or so, my FeedBurner stats have, to use highly technical terminology, ‘gone all ape-shit.’  I got my FeedBurner account a long time ago, but never bothered to implement it until finally deciding it might be a good idea (I think all I did was enable FeedBurner syndication in the Subtext options, and it did whatever it did automatically).  Once I did that, the stats were fairly consistent on a day-to-day basis: a slow but steady rise from ‘almost no traffic whatsoever’ to ‘almost no traffic whatsoever, but more than before,’ with a few bumps up and down, but mostly pretty stable.

Until the last week or so, when, if you were to trust the numbers, the number of subscribers dropped 75% in a single day, gained it all back the next, and has been all over the place since.  Since almost all of the subscribers come through Google Feedfetcher, it’s as if Google changed how it works, or how it records them, on a daily basis.

When I look at what SmarterStats is telling me about access to the RSS page, I don’t see any significant correlation.  It’s almost as if the subscriber number now more closely (but not perfectly) correlates with the reach number, which has always fluctuated wildly.  My Google-fu has only come up with this 3-year-old page to give some indication, but nothing to suggest there’s been any recent change to how FeedBurner works.

Since my blog serves largely to feed my ego and to serve as a dumping ground for things I find interesting, I don’t care about the numbers per se, but it does make me wonder how I would explain this sudden fluctuation to a client that was using FeedBurner.  I really have no idea at this point.

posted @ Wednesday, May 26, 2010 6:45 PM | Feedback (0)
RavenDB, and a brief design philosophy discussion with Ayende

Suppose you design a system that is chock full of interfaces, specifically things like some version of IRepository, where you have the ability to change out your backing store/database more easily.
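To be clear about the kind of seam I mean, here’s a minimal sketch (the names are illustrative, not from any particular codebase):

    using System.Linq;

    // One interface; one implementation per backing store.  Swapping
    // SQL Server for Oracle (or a document database) means swapping
    // the implementation, not the callers.
    public interface IRepository<T> where T : class
    {
        T GetById(string id);
        void Add(T entity);
        void Remove(T entity);
        IQueryable<T> Query();
    }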

A common criticism of this sort of design is that it is unrealistic to think you will actually change your main backing store/database in a production system.  My own experience is that while it does happen (a current client project I am working on involves changing the backing database for a set of applications from SQL Server to Oracle, for instance), it doesn’t happen often, and you oftentimes end up changing your interfaces anyway.

However, the times are a-changing, and the number of situations where you might want to design with this in mind is increasing.  With a central project that I am working on, two obvious ‘innovations’ (they aren’t exactly new, of course) involve NoSQL and what I lovingly like to call “all that Cloud Computing shit.”  My default implementation uses a traditional RDBMS (SQL Server mainly; I’m not really qualified to say much about Oracle…a little, but not much), but I can very easily see the need for this project to remain largely the same while using something like Azure.

Because of this, I’ve been very interested in RavenDB, Ayende Rahien’s long-in-the-making and recently-released-to-RTM document database.  It is a .NET-based solution written (with help) by someone who knows a bit about .NET and writing software (to say the least), and it appears to fit a need quite nicely.  So, I was very interested in looking at using it ‘in anger.’

And then I saw this…

Has Ayende lost his mind?

I happened to come across a post by Rob Ashton and found this gem:

“Let’s say there are 100,000 books in the document store and we invoke the following code:

   Book[] books = documentSession.Query<Book>()
                                 .ToArray();

How many books do you expect there to be in that collection?…

Thankfully RavenDB safeguards against this kind of sloppy code and automatically limits the number of results returned back. Both the .NET client and server have this behaviour built into them and this means you’ll only get (at the moment), 128 objects coming back for the above query….

Currently the server itself will only let you page 1024 objects at one time, so you can’t be lazy and make a call to Take(100000) because it won’t let you.”

I was immediately appalled.  Why?

Why did I think Ayende had lost his mind?

My immediate reaction was that this was akin to breaking “Select *”.  It’s a query engine.  If I issue a query, I expect it to do exactly what I ask it to do.  Alt.NET is dead (long live Alt.NET), but there was this notion that doing Alt.NET type stuff was akin to running with scissors, and it seemed to me that Ayende was abandoning that, and not only abandoning it, but tying a user’s shoelaces together.  If I want a query to return 100,000 results, then that is what I want (since a lot of the work I do is around ETL type stuff, I often query large result sets…and yes, an RDBMS is different from a document database).  Don’t silently limit what I want to do with some magic number.

Think of a trade management system for stock trades.  Why would I want to limit some processing to this magical number?  Yes, it is generally not a good idea in such a situation to pull back 100,000 results, but let *me* decide that.  Why cripple the query engine at its core?  Isn’t it up to me to decide?
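To be fair, the cap doesn’t make large result sets impossible, just explicit.  Here’s a minimal sketch of paging through everything, assuming the client API as shown in Rob’s snippet (Trade is a made-up document type):

    using System.Collections.Generic;
    using System.Linq;

    // Pull the full result set 1024 at a time, 1024 being the
    // server-side per-request ceiling mentioned above.
    const int pageSize = 1024;
    var allTrades = new List<Trade>();
    int page = 0;
    Trade[] batch;
    do
    {
        batch = documentSession.Query<Trade>()
                               .Skip(page * pageSize)
                               .Take(pageSize)
                               .ToArray();
        allTrades.AddRange(batch);
        page++;
    } while (batch.Length == pageSize);

Which, of course, is exactly the sort of ceremony I was objecting to having to write in the first place.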

Rob suggested that I bring it to the Google group, which I did.

In my defense, I intended my question about this to be half-serious/half-humorous, but I don’t use emoticons and I wasn’t really paying a lot of attention, so it didn’t quite come across that way.  You can check the discussion on the Google group.

Paraphrasing roughly, Ayende’s response was basically, “Okay, dude, I did it the way that I did it, if you want to change how it might work, submit a patch already, whiny ‘please stop and smell the flowers’ complaining guy.”

But a patch, in my mind at the time, was treating the symptom, not the disease, the disease being that Ayende was crippling basic functionality of a query engine.  RavenDB (in my mind) was supposed to be an Enterprise-level product (whatever that means) and it seemed to be designed to prevent bad developers from doing bad things, causing friction for the rest of us (who perhaps mistakenly think of themselves as not being bad developers).

And a patch seemed sub-optimal for other reasons.  The patch would require explicitly implementing a setting.  What if a new version came out and someone forgot that they needed to explicitly re-set the setting?  What if, at 3AM when that joyful production issue came in as they often do, no one knew or remembered that this hard-coded ‘cripple’ value was in the core code base?  Sure, you can create an integration test for this, but do you really know the test will be run?
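For concreteness, the integration test I have in mind would look something like this hypothetical NUnit sketch (Trade is a stand-in document type, and it assumes the test database is seeded with well over 128 documents):

    using System.Linq;
    using NUnit.Framework;
    using Raven.Client;

    [TestFixture]
    public class PageSizeGuardTests
    {
        // Stand-in: however your test harness builds its document store.
        private IDocumentStore documentStore;

        [Test]
        public void Query_is_not_silently_capped_at_128_results()
        {
            using (var session = documentStore.OpenSession())
            {
                var results = session.Query<Trade>()
                                     .Take(500)
                                     .ToArray();

                // If an upgrade quietly restores the default limit, this
                // fails in CI instead of at 3AM in production.
                Assert.That(results.Length, Is.GreaterThan(128));
            }
        }
    }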

Patch Submission

digression….I hate git.  Why?  For many reasons.  The list of things I want to learn (and yes, I actually maintain a list) runs into the dozens.  I’m old and slow, and command line tools are dangerous.  And TortoiseGit doesn’t work (try to clone using it and it randomly hangs….why?  Who knows).  And I spent a bit of time getting up to speed on Subversion, only to find all of the kool kidz were dumping Subversion to go to git.  And then I read some kool kid who posted about why git sucked and we should all use something else.  So what do I do?  Spend my very limited resources learning git, only to find out that that one kool kid was right, and I’m going to have to change again?  But I digress.

So, I took the conversation offline to ask my “Git Clueless” questions.  I ended up having to download the source from GitHub and then email Ayende the patch.  Along the way, we talked about the design philosophy, and he kept coming up with similar cases that seemed totally irrelevant to me (TFS will limit query results….so what, TFS is an application on top of the query engine; Azure will throw an error if you try to return too many results and use too much memory….so what, SQL Server has a setting for that as well, and that happens when a developer does bad things; it still isn’t crippling the query engine….etc. etc. etc.).

We agreed to talk about it on Skype.  Figuring out the different time zones took a little bit of time (I actually spent a minute trying to figure out if GMT was zero-based…LOL).

Finally, I took a minute to think about a basic question….Ayende is a smart dude, what led him to do this stuff?

Skype

I’ll paraphrase all of this, but since it was a nice, friendly conversation, I don’t think I’m mis-characterizing anything.  As always, Ayende is one of the most approachable people to talk with.

Keep in mind that RavenDB is a product.  Also keep in mind that Ayende has a ton of experience through his work with NHibernate and his profiler tools, and dealing with client experiences with those things.

His position is that “Sure, bad developers do bad things.  But, so do good developers.”  His experience has been that multiple ‘problems’ have turned out to be simple problems with developers not limiting queries, and that by putting in these hard-coded limits, it prevents those things from occurring.  And since he accepts patches to allow you to explicitly override these hard-coded limits, an end-user has the ability to take control.

So, has Ayende really lost his mind?

From a purist, idealistic point of view, I still cringe at this hard-coded ‘crippling’ of a query engine.  From a practical standpoint, you have to consider (among other things) some fundamental truths about software development.  No matter how true and good your development practices are, you will suffer production issues (and these are really all that matter, in the end…in production, does your software do what it is supposed to do?).  Given that fact, in this situation, you could have ‘mirror’ issues:

- RavenDB doesn’t restrict queries and developers don’t properly analyze their queries, and so everything works fine in testing (where the doc db equivalent of “select *” returns a small enough set of results to be workable), but then 3 months into production, that same select chokes because of memory issues.

- RavenDB does restrict queries, and production issues occur because this restriction is missed, forgotten, or whatever.

From his experience, Ayende chose to limit query results.  As an idealistic end user, I don’t like this, but I do understand better why he did it the way that he did.

I did point out to him that there is nothing in the official documentation that spells out where these limits kick in, which he recognized, so I hope that gets updated at some point.

And in the end, he did readily accept a patch that allows me to use RavenDB in the manner I am most familiar with.
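If you’re curious, the override ends up looking something like the following.  I’m sketching from memory here, so treat the exact class and property names as assumptions rather than gospel:

    // Raising the hard-coded ceiling on an embedded store.  MaxPageSize
    // is where I believe the patched setting ended up; verify against
    // the source before relying on it.
    var store = new EmbeddableDocumentStore { RunInMemory = true };
    store.Configuration.MaxPageSize = 100000;
    store.Initialize();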

Would I recommend RavenDB?

Whether RavenDB actually fits any particular situation is up to any particular person to decide, but after having talked with Ayende, I still plan on giving RavenDB a serious run for its money as the NoSQL variant of a major project I’m working on.

The licensing is still….a work in progress.  At one point, an Enterprise license was priced at $8000.  It is now something like an OEM license for $999 a year, plus a goat (okay, I made that part up).

If you are looking at a NoSQL option in the .NET space, take a look and decide for yourself.  It looks pretty good to me.

posted @ Friday, May 21, 2010 7:44 PM | Feedback (1)
NoSQL in the Wild

Rob Conery has a post about how they run Tekpub using NoSQL that’s a great read.

I’m not qualified to talk about Ruby code, but that part is kind of irrelevant.  He hits all the right notes in talking about why they did the things they did (I could quibble about the cost ‘criticism’, but that’s a business decision), especially in terms of separating reporting.

It’s a good read, check it out.

posted @ Wednesday, May 19, 2010 8:20 PM | Feedback (0)
Postmortem of Pens-Habs series

Halak good, Fleury bad.

The end.

posted @ Wednesday, May 12, 2010 8:29 PM | Feedback (1)
cqrs for dummies – an interlude – more links of interest

There are more than enough link aggregators out there, so I doubt I will do this on a regular basis, but I’ve come across a couple of other links that I find interesting.

Rinat Abdullin – a nice series about how to start with CQRS and what it means.

Agr.CQRS – a framework and sample implementation of CQRS.

Ncqrs – another framework with sample implementations of CQRS.

– a review list talking about Greg’s ever-impending book on the subject.

– another aggregation of DDDD and CQRS links.

posted @ Sunday, May 09, 2010 10:27 PM | Feedback (0)
I’ve seen the future, and it’s a memristor

I’m going to go along with the info in the link and admit that I don’t really fully understand this.

But, if I do understand this, and it is the future, it changes everything.  No more worries, for instance, about whether an RDBMS can scale, since a single instance could load, hold, and process the entire internet.  Sweet.

Take a look here and here.

posted @ Friday, May 07, 2010 8:45 PM | Feedback (0)
Porcupine Tree, 05/01/2010, Bogart’s, Cincinnati, OH

Bogart’s is one of those clubs with a large open space in front of the stage that extends two-thirds of the way back in the building, to where the mixing desk is; behind that is another, smaller space, and the all-important bar.  There’s also a second level above the bar and mixing desk.

The sound quality in the place was highly dependent on where you were standing.  The music was notably muddy, especially if you were towards the back.

Anyway, I was probably in the first third of the open space when I overheard some guys behind me talking, about 5 minutes before Porcupine Tree hit the stage.  I think they were college age (hard to tell, but somewhat (though not entirely) irrelevant to the point).  One of them said, “Dude, I’ve been waiting years for this.”

The guy was from Indy, and either PT hasn’t played there or he couldn’t attend when they did, and he was talking about how, for a lot of concerts, the only choices are to go to Chicago or Cincinnati.  In any event, it was one of those little reminders that life is pretty good when you are seeing your favorite band on their latest tour for the 4th (or was it 5th?) time.  Then again, see below.

Or perhaps I was giving myself forward ‘encouragement’ for the next day, when I drove to Pittsburgh and not entirely unexpectedly watched the Pens lose Game Two against the Habs.

In any event, here goes…

Bogart’s is at 2621 Vine Street in Cincinnati.  My hotel was on the corner of 5th Street and Vine.  I arrived at an awkward time: a little too late to relax and have a good dinner at the steakhouse across the street, a little too early to grab a cab straight to the club.  Since I’m so used to the grid-style street systems of NYC and (sort of) Chicago, I assumed this meant it was a 21-block walk from the hotel to the club.  Silly me.

As I often do, I was catching up on a bunch of emails, blog posts, etc. on my phone and not really paying attention, until at one point I looked up and realized that, as it happens, very shortly out of downtown on Vine Street is a major slum area.  One of those places (if you’ve ever seen them personally) where you can’t imagine that real people have to live, especially in America.  And one of those places where my appearance was noteworthy enough for commentary once or twice.  Let’s just say that it was one of those few times in my life when I was very highly conscious of my very high whiteness and wondering how hard it might be to find a cab.  But I digress.

The opening band was some group called Bigelf.  No, I’ve never heard of them either.  Mr. Bigelf (or whoever the lead singer is) was a sight to see.  He played a mellotron and a Hammond organ in the ‘old style,’ one to either side of him, so that he could play them while facing the crowd.  He wore a top hat like Slash and something like a purple velvet smoking jacket.  Apparently, the band has been around for almost 20 years.  At one point, he introduced a song by saying something like, “You may have heard of this one, it is called NeuroPsychopathic Eye.”  Um, no, haven’t heard of that one, sorry.  They weren’t bad at all.  Just a little odd.  The statue of Yoda on his mellotron had nothing to do with it.

After that, Porcupine Tree hit the stage, and here was the set list:

The Incident (the entire thing)
<10 minute break> 
Start of Something Beautiful
Russia on Ice
Anesthetize, pt. 2
Lazarus
Way Out of Here
Normal
Bonnie the Cat

The Sound of Muzak
Trains

Overall, I liked this set list better than the previous night’s, but either way, the experience and the quality of the set were at PT’s usual high standard.

I think “The Incident” as a whole plays better live than on record, in large part because I don’t really see how the recorded version fits together as a whole.  The musicianship is obviously great throughout, but for the most part it seems to me more a loose collection of songs than something that hangs together.

SW doesn’t normally spend a lot of time conversing with the audience, but perhaps because there seemed to be an equipment issue during the set, he did have a funny exchange that came up throughout the show.  Before ‘Lazarus’ played, he mentioned that there seemed to be a lot of familiar faces in the crowd.  “I’ve seen you many times before, how many shows do you think you’ve been to?”  38.  “38, I should know your name, what’s your name?”  Steven.  “Steven.  I think I should be able to remember that.  And you, sir, what’s your name?”  Mike.  “Okay, Mike, how many shows have you seen?”  20.  Crowd boos.  “Sorry, Mike….Steven and Mike, I hope you guys understand, if you are coming to this show thinking you are going to hear something you have absolutely never heard before…you know you’re wasting your time, right?”  And he meant it humorously, of course.  Later on, right before ‘Normal’ was played, he stopped and asked, “Steven and Mike, can you guess which song is next?”  SW was switching from electric to acoustic, so I, along with dozens of others, yelled out “Normal!”  He responded, “I was asking Steven and Mike; can the rest of the class pipe down?”

This relates to the comment I made in the post about the previous concert, where I noted that PT tends not to improv beyond the recorded material, even though they (apparently) used to do this all the time.  One obvious reason is that for a large number of songs, they have a film that plays along with the song, and they absolutely *nail* keeping the music in sync with the film.  So, for instance, whenever they play “Way Out of Here,” they play the same video (taken from a real incident where a couple of teenage girls who happened to be PT fans killed themselves by stepping in front of a moving train) of the young goth chick with the iPod walking on the tracks.

I saw PT in Atlanta a few years ago with a friend of mine, in a place a lot like Bogart’s where the sound was muddier than what I was used to, and one of the things he mentioned was how, when they played “Sleep Together,” the drummer matched the freaky droid drumming guys in the video exactly.

SW did mention that the NYC show would have a number of surprises, and didn’t otherwise make it seem like it would be the last show for a long time, or anything like that.  I’m looking forward to it, even if he mentioned more than once that the show was in October when it is actually in September.  Or perhaps I have tickets to Radio City Music Hall for something entirely different…LOL.

Another good show.

posted @ Monday, May 03, 2010 7:50 PM | Feedback (0)