January 2010 Blog Posts
Toyota Production System in Action


There are people who have tied their practices in Lean Software to Toyota and how they do things.  Some seem to have set their sights on being a ‘Chief Engineer’ for reasons of ego.

It was always dumb to do this, and this simply proves it.

“We need to start today to build the learning organizations and learning cultures that will produce the Chief Engineers that our industry needs.”

Uh, no.

posted @ Friday, January 29, 2010 11:34 PM | Feedback (0)
Why don’t TV Sports Announcers Watch Their Own Broadcasts?

Okay, I understand why they wouldn’t actually watch TV while broadcasting the game, but it’s starting to get ridiculous.

I’ll focus on hockey and football, because that is what I watch mostly.

Watching tonight’s big Ovechkin/Crosby rematch…err, the Caps/Pens first game since their epic (Jim Rome shout-out for a member of the crowd) playoff battle, and Crosby scores on one of those weird goals where the goalie wanders out to play a puck and it takes an odd bounce to sit in front of the net for the opposing team to knock in.  This one was doubly weird because a) the Pens usually have goals scored against them like this because of Fleury’s tendency to act like he’s going to get a sandwich when playing the puck and b) Fedotenko missed the open net and it took an extra second or two for Crosby to poke it in.

This is all clearly viewable, but the announcer (PS, up in the booth) and the color commentator (BE, between the benches) both miss this fact, and debate whether the goal will go to Fedotenko or Malkin (who dumped it in) (and yes, I’m too lazy to type out the names of the announcers…I’ve been up since 5 AM).  They even remark on the fact that the ref took a while to point to the goal.  “He must have lost sight of it,” says PS.  Uh, no, ref-y don’t point-y to net-y until puck-y cross line-y.

Multiple replays are going on at this point (I mean, not all at once), which show clearly that the puck didn’t go over the line until Crosby scored, but they keep missing it.  The arena announcer comes on the PA and *announces* the goal to Crosby and they are still debating if Fedotenko got it.  Finally, BE sees the 5th (or whatever) replay, and notices that the reason why the ref took so long to indicate the goal is that it wasn’t scored until Crosby, well, scored the goal.  Brilliant.

Later on, the Caps go up 2-1 when Fehr shoots towards the net and Orpik ends up chipping it into his own goal.  Although the highlight reel goals are the ones that make it onto SportsCenter (official motto: “We haven’t started sucking since 1989”) because, well, they are highlight reel goals, the deflection goal, either off of your teammate or the opposition, is pretty typical.  Once again, the announcers miss this for half a minute until BE finally sees the 19th (or whatever) replay and notices that it went off of Orpik.

And this is one game.

Now, I understand why you wouldn’t want the announcers to just sit there and watch an HDTV broadcast of the game and announce off of that.  At least while the game was going on.  For hockey, you couldn’t fit a set between the benches anyway, and even with HDTV, you can’t see the entire ice and things going on behind the play, off wing, etc.  Fine.

But the broadcast company could and should have some guy sitting in front of a large HDTV, talking with the producer to clear up screw-ups like this.

digression: unless you can’t afford it or don’t have HD sources in your area, if you are a sports fan, you *have* to get HDTV.  I mean, not in a moral sense, but you have no idea what you are missing.  I got HD a few years ago during the NBA playoffs and there was something…mesmerizing about Charles Barkley’s bald-ass head during the studio show, especially since TNT’s HD broadcasts are among the best, along with CBS’ SEC football broadcasts.  You really can tell the difference in quality between HD broadcasts.

Football broadcasts, for various reasons, are even worse.

For one thing, a large group of football announcers are, well, getting up there in age.  They seem to have trouble recognizing basic things like, oh, noticing whether a play resulted in an actual touchdown or not.  Or which players were involved in a play.

Also, with all of the on-screen eye candy, like the generated first-down line, there are way too many cases of the announcer saying something like “And that will be a 3rd and two” even when the play ends up a yard past the line.  Yes, we know the generated first-down line is not official, but can you have someone in the booth watching this?

And there are *way* too many cases when the Umpire is signaling a penalty or something, and the announcers seem oblivious to it, to the point where the broadcast team simply fails to indicate what is happening on the field, even when it is clearly visible to any viewer.

Any and all of these things should be fixable.  As it stands, it’s rare for a weekend to go by without an incident like this.

posted @ Thursday, January 21, 2010 8:53 PM | Feedback (1)
MapReduce Patented

Though it isn’t quite the same thing, I’m thinking of trying to patent Cut and Paste, just to see if it gets awarded.

posted @ Wednesday, January 20, 2010 11:49 AM | Feedback (0)
The EchoChamber Software Craftsman Cooperative

A bunch of guys at ElegantCode have announced what they are calling The Software Craftsman Cooperative.

At a very high level, it is hard to see what could possibly be wrong with this:

“All members are dedicated to working collaboratively with clients, which means alternative ways of doing business.

The Software Craftsman Cooperative members maintain the following values:

  • Working together produces better results than working alone
  • Transparent and collaborative client relationships are healthier than fixed bid contracts
  • Delivering business value does not always mean delivering lines of code
  • Deliberate action is preferred to reactive heroics
  • Well crafted software produces more value than utilitarian execution”

I mean, you can’t really argue against working together.  Well, you could, but it would be silly.

I met Jarod and Jason at the Alt.Net Conference in Seattle in 2008 and, along with Sergio, to my recollection, we had a good, very productive time.  So, the fact that they are involved in this initiative is good.

But then you get to this:

“Some things about SCC membership:

  • Membership is by invitation only
  • Membership is subject to a vote of the standing members”

Gosh, I can’t see how this could degrade into highly political and/or personal issues.  Really, it will all be just one happy family, since everyone agrees on what ‘Software Craftsmanship’ means.

And the ultimate goal, really, seems to be this:

“The goal of the Cooperative is for all Members to contribute to each other’s mutual success in the delivery of excellent software, specifically:

  1. Provide the ability to secure larger contracts (most of the members will be individuals or small companies).
  2. Provide access to experts in a particular field, either on a paid contractual basis (see #3 below) or simply for advice.
  3. Shared marketing, specifically:
    1. Help to alleviate the "feast or famine" problem that is inherent in the consulting business by providing access to a larger pool of opportunities.
    2. Members can refer opportunities (either in totality or specific pieces thereof) to each other.
    3. Potentially a brand for the Cooperative could be created and marketed.
    4. An online community used to share knowledge of business opportunities between the Cooperative members.”

Unlike some people, I'm not against making money.  But an invitation-only group designed to financially better the members of said group…can’t imagine what could go wrong there.

I mean, for example, if someone who wanted to join the group believed the following:

- unless you are developing a framework (as opposed to an application, however you want to discriminate between the two), TDD is a harmful software practice.  You should be testing scenarios, not methods, and doing otherwise causes harm.  If you aren’t doing BDD, you suck.

I’m guessing you wouldn’t get voted in.  Even though that belief is, arguably, correct and true.

Instead, you’ll get the Least Common Denominator beliefs of what makes someone a ‘Software Craftsman’.  And add in a motive to make money….not a chance that will be a problem.  Nope.

All that said, I guess it’s better than having a Big Ball of Mud Cooperative.  Maybe.

Curious: when people apply for this thing, will the vote on applications be public?  Hmm…..

But, I’m sure it will all work out without controversy about who voted for or against whom.

posted @ Thursday, January 14, 2010 9:16 PM | Feedback (0)
CQRS Presentation, Chicago Alt.NET, 1/13/2010 Recap

I’ll update this post with the link to the screencast and the slide deck when Sergio posts them.

Update: here they are.

On a scale of 1 to 10, without thinking too much about it, I would give myself a 4 or a 5 on this one, for a few (somewhat related) reasons:

  • A fundamental flaw of the presentation is that I was trying to give a high-level summary overview of the things that Udi and Greg and Mark are talking about, without having the benefit of the breadth and depth of their knowledge and experience of actually implementing it in multiple situations (as I mention in the talk, I do hope to have a system in production this year that implements a fairly robust version of CQRS, but even if I’m successful, that’s still quite a bit short of having done it for years in highly scaled environments).
  • Mild digression: Bellware, in a post to the alt.net mailing list a while ago, made a point about something he called “Talking with a Full Mouth” which, to crudely paraphrase it, was that you shouldn’t try to ‘educate’ people about software practices unless you’ve practiced them for years, and in fact, should really just STFU about it.  As someone who was paid to educate people about philosophy and logic in a previous career, there is something to this.  There were times when I might sit in on other people’s classes, and there were times when it was painful because the educator didn’t have a firm grasp of the subject matter and I felt the class suffered from it.  I don’t feel too bad about the presentation in this regard, because I was very clear about what I was doing: giving a high-level summary overview of what CQRS was about.  I didn’t have all the answers, and I pointed people to the authoritative sources.  When I was teaching students about, say, Utilitarianism, there’s no way that I could claim to be as intelligent as Mill, but that didn’t mean I couldn’t explain to beginning philosophy students the difference between Act and Rule Utilitarianism.
  • I don’t have a firm grasp of the right way to do Event Sourcing, which I think is really how CQRS needs to be done (a minimal sketch of the basic mechanics follows this list).  There’s something about the code I’ve seen that implements it that strikes me as….sub-optimal, for lack of a better term, but I can’t quite put a finger on why it strikes me that way.  As I mentioned in the talk, I’m pretty convinced that this is because I don’t quite get it deeply enough.  I explicitly made this point, and so, like dogs sensing fear, a number of the questions focused on this area.
  • You can’t really explain any topic in-depth in an hour, much less CQRS.
  • Half of the purpose of the presentation was, frankly, to have people react to how I presented it, and ask questions that I might not be able to answer, not only to guide how I continue my blog posts about it, but, more selfishly, to guide me in terms of what I need to work on more deeply to increase the chance of success of my own implementation of it.  So, mission accomplished there.
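
Since Event Sourcing keeps coming up, here is a minimal sketch of the basic mechanics as I currently understand them, in TypeScript just for brevity (the Customer/AddressChanged names are mine, made up for illustration; this is not taken from Udi’s, Greg’s, or Mark’s code):

interface DomainEvent {
  readonly type: string;
  readonly occurredAt: Date;
}

class AddressChanged implements DomainEvent {
  readonly type = "AddressChanged";
  readonly occurredAt = new Date();
  constructor(readonly customerId: string, readonly newAddress: string) {}
}

// The aggregate never mutates state directly: it records an event and then
// applies it.  Current state is just a left-fold over past events.
class Customer {
  address = "";
  private uncommitted: DomainEvent[] = [];

  changeAddress(customerId: string, newAddress: string): void {
    const e = new AddressChanged(customerId, newAddress);
    this.apply(e);
    this.uncommitted.push(e);  // would be persisted to the Event Store
  }

  private apply(e: DomainEvent): void {
    if (e instanceof AddressChanged) this.address = e.newAddress;
  }

  // Rehydrate by replaying history -- this is the audit-log property:
  // the events *are* the record of how you got to the current state.
  static fromHistory(history: DomainEvent[]): Customer {
    const c = new Customer();
    for (const e of history) c.apply(e);
    return c;
  }
}

The only point of the sketch is the shape: state changes are recorded as events, current state is a fold over past events, and rehydrating from history is what gives you the audit-log property discussed below.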

Other than that, I thought it went well.

Anyway, I think one of the things that became much clearer from the talk is that the importance and attractiveness of CQRS are tied to a few crucial areas:

  • Having scalability built in: a couple of comments that people made to me during and/or after the talk were about how they wished they had had something like CQRS in previous projects to handle scalability.  Short of violating laws of logic and/or physics (some laws of physics appear to violate laws of logic, which is why they are probably false, but not even I can digress that point here), nothing is impossible, but trying to retrofit CQRS into an existing software system/application appears to be, well, really really hard.  So, if you’ve paid a price in the past for not having scalability you didn’t know you needed, CQRS will seem, at least on the surface, more attractive.
  • Having an audit log built in: the conceptual idea of having an exact record of everything that has happened in your system over time is something that has incredible appeal.  Almost any system of significance runs into the problem of trying to figure out the answer to the question, “How did the application get to this state?”  Whether you are a developer or an operations manager, a hell of a lot of time can end up being spent trying to determine this.  Even a system that doesn’t technically have bugs can end up in a state that the end users didn’t expect.  When constructed properly, a CQRS system at least offers the promise of being able to answer this question.  You can figure out how you got to the state you are in by examining the record of the events that got you there.  In theory, you can set up a UAT environment and replay those events, debug them closely, and see exactly how you got there.  In theory, if you construct the system properly, you can roll back the events that got you into the state you don’t want to one that you do want.  In theory, as you develop new code, you can replay events (and possibly even commands, more on this below) and see what the effects of the new code will be on events that have already occurred.  All of these things will appeal to people who have had to deal with this.
  • Eventual Consistency: do you understand the concept, and does it apply to your system?  Do you accept the notion that all data is stale?  At the highest level, I think this can be seen as a deal breaker.  It’s one thing to accept the idea that, say, Amazon has no choice but to deal with this, yet another to think that your own system, which really will never need to scale to Amazon levels, has to deal with it (a rough sketch of where the staleness comes from follows this list).
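
To make the staleness concrete, here is a rough sketch, again in TypeScript with made-up names, of the write-then-project flow as I understand it (my reading of the pattern, not anyone’s reference implementation):

type OrderPlaced = { type: "OrderPlaced"; orderId: string; total: number };

const eventStore: OrderPlaced[] = [];
const projections: Array<(e: OrderPlaced) => void> = [];

function publish(e: OrderPlaced): void {
  eventStore.push(e);                                   // durable audit log
  setTimeout(() => projections.forEach(p => p(e)), 0);  // async, out-of-band
}

// Read store: a denormalized view shaped for the screens that query it.
// It lags publish() by however long the async hop takes -- that lag *is*
// the eventual consistency.
const ordersView = new Map<string, { orderId: string; total: number }>();
projections.push(e =>
  ordersView.set(e.orderId, { orderId: e.orderId, total: e.total })
);

// Command side: validate and publish; it never touches ordersView.
function placeOrder(orderId: string, total: number): void {
  if (total <= 0) throw new Error("invalid order");
  publish({ type: "OrderPlaced", orderId, total });
}

A query that hits ordersView immediately after placeOrder() returns may not see the new order yet; whether that is acceptable is exactly the deal-breaker question above.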

My own list of questions, written down after the presentation, centers around the following:

  • If, as a Customer, I change my Address information, I expect to see that the information is updated when I press that Submit button.  But if my screen is populated from data from my eventually consistent data store, which is gathered from my Query layer, how do I ensure this?  In an eCommerce situation, Order Submission raises the same question.  I expect to see the Order ID (or whatever) on the Order Confirmation screen after I press that Submit button, especially since I’m often asked to print out the Order Confirmation screen.  When I was part of a team that handled sales for the eCommerce store for the NBA, we sold a hell of a lot of product in the seconds/minutes/hours after the Lakers clinched their 3rd title (I think that’s when it was).  Eventual consistency in terms of getting an Order Confirmation email is one thing.  As an end user, if it takes a minute or three to get that confirmation email, that’s expected.  But while placing the order, I expect that Order ID to be there ‘immediately.’  How do I ensure that this happens?  My Query Layer might query the Domain directly because my system issues a ReallyImportantOrderThatHasToBeReflectedImmediately query, but is this a well-thought-out architecture or a hack?  In other words, what’s the well-thought-out way of handling Eventual Consistency for those UI queries that just have to be consistent, especially in situations of high traffic?
  • Commands issued from the UI become Internal Events that are saved to an Event Store that produces External Events that are published.  How exactly should this work?
  • Should there also be a Command Store, as well as an Event Store?  I’ve not seen any discussion about this, but it seems like you might want to have one, if only so that you can replay commands against a future code base to see what the effects are.  You want the Event Store as your audit log, and because you want to replay events given the business rules of your Domain at the time they occurred, but why wouldn’t you also want to have the Commands themselves recorded so that you can replay them given new business rules?  Or can the Event Store inherently handle that?  (I sketch what I mean right after this list.)
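
For that last question, here is what I have in mind, purely speculative on my part since, as I said, I haven’t seen this discussed anywhere (TypeScript again, with made-up names):

type PlaceOrder = { type: "PlaceOrder"; orderId: string; total: number };
type OrderPlaced = { type: "OrderPlaced"; orderId: string; total: number };
type Rules = (c: PlaceOrder) => OrderPlaced | null;

const commandStore: PlaceOrder[] = [];
const eventLog: OrderPlaced[] = [];

// handle() embodies the business rules in force at the time it runs.
function handle(cmd: PlaceOrder, rules: Rules): void {
  commandStore.push(cmd);      // record the intent
  const e = rules(cmd);
  if (e) eventLog.push(e);     // record what the rules decided happened
}

// Later: replay the recorded commands against *new* rules and compare
// the resulting event stream to the old one, to see what the new code
// base would have done with the same intents.
function replayCommands(rules: Rules): OrderPlaced[] {
  const out: OrderPlaced[] = [];
  for (const c of commandStore) {
    const e = rules(c);
    if (e) out.push(e);
  }
  return out;
}

Events record what did happen under the rules of the day; storing the commands as well would let you re-run the original intents through a new code base and diff the two event streams.  Whether that’s useful or a terrible idea, I honestly don’t know yet.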

And I’m sure there are others.

In any event (pun intended), I hope that my presentation at least got some people to be interested in CQRS, and thinking about whether it might work for them.

posted @ Thursday, January 14, 2010 8:43 PM | Feedback (0)
When jQuery scrolling sucks

Rob Conery wrote a post about how traditional paging on web pages sucks.  And, generally speaking, he’s right.  As he puts it:

“we know that people rarely go to pages 2- whatever and that when they do, they rarely come back to page 1. We have 3 seconds to capture a user’s attention on a new visit – that’s it. If you make them page they will leave, and it’s amazing how many applications still use this dated way of showing information.”

Though he doesn’t specify it, he’s obviously talking about (among other things) Google.  An entire industry exists (called “SEO”, which stands for “Search Engine Optimization” or, as I like to say, “SEarch snake Oil”) to help you get your web site/page to appear on the first page of a Google search, since the amount of traffic/sales that you can get by being on the first page is, well, really ridiculous compared to what you get if you are on, say, page two.  Since I’m lazy, I won’t link to anything here, but, well, you can Google it and find pretty solid real-world studies about it.  It’s fascinating/depressing to think about if you are a business trying to get your site to appear on page one.

Anyway, as a replacement, Rob talks about the ‘bottomless scrolling’ option, one that you can see on Twitter.  You go to a person’s Twitter page, and at the bottom, there is a “more” link that does some nifty AJAX-type stuff, and the next series of Twitter entries appears.  And you keep clicking and get more entries.
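
The mechanism itself is trivial; something like this bare-bones TypeScript sketch (the /tweets endpoint and the element ids are made up for illustration):

let nextPage = 2;

async function loadMore(): Promise<void> {
  // Fetch the next chunk of entries from the (hypothetical) endpoint.
  const res = await fetch(`/tweets?page=${nextPage}`);
  const entries: string[] = await res.json();
  const list = document.getElementById("entries")!;
  for (const text of entries) {
    const li = document.createElement("li");
    li.textContent = text;
    list.appendChild(li);
  }
  nextPage++;  // the next click appends the following chunk
}

// Wire up the "More" link (again, hypothetical markup).
document.getElementById("more")?.addEventListener("click", () => {
  void loadMore();
});

Note that, functionally, each click is identical to following a “Page 2” link, which is kind of the point of what follows.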

This is all fine.  Except it also sucks, and in some cases, sucks more than traditional paging.

The first obvious point is that if the imagined end user isn’t going to go to Page 2, they aren’t going to want to click on the “More” link either.  There’s no end-user improvement to having the “More” link instead of having a “Page 2” link.

More importantly, in some cases, there’s an end-user degradation to having the “More” link.

Suppose someone sent me an email that said “Hey, Rob tweeted about you, you should check it out.”  And though it isn’t critical to the point, suppose it was more than a week ago.

Given the ‘bottomless scrolling’ option, I would have to keep hitting that “More” link over and over and over to get to the relevant point.  If I had a paging option, I could home in on the appropriate date range by entering, say, ?page=10, seeing the date range there, then trying, say, ?page=47, or whatever, and very quickly find what I was looking for.

So, when considering whether one should abandon paging, you should consider what the end-user scenarios are.  If the end-user scenario expects Page 1 to be the be-all and end-all of what they are looking for, then ‘bottomless scrolling’ might be the right thing to do (though you should also think about what, if anything, it actually gains you…I’m thinking nothing, usually).  If the filter criteria of the search are something that an end user might be able to control, paging is still the right thing to do.

posted @ Monday, January 11, 2010 10:22 PM | Feedback (2)
CQRS Presentation, Chicago Alt.NET, 1/13/2010

This should help clear up the upcoming blog posts.  I hope.  Registration here.

Update: video and slide deck here.

Marketing blurb:

Jdn presents "CQRS in roughly an hour or so"

6:00 pm
Pizza and networking time

6:30 pm

After his well-received presentation on Inversion of Control, Jdn is back to explain CQRS, a design principle that, when well understood and applied, can do great things for complex, multi-user applications.

CQRS (Command Query Responsibility Segregation) is a design/pattern/architecture where the responsibilities of processing commands and queries are segregated into distinct parts of the system. By designing systems that explicitly model state transitions and are built around notions of eventual consistency, you can gain scalability and business acumen, and fashion the parts of your systems more intelligently to the tasks at hand.

John Nuechterlein (a.k.a. Jdn) is an independent software consultant who has been in this industry for many years. He's a long-standing member and one of the founders of the Chicago ALT.NET user group.
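
Since the blurb stays at 30,000 feet, here is about the smallest possible sketch of the split itself, in TypeScript with hypothetical names (nothing here is from the actual presentation):

interface ChangeCustomerAddress { customerId: string; newAddress: string }

// Command side: returns nothing; the state transition is the point.
interface CustomerCommands {
  changeAddress(cmd: ChangeCustomerAddress): void;
}

// Query side: returns data; never causes a state transition.
interface CustomerQueries {
  getAddress(customerId: string): string | undefined;
}

// Nothing implements both interfaces -- that separation is the
// "responsibility segregation" the name refers to.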

posted @ Wednesday, January 06, 2010 7:56 PM | Feedback (0)