September 2010 Blog Posts
Porcupine Tree – Live at Radio City Music Hall – 09/24/2010

A couple of days before the show, I ‘discovered’ Porcupine Tree’s page, and so was prepared for a 3 hour show that featured one ‘half’ of a performance of The Incident, and one ‘half’ of them performing older material from the 1993-1999 period.

It didn’t turn out quite that way.  With two 10-minute breaks, they didn’t quite perform for 3 hours, they didn’t play The Incident in its entirety, and they didn’t just play other material from 1993-1999.  But, regardless, it was a pretty fantastic show (with only a minor quibble).  Here’s a mini-review, including the set list.  All quoted comments from SW are from memory, so they’re semi-accurate paraphrases.

To the owners of Radio City Music Hall: have you ever sold merchandise before?  I would have thought so.  Did you notice the near-crushing of people before the show due to the chaotic way you were running it? You might want to make note and have a meeting.

Porcupine Tree ‘Unplugged’

When the curtain first went up, there was a stand-up bass for Colin, SW with an acoustic guitar, a small drum kit for Gavin, a small Hammond-like organ for Richard, and then a regular (though with a smaller stack) guitar setup for John.  I wouldn’t say a groan went up from the crowd, but it was weird.  SW told the crowd “Up until around 7PM last night, we had no idea what we were going to do for this part.  But rest assured, for the whole show, we are going to be playing a lot of ‘old shit’” (and, of course, he did the air-quotes thing with his fingers when he said ‘old shit’).  Here are the songs from this segment:

  • Stranger by the Minute
  • Small Fish
  • Pure Narcotic
  • Black Dahlia
  • Futile

After the first song, I did hear one random “Go electric!”, but overall the set was well-played and well received.  I did have a moment of horror when, before the last song, SW said “We were trying to think of what was the most inappropriate song to play in this format, and we agreed it’s definitely this one” and for a brief second, I thought they were going to play “Anesthetize” (as a reference point, I think Eric Clapton’s acoustic version of “Layla” is the greatest abomination in the history of rock music), but, luckily that wasn’t the case.

This led to the first 10-minute break.

Next song set

Here’s the song list:

  • Even Less, full version
  • Open Car
  • Lazarus
  • Tinto Brass
  • The Sky Moves Sideways, phase one
  • I Drive the Hearse
  • Bonnie the Cat

I thought by far the best song/performance in this set was ‘Even Less.’  I recently got the re-issue of Recordings, so I’d heard it before, but it was absolutely phenomenal performed live.  At some point, SW did reference the fact that John left the stage for periods of time (as there were no parts for him in the older material): “He isn’t leaving because he’s incontinent.”  He also talked about how there’s a tradition at venues for acts to sign a ‘guestbook’ of sorts when they play, and that this time, instead of paging back in the book and seeing “Ultraviolet”, it included “Conan O’Brien” and “Oprah Winfrey.”

Before playing “I Drive the Hearse”, he mentioned that the original plan had been to play The Incident in its entirety, but that for time reasons, they changed plans.  I have no idea why the ‘3-hour rule’ was strictly in effect.  Did they need to prep the Hall for a Knicks game? (joke)  It did reinforce for me what I’ve felt since its release, that there is something about the 55-minute ‘song cycle’ that doesn’t hang together for me.  I can’t explain why exactly.  There are various ‘concept albums’ that fit together due to recurring musical passages or lyrical phrases (in better and worse ways).  The best contrast I can think of (and it isn’t a great one) is Pink Floyd’s The Final Cut, Roger Waters’ swan song with the band, which centers around the Falklands War and a marital breakup (or something, proving definitively how weird he is).  As odd a recording as that is, there’s something about how “Not Now John” flows into “Two Suns in the Sunset” that seems to tie the whole thing together.  When “I Drive the Hearse” comes at the end of The Incident, I just think “Great, here’s my favorite track.”

One other note: I just noticed on another site that had the total set list (since I needed to verify some of the older tracks) that there’s some disagreement about which “The Sky Moves Sideways” piece was played.  If you go onto YouTube (as I did as part of my verification), you can find “The Sky Moves Sideways, parts one and two” as recorded in San Fran in 1999 (IIRC), which is not exactly what ‘Phase One’ is as recorded.  There’s a couple of minutes’ difference.

And yes, that is a total nerd ‘other note’.

At this point, there was another 10-minute break.

Last song set

Here’s the song list:

  • Occam’s Razor
  • The Blind House
  • Great Expectations
  • Kneel and Disconnect
  • Drawing the Line
  • Dislocated Day
  • Time Flies
  • Anesthetize, part 2
  • Up the Downstair
  • Sleep Together

“Time Flies” is fantastic as a live song, as is “Sleep Together” (and “Drawing the Line” is one of those songs that I enjoy more live than on record).  I *hate* that they don’t play the full version of “Anesthetize.”  I can’t think of a similar contrast; maybe if Led Zeppelin played a ‘short version’ of “Stairway to Heaven” or something.  Other than that, it was a good set.

Encore

SW announced “we only have time for one more song” (again with the weird 3-hour rule….if they had tried to go 4 hours, would the Teamsters go on strike or something?), “but it’s a fucking long one” and they launched into my favorite PT song “Arriving Somewhere But Not Here.” 

And that was that.  Except for what they did to “Anesthetize”, I can’t really complain: it was a very enjoyable show, and, for me, the live rendition of the full version of “Even Less” was worth it.

I imagine that between now and then, the London Special Event will be a bit different.  For whatever reason, PT apparently didn’t record this concert (the guy in the seat next to me did, asking me not to clap too loud) and won’t record the London show.  I’m sure there’s an engineer’s tape out there somewhere.

posted @ Monday, September 27, 2010 7:10 PM | Feedback (0)
College Football Question: Why don’t DBs commit pass interference more often?

As everyone knows, in the NFL, a pass interference penalty results in the ball being placed at the spot of the foul.  It could be 10 yards, it could be 47 yards.

In college football, it is only a 15-yard penalty from the line of scrimmage.

Given this, why don’t defensive coaches coach their defensive backs to commit this foul any and every time a receiver is obviously going to beat them on a route?  Against the possibility of a touchdown or long gain on a pass, why not just tackle the receiver and take the 15 yards?

posted @ Thursday, September 16, 2010 9:49 PM | Feedback (1)
If They Can Make It There (Porcupine Tree at Radio City Music Hall, 9/24/2010)

Back in February (I think), Porcupine Tree announced that they were going to have two special concerts, one in NYC, and one in London.

It was never really explained why they were special concerts.  I’m unclear at the moment whether they are special at all, as Porcupine Tree is touring ‘normally’ for The Incident.

In any event, I decided to turn the concert event into an excuse to take a full week vacation to NYC, which starts this weekend.

Before he retired, my father lived in a building that had one (or more) guest rooms that allowed my sisters and me to visit NYC, essentially, for free.  Since he retired, I haven’t been back.  Once I got past the ‘holy crap’ realization of what it costs to get a hotel for a week, I decided to go ahead and do it.  What the hell.  I can afford and manage it.

Besides the concert, my other main enjoyment will be to visit various restaurants of chefs I admire.  No, I don’t expect any of them to be there, since famous chefs don’t actually cook that much, but I’ll be hitting spots owned by: Bobby Flay, Mario Batali, Laurent Tourondel, Daniel Boulud, and Morimoto.  I do expect that by the end of the vacation, I will weigh something close to a metric ton.  My only hope is that my usual walking tour of the entire island south of Central Park will burn a few calories.

posted @ Thursday, September 16, 2010 9:38 PM | Feedback (0)
More Blog Moderation Blah Blah

In a previous post, I linked to something going on at LosTechies.

Derick Bailey posted an update to his original post wherein he explained that he had deleted various comments because he felt they were totally out of context.

Needless to say, if you read the comments to that update, he stepped into it, so to speak.

I’ve posted before about the entire topic of moderation, but here I tend to use really basic moderation tactics.  If, to make up an example, I post about a Porcupine Tree concert (and there will be yet another one shortly), and a commentator decides to try to start a debate about the 2nd Amendment to the U.S. Constitution (not the ship, if there is one, the freakin’ constitution), I’ll delete it.  If it’s spam, I’ll delete it.  If it says something negative about my mother, I’ll delete it.  Anything racist is right out.

Other than that, I’ll probably allow it (the ‘probably’ modifier gives me leeway to be capricious, though I’ve never had to be on this blog).  Some people don’t like anonymous comments, but because of my USENET usage heritage, I have no problem with them.  Some people don’t like personal attacks, but my experience has been that one person’s personal attack was another person’s accurate analysis (and I think one of the best comments of the very few my blog generates was arguably an attack by someone I happen to know personally, but it was both accurate and funny….funny will almost always trump anything in my book, even/especially if it is at my expense).  I’ve been called “.NET’s greatest troll” both literally and in spirit, often times simply because I tend not to always accept ‘Alt.NET Orthodoxy’ (there is no such thing, but you know what I mean), and some people sometimes apparently think I don’t actually believe it when I don’t accept it.  And that’s okay. 

When it comes to blog comment moderation, the very safest thing a blogger can do is to go the John Gruber route: don’t allow comments at all (Scott Bellware does this on his current ‘blog’; apparently he doesn’t consider it a blog, per se).  This is perfectly acceptable in my mind.  I don’t know if I buy Gruber’s reasoning 100% (that will surprise anyone who knows me not at all), but it makes sense to me.

Once, however, you decide to allow comments, even/especially when they are moderated, you just open yourself up to a lot of crap when you start to delete/block them for reasons other than the reasons I give.  It gives the appearance that you are blocking things simply because you disagree with them.  Which is perfectly legal, but seems really unsporting.  Especially if you are someone who posts something highly opinionated, you should expect disagreement.  It comes with the territory.  If you don’t want people to strongly disagree with you, don’t post anything highly opinionated. 

Rob Conery automatically blocks every comment I might make to his blog (even when I agree with him), but at least he was open about it.  Phil Haack’s blog flags every comment I make (even a request for help with SubText) for moderation, and then it disappears forever.  I can’t for the life of me figure out why.  That’s their right.  It isn’t (in my mind) strictly censorship, since they aren’t government entities.  I think it says something about them, but I don’t think they care what I think about it, which I understand personally.

Anyway, there you go.  Once you accept comments, if you start to block things because you disagree with them, you’re just causing yourself extra grief.  If it isn’t racist, etc., allow it, and then, as the blogger, if you think it is important, point out that you think the commenter is a douchebag.  If you can manage it though, don’t be a wuss about it.

HTH.  YMMV.  HAND.

posted @ Thursday, September 16, 2010 9:11 PM | Feedback (2)
T4MVC methods can only be passed pseudo-action calls (e.g. MVC.Home.About()), and not real action calls.

I’ve run into this error before when using T4MVC, and I posted about it on the Alt.NET Yahoo group at the time.  I don’t recall if I ever explained the cause, and since I momentarily forgot the resolution, I’m posting it here again, if only so that it will be lodged in my brain.

When using T4MVC, a T4MVC.tt file is added to your Visual Studio project/solution that generates related .cs files to enable T4MVC functionality.  If you change a controller to add a method that returns an ActionResult, and add code in the .ascx file to use that method, BUT you forget to right-click on the .tt file and select “Run Custom Tool” to regenerate the related .cs files, you will get this message.
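
To make the failure mode concrete, here’s a minimal sketch of the situation (the controller, the action, and the view markup are made up for illustration, and the note about the generated subclass is just my understanding of how T4MVC behaves, so don’t treat it as gospel):

    // ProductsController.cs -- a new action added after the last time the
    // T4MVC.tt custom tool was run.  (Hypothetical names, not a real project.)
    using System.Web.Mvc;

    public partial class ProductsController : Controller
    {
        public virtual ActionResult Details(int id)
        {
            return View();
        }
    }

    // In the .ascx view, the T4MVC-style helper call:
    //   <%= Html.ActionLink("Details", MVC.Products.Details(42)) %>
    //
    // Until you right-click T4MVC.tt and run the custom tool, the generated
    // T4MVC subclass of ProductsController has no override for Details, so
    // MVC.Products.Details(42) runs the real action and hands the helper a
    // plain ViewResult instead of the T4MVC stub result, which is what
    // triggers the error in the title at runtime.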

Obviously, this isn’t an in-depth technical analysis of what T4MVC is doing behind the scenes, so I can’t promise that there aren’t other causes for this error message, but in my practice/experience, this is the main cause.

posted @ Thursday, September 16, 2010 8:35 PM | Feedback (3)
Initial Impressions of UberProf, part 3, or “Stumping Ayende”

In previous posts, I talked about my initial impressions of both the EF Profiler and the Linq to SQL Profiler.  I’m going to focus here on the latter.  From the post title, you can guess correctly that there’s some bad here, but there’s also some good here, so I’m going to start with that.

“This thing will pay for itself in an hour”

This is one of the lines in an email I sent to Ayende, focusing on the promise of the profiler and how I see the clear benefits of it.

Here’s a standard output (I’m using a test project, reasons for which I’ll get to later, if I remember to):

[Screenshot 1: the Linq to SQL Profiler’s standard output for the test project]

The Data contexts tab on the upper left shows each Data Context executed and when you select one, you get all of the queries it executed (again, in my simple test, there is only one query, but go with the idea).  Where it gets even better is when you click on the Analysis tab and select accordingly:

[Screenshot 2: the Analysis tab, with queries grouped by method]

As you can see, all of your calls are grouped by method (again, pretend there are more queries per method), which is fantastic.

Where I believe the Profiler is worth the cost is probably best explained by contrasting how I might accomplish the same tasks without the tool.  Since at this point I’m purely a SQL Server guy, I’ll talk about that case.

If I wanted to set a baseline for the SQL calls that a web app makes, I would naturally use the SQL Server Profiler that ships with SQL Server.  I would create a trace in that tool, start it profiling, and then hit a page I wanted to baseline.  I would then stop the profiler and export the trace to SQL tables (you can do this automatically, but I prefer to export it for no other reason than that is what I’m used to).  You can spend some time setting up the trace so that it only logs the calls your app makes, or, as I normally do, just do some filtering and grouping to get the same statistics that the Linq to SQL Profiler gives you.  So, from a pure data perspective, you get the same data.

What is really efficient about the promise of the Profiler is that you get that data very quickly and very easily, and more importantly, your SQL calls are tied back to the methods that generate them.  This allows you to very easily see which methods are most inefficient.  You can tie the individual queries back to your SQL instance if you care to (though I haven’t done that yet), but having this centralized view of your application and how it interacts with your data store is incredibly useful.  A few iterations of seeing the statistics and diving down into the methods would easily make up for the time it takes to use SQL Server Profiler alone (which won’t tie calls back to your methods anyway).

So all of that is good.  And where it really rocks for me is if you imagine the following workflow:

1) Identify a page you want to optimize.

2) Run the profiler on that page and get all of the statistics in all their glory.

3) Identify your most expensive/problematic areas, and optimize your code.

4) Rerun the profiler, see how it has improved.  Rinse and repeat.

Again, you could do most of this with SQL Server Profiler, but UberProf makes it really easy to do.  In theory.
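
(For reference, attaching the profiler to a web app in the first place is just a one-liner at startup.  This is a minimal sketch from memory; I’m assuming the L2S Prof appender exposes an Initialize() call the way the other UberProf appenders do, so treat the exact namespace and class name as approximate.)

    // Global.asax.cs -- hook the Linq to SQL Profiler appender in at startup.
    // Namespace and class name are from memory and may differ between builds.
    using System;
    using HibernatingRhinos.Profiler.Appender.LinqToSql;

    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // Start streaming Linq to SQL activity to the UberProf UI.
            LinqToSqlProfiler.Initialize();
        }
    }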

The real app that I am using UberProf with is a perfect test of its capabilities.  The app was built (with some code probably two years old) without any eye to optimization at all, and so it is a performance dog.  My goal was/is to show, within an hour, how easy it is to improve an application’s performance (and not because I told Ayende I was going to do it, just because I think the promise of the tool is that compelling).

Now, let’s talk about the not quite so good part.

“If I can ever get it to work for more than a minute at a time”

At some point during the Skype session with Ayende where he was trying to troubleshoot the problem I was having, I said something like “Dude, I told you I wasn’t making this up.”

Here’s a simulation of the problem:

[Screenshot 3: the profiler stuck mid-session while still attached to L2SProfTestSite]

Notice how, in the top middle of the app, it shows the L2SProfTestSite as still being profiled.  Note that the statistics have not cleared in the bottom left and that no data is showing in the Data contexts tab.

To the consternation of Ayende and myself, we can’t determine why the Profiler stops profiling.  Doing an iisreset should do the trick.  Recycling the Application Pool (since this is a web app) should do the trick.  But neither does.

As I mentioned, the above pic is a simulation of the problem.  To the double consternation of Ayende and myself, we can’t recreate the issue outside of the particular web app I bought UberProf to fix.

After some back and forth on the Google support group, we set up a Skype conversation and tried to troubleshoot the problem.  I shared my desktop and recreated the issue immediately, to which Ayende responded (I’m paraphrasing, of course, since we didn’t record it) “It should not be doing that.”  Over the course of a little more than an hour, he changed internal L2SProf code and sent me various debug builds, and try as we might, we could not come up with any explainable reason for why it behaved the way that it did.  We looked at how I was creating Data Contexts, how the App Pools were defined, and other things.  Ayende shared his desktop and showed that he couldn’t recreate the issue.  And in my test app, I can’t recreate it either, but there’s nothing in the web app in question that explains the behavior (which I’ve recreated on three separate machines to eliminate machine-specific issues).
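
For what it’s worth, the Data Context usage we reviewed is nothing exotic.  Here’s a minimal sketch of the general pattern (the entity, table, and repository names are stand-ins for illustration, not the real app’s types):

    // A stand-in example of short-lived, per-call DataContext usage, the sort
    // of thing we walked through on the call; names here are hypothetical.
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    [Table(Name = "Products")]
    public class Product
    {
        [Column(IsPrimaryKey = true)] public int ProductID;
        [Column] public string ProductName;
    }

    public class ProductRepository
    {
        private readonly string connectionString;

        public ProductRepository(string connectionString)
        {
            this.connectionString = connectionString;
        }

        public Product GetProduct(int id)
        {
            // One DataContext per call, disposed promptly, so the profiler can
            // tie the generated SELECT back to this method.
            using (var context = new DataContext(connectionString))
            {
                return context.GetTable<Product>()
                              .SingleOrDefault(p => p.ProductID == id);
            }
        }
    }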

The only way I can get L2SProf to benchmark separate pages is to get stats for the first page, then add a space somewhere in the solution and rebuild and recycle.  At that point, I can get stats for the second page I’m benchmarking.  Rinse and repeat for the third page.

Because of the time difference, after an hour or so, we concluded the Skype session, and he asked me to try and recreate the issue in a test app, which I’ve been unable to do.

I have not yet done a side-by-side comparison of the statistics that Linq to SQL Profiler generates versus SQL Server Profiler, but that’s actually the least of the things I worry about.  It meets the eyeball test when I’ve run quick comparisons; I’ll be more exacting in later tests, but I’m fairly confident about that part (which in the end is the important part).

What’s maddening is the high friction in being able to use the tool in what I would consider to be a normal use case.  I have the workaround, but I shouldn’t have to add a space to the code base and rebuild.  Ayende agreed, and was able to share his desktop and show that he couldn’t reproduce the same behavior.  The fact that I have been able to reproduce the same behavior, at will, on three separate machines tells me there is something basic being missed.

That Ayende was willing to Skype in and troubleshoot is a good thing (try doing that without a support contract with other software vendors), but that he couldn’t fix the issue isn’t so good.  As a paying customer, I want the thing to work in what I believe we both agreed was a ‘not crazy’ use case.  Combined with the constant errors (mentioned in previous posts, I think) that are generated when upgrading to a new build, it gives off the impression that the tool is amateurish, which I don’t think it is.

So, it’s frustrating all around.

Since I have the ugly workaround, I will now be able to test the promise of the tool on the application I bought it for.  That will be the topic of my next post.

posted @ Tuesday, September 14, 2010 8:37 PM | Feedback (0)
Accell UltraAV DisplayPort to VGA Adapter–An Endorsement

I bought a Dell Dimension 8100 XPS to replace my existing desktop when the power supply blew up.  I chose the 8100 XPS because it got better reviews than a comparable HP, I’m done with PowerSpec units dying, and it was what was in the store at the time.  It’s an 8 GB (expandable to 16 GB) RAM, 8-core machine, and I’ve been pleasantly surprised.  The last Dell I used was some crappy Latitude laptop from back in the day, and I was happy at how easy it was to wipe it and load from scratch.

Anyway, it contains an ATI Radeon something or other (5750, I think) which allows you to hook up 3 displays at once, something I’ve always wanted to upgrade to (from the typical 2-monitor setup), but each time I tried to add the 3rd display, it wouldn’t get recognized, or it would disconnect one of the other ones.  After some research, it turned out that one of the monitors had to be DisplayPort-capable (something to do with clock syncing, or something I was too lazy to learn).  Okay.  What the hell is a DisplayPort again?  Google…..

Great.  I don’t have one of those.  What to do?

Enter Accell.  I can’t tell for sure that it is exactly the same quality as a straight HDMI connection, but hell if I can tell the difference.  The adapter allows you to (surprise, given the name) plug a VGA cable into it and then plug it into the DisplayPort on the video card.  It’s cheap, and it works.

If you find yourself with a similar requirement, it performs as advertised.

posted @ Tuesday, September 14, 2010 7:53 PM | Feedback (0)
Unintentional Comedy

Over at LosTechies, Derick Bailey has a post about testability and design.  It’s an interesting read.

However, for real fun, wade through the comments.  Outstanding stuff.

posted @ Monday, September 13, 2010 10:13 AM | Feedback (5)
Important Lessons in Software Development

Nate Kohari of AgileZen fame (disclaimer: I am a paying subscriber of AgileZen, I think it’s good, not sure I’d call it great, though I use it more lightly than intended, I think) has a post where he talks about the experience of starting up the company (which has since been acquired by Rally Software).  It’s a good read overall, but I particularly agree with two points he makes:

“I built AgileZen unlike any other piece of software I’d built. I cut corners that I would never have thought of cutting if I was working a day job. And you know what? It didn’t matter. We launched on schedule, we crushed our user and revenue goals, and our customers raved about the usability of the product. No one gave a shit about what our test coverage numbers were. Our software made their job or life easier, and that’s why they bought it…Now, I’m not saying every product should be built this way, but it certainly shifted my perspective on what’s vital and what’s not when building software.”

“After we launched, I spent the month of October basically rewriting every line of JavaScript in the app to create a more maintainable structure. Was it wasteful to throw so much code away? Maybe, but we were up and running, and we were making money. Before we could afford to spend a lot of time to make the software maintainable, we first had to prove that we would actually have to maintain it — we had to make sure that we actually had a market. Your app can have the best architecture and be 100% defect-free, but if no one cares what it does, it’s wasted effort.”

Though it’s been forever, I do remember being part of a startup, and the constant “we have to launch on time, we have to launch on time” of it all (we launched two weeks late, and I missed it because after a 36-hour straight work shift, I was dead).  We thought (rightly) that we were in competition with some other groups and we had to get launched and start making money and all that (at one point months or years later, our then CEO’s constant mantra was “we are the clear market leader”, which I think we were for a bit.  Till the funding ran out.  Our main competition still exists, and generates two billion dollars a year in revenue, I think.  Surprisingly, I’m not the bitter one amongst the ex-colleagues.  But I digress).

I have my own company, which could be considered startup-y, but I think some lessons can be learned here that apply to most software development. 

What value does your software provide?

It could be to external customers or internal users, but in most instances, you can tie the reason for the existence of your software to someone it provides value to.  And that’s the most important thing.  Maintainability, separation of concerns, testability, yada yada yada, all that matters.  Especially to developers.  I like my SpecFlow features very much.  But they don’t matter if they interfere with the value of your software.  This is why, when I think it interferes with that value (I want to say “value proposition” but then I’d have to shoot myself), I don’t write a feature.  In other words, I make a judgment call.  What is the estimated cost of not having a feature covering some functionality?  What am I giving up?  I’m giving up ‘code coverage’ (though not in the strict sense).  But some features are made to be used to help drive development, and then *thrown away*.  At times, it is still better to create the throwaway feature, but not always.

Some people of the Software Kraftsmen BrownShirt Squad (trademark pending) ilk like to argue something like the following: when you deviate from the strict maintainability path, you are taking on technical debt, and it will end up costing you more in the end.  There’s a reason why they like to argue something like that: a helluva lot of the time, that argument is absolutely compelling.  You will end up blocked from providing value down the road.  And we’ve all been there, facing the dread of changing some unmaintainable ball of mud where you fix/upgrade one thing, and break one or three or twelve other things.  And that sucks.

But it won’t always block value.  Where the SKBSS argument goes awry is in completely ignoring the fundamental truth of YAGNI, and creating a local optimization that prioritizes developer ‘friction-free’ warm fuzzies over value.  It isn’t just me; my experience is that there’s a good bit of software out there that gets written, and then it is done.  A significant amount of the work that I’ve done involves ETL projects, so it’s probably a bit more natural to find this in that area.  In an ETL project, there are times when you need to rigorously test out code, but most of the time, what really matters is that the data (usually large amounts of it) ends up where it is supposed to, when it is supposed to, and for that, you run the code that does the ETL.  Not test code that tests out certain pieces under mocked-out conditions, the actual code.

This extends beyond ETL projects though.  I don’t know that you can put a hard and fast rule on it, but some pieces of software are designed to be written, and then left alone.  What ultimately matters is that it works and provides value, even if the code behind it is objectively poorly maintainable (and though I have argued about the meaning of ‘maintainability’ in the past, I’ll accept the meaning that most people mean by it for the point of discussion).

*Even if* future value will be blocked, it is sometimes the right decision to provide the value that you need right now *right now*, knowing full well you will pay a price down the road.  To go along with the analogy, debt is sometimes inherently useful, and is used by businesses all the time.  It is (or should be) a business decision whether to take it on or not.

Another lesson

See previous discussion.

Any number of people who have been consultants can relate to this (I think): I’ve recently been handed one of those ‘gifts that keep on giving’ in software development, a Very Important ™ project where the original lead developer and lead project manager left the company the day after it went into production.  Fantastic.  I will let you guess whether it worked the day after it went into production.

From a maintainability standpoint, the thing is a significant improvement over what it is replacing.  Grading it very strictly, on a scale of 1 to 10, I would give it about a 5.  Not spectacular, by any means.  There are core components of the thing in which I’ve been able to fix significant issues or add decent functionality in the space of an hour.  There are other parts of it that, frankly, I’m not touching unless they order me to (also known as the prioritization process).  Given that the previous software, graded very strictly on a scale of 1 to 10, was about a –3 (I’ll leave you with two words, “Control-M” and “VBScript”), it’s a definite improvement.

But the end users don’t care about that, they care that it hasn’t been providing all of the value that they thought they were getting (the reasons for this are worthy of another post).  Because I’ve been able to make fixes and add some value, the maintainability of the code base has been a plus.  But they, and I, would have preferred it provided the value originally expected, even if it was a maintenance dog.

Summary

Becoming a better developer involves learning all of those yada yada yada things I mentioned before.  If you learn them, you will be a better developer.  But being an even better developer involves learning when the yada yada yada things can and should be put aside.  This is why manifestos are bad, and why you should strive to be a craftsman without becoming a Kraftsman.

posted @ Thursday, September 09, 2010 9:35 PM | Feedback (0)
True Curmudgeon Tests–Sports Variety

If you are one, you know it, but here’s an easy test.

Remember a few years ago in the NCAA when George Mason made it to the Final Four?  If your natural reaction to their loss was joy, mixed with slight regret that it wasn’t by 50, that’s a sign (note, this is what made this past year’s tourney final game so tough.  All the “Butler Cinderella playing at home” slop was nauseating, but they were playing Duke, and it is a God-given right for every man, woman and child to hate Duke and Mike “I’m a leader of men” Krzyzewski if they so choose).

Tonight gave us another test.  If your natural, visceral reaction to the intro to the Saints-Vikings game was hoping they got drilled by at least 30, that’s a sign.  It’s completely unnatural to hate the Saints, but I think I hate the Saints.

posted @ Thursday, September 09, 2010 7:47 PM | Feedback (1)
The start of a new college football season

Signals the beginning of Fall in America, and the annual renewal of some fantastic traditions:

Brent Musburger will promise that any contest between two teams in the Top 10 will be “a dandy.”

Kirk Herbstreit will claim to have intimate knowledge of the inner psyche of the QB based on one or two plays.  “You can see here that the QB runs up field, even though he has players open, because he’s afraid of how the defensive line is coming after him” (replays will clearly show that every receiver is covered).

Mike Patrick, ignoring the umpires, the stadium, and his own telecast, will miss an easy call.  E.g., 4th and 1 with 45 seconds left, Patrick debates whether Navy should use a timeout.  Maryland calls a timeout, the umpire signals that Maryland has called a timeout, the stadium announcer can clearly be heard announcing “Maryland calls a timeout”, the ESPN bug shows that Maryland is being charged for a timeout, and so, of course, Patrick announces “And Navy finally uses their last timeout.”

Anytime one team has more than 3 plays go in their favor, the announcers will talk about the obvious shift in “momentum.”  Never mind if the 4th play results in a touchdown for the other team.

Just about every broadcast team, on every station, for every game, will invent a theme for one team or another in a matchup, and will run it into the ground, regardless of whatever action occurs in the actual game.

I love football.

posted @ Monday, September 06, 2010 8:44 PM | Feedback (2)
Memristors are coming

They are coming.

posted @ Wednesday, September 01, 2010 2:45 PM | Feedback (1)