Scrum Is Not About Tools

(I originally posted this on my MSDN blog.)

A frequent question on the internal email groups around here is, “My team is going to start using Scrum and we need to find a good Scrum tool to use for tracking everything.  What tool should we use?”  Usually these questions come from teams who have a tradition of metrics-heavy, documentation-heavy process with lots of artifacts.  The assumption behind the question is that Scrum is also a metrics-heavy, documentation-heavy process, just with different artifacts.

My answer is that Scrum is a philosophical and cultural shift.  Scrum is highly ceremonial in some respects, but the ceremony is all oriented around face-to-face, real-time communication, not artifacts.  If your team is geographically distributed then yes, you’ll probably need some artifacts to compensate for your lack of face-to-face communication, but remember that Scrum is not about artifacts.  Unfortunately, most software-based Scrum tools are about artifacts to a greater or lesser extent, which makes them less useful than you might think at first.

Low-tech Scrum tools like sticky notes and whiteboards are not glamorous and of course it seems somewhat heretical that software developers might eschew software tools in favor of old-fashioned paper, but there are lots of advantages to be had.  You can make pieces of paper behave precisely the way you want them to with very little effort, and physical objects offer visual and tactile feedback that’s hard to beat.

I usually track my product backlog electronically in a spreadsheet because it can get large and I want to do a lot of sorting and filtering on it.  I prefer to track my sprint backlog on a whiteboard, though: its size is fairly bounded, I don’t do a lot of arbitrary manipulation on it, and half the value of the sprint backlog is in seeing physical objects up on a wall and moving them from place to place.  It gives you a sense of flow and progress like nothing else can.

However, if your team isn’t ready to make the cultural shift to low-tech Scrum all at once (it takes time!), or if you have a distributed team, then yeah, there are some tools out there that can work decently well.  But beware: any tool you adopt will come with some built-in assumptions and ways of doing things.  When you adopt the tool, you adopt the entire philosophical mindset behind the tool (or you spend a lot of time wrestling with the tool for control of your philosophy).  If your needs and views are compatible with theirs then all’s well, but if you differ then it’s going to be painful.  There are several nice things about sticky notes and whiteboards but one of the biggest advantages is that you can customize your process to your precise needs.  Agile processes should exist to serve the team, not the other way around.

One excellent description of a low-tech approach to Scrum can be found in a free eBook titled Scrum And XP From The Trenches.  It’s a couple of years old now (which at the rate agile thinking evolves these days is positively ancient) but it’s still an excellent guide and source of ideas.  And because it’s low-tech it’s easy to tune it to your needs.

Misadventures in Legacy Code

(I originally posted this on my MSDN blog.)

Last week I gave a talk titled “Misadventures in Legacy Code” to the South Sound .Net User’s Group, which is the oldest .Net user’s group in the world, having been started during the days of the 1.0 beta.  I’ve given presentations to groups at work before but this was my first experience with just walking in cold and speaking to a group of strangers.  I had some concerns about how it might turn out but the SSDotNet group was a great audience and the feedback I got was very positive.

Some folks asked for a copy of the slide deck so I’m posting it here as an attachment.  I’ve edited it a bit to remove the code examples from some of my projects since they wouldn’t make any sense without a whole lot of context.  I talked the group through it at the presentation, but anyone reading the deck doesn’t have that luxury.

It was a very enjoyable experience and I’m interested to do more of it in the future.

The Number of Classes Is Not A Good Measure Of Complexity

(I originally posted this on my MSDN blog.)

Must . . . Resist . . . New Class . . .

For some reason, most developers (me included) have this idea that the number of classes in your code base strongly indicates the complexity of your design.  Or to say it another way, we tend to freak out when we see the number of classes in a project climb upwards and we tend to intuitively prefer software designs that minimize the number of classes.

I often find myself ignoring clear evidence that I need to break out some responsibility into its own class because I have a fear that clicking on Add | New Item in Visual Studio is going to make my code much more complicated, so I search for ways to jam the responsibility into some existing class instead.

This impulse is usually dead wrong.

I had an experience a few weeks back where my unit tests were getting to be pretty messy and painful to deal with.  I knew this was probably evidence that my design needed some serious improvement but I ignored it.  It got worse.  Finally I said, “Ok, fine, I’m going to refactor the code until my unit tests quit sucking.”  I started breaking up classes that had too many responsibilities, creating additional interfaces and concrete classes, and generally creating an explosion of new files.  It was terrifying.  It felt like I was turning the code into a nightmare of complexity.

The funny thing was, though, when I was all done, everything just fell into place and all of a sudden I had an elegant, easy-to-understand, maintainable design sitting there.  Sure there were a lot more code files than before, but each class did one small thing and did it well.  It was remarkable how much easier it was to reason about the new design than the old.
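
To make that concrete, here’s a minimal sketch of the kind of split I mean, in C#.  The class and interface names are hypothetical, not from my actual project: a report class that loads, formats, and emails a report all by itself, versus the same responsibilities pulled apart behind small interfaces so each piece can be tested (or faked) on its own.

    // Before (roughly): one class that loads, formats, and mails the report
    // itself, so testing any one piece means exercising all of it.
    namespace Before
    {
        public class ReportData { /* fields omitted */ }

        public class ReportPublisher
        {
            public void Publish(int customerId)
            {
                ReportData data = LoadData(customerId);
                string body = FormatAsHtml(data);
                SendEmail(body);
            }

            private ReportData LoadData(int customerId) { /* imagine a database query here */ return new ReportData(); }
            private string FormatAsHtml(ReportData data) { /* imagine string building here */ return "<html>...</html>"; }
            private void SendEmail(string body) { /* imagine SMTP details here */ }
        }
    }

    // After: the same work split into one small class (or interface) per
    // responsibility.  More files, but each one does a single thing, and a
    // unit test can hand ReportPublisher a trivial fake IReportSender
    // instead of a mail server.
    namespace After
    {
        public class ReportData { /* fields omitted */ }

        public interface IReportSource    { ReportData Load(int customerId); }
        public interface IReportFormatter { string Format(ReportData data); }
        public interface IReportSender    { void Send(string body); }

        public class ReportPublisher
        {
            private readonly IReportSource _source;
            private readonly IReportFormatter _formatter;
            private readonly IReportSender _sender;

            public ReportPublisher(IReportSource source, IReportFormatter formatter, IReportSender sender)
            {
                _source = source;
                _formatter = formatter;
                _sender = sender;
            }

            public void Publish(int customerId)
            {
                ReportData data = _source.Load(customerId);
                string body = _formatter.Format(data);
                _sender.Send(body);
            }
        }
    }

The second version is exactly the kind of file explosion that feels scary, but each class is now small enough to hold in your head, and the tests for each piece get dramatically simpler.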

Mental Juggling

Michael Hill wrote an excellent blog post on this a while back and he points out that in addition to the usual low-coupling, high-cohesion arguments in favor of small single-responsibility classes, there’s also an argument from cognitive science.  We can only hold a few “chunks” of information in our short-term memory at once, but we can get around this limitation by collecting closely related chunks under a single name, which then becomes just one chunk to store and track.

When we have a class that contains 500 lines of source code and does five different things, we have to think about all of that code more or less simultaneously.  It’s really difficult to handle all of that detail at once.  If we break up that class into five classes that each do one thing, we only have to track five class names in order to reason about the system.  Much easier.

Moderation in everything

Of course this can be overdone.  My recent zombie-fighting involved (among other things) chopping out a bunch of pointless classes that were apparently built to satisfy someone’s concept of proper architectural layers but didn’t really handle any responsibilities of their own.  They didn’t create useful names for chunks of code; they were just pointless abstractions.
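
As a hypothetical illustration (the names here are made up, not from the real code), this is the shape of class I was deleting: a “layer” whose every method forwards verbatim to the class below it.

    public class Customer
    {
        public int Id;
        public string Name;
    }

    public class CustomerRepository
    {
        public Customer GetCustomer(int id) { /* imagine a database query here */ return new Customer { Id = id }; }
        public void SaveCustomer(Customer customer) { /* imagine an update here */ }
        public void DeleteCustomer(int id) { /* imagine a delete here */ }
    }

    // The "business layer" class: every method forwards straight to the
    // repository, adding no validation, no policy, and no new name for any
    // concept.  Deleting it and calling the repository directly loses nothing.
    public class CustomerManager
    {
        private readonly CustomerRepository _repository = new CustomerRepository();

        public Customer GetCustomer(int id) { return _repository.GetCustomer(id); }
        public void SaveCustomer(Customer customer) { _repository.SaveCustomer(customer); }
        public void DeleteCustomer(int id) { _repository.DeleteCustomer(id); }
    }

A class like this isn’t a useful chunk; it’s just one more level of indirection to click through on the way to the code that actually does the work.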

It’s interesting that two apparently contradictory principles can be true at the same time: on one hand source code is a liability and should be minimized, but on the other hand more, smaller classes are better than fewer, bigger classes, even if that raises the file count and line count.

Legacy applications are like zombies

(I originally posted this on my MSDN blog.)

I should have posted this before Halloween when I was first thinking about it, but hey, better late than never.  Here’s what I wrote on Twitter:

I would add that the area I was working on went from 6,800 executable lines to 3,200 executable lines.  (The executable-line count is so much smaller than the raw text-line count because Visual Studio is really conservative about what it considers an executable line; interface definitions don’t count at all, for example.)  The smaller number includes all of the unit tests I added for the code I rewrote, so the amount of code I removed is even larger than it appears.

Believe me, these numbers have nothing to do with my skills as a programmer.  Rather, they reflect my, um, target-rich environment.  When a code base is yanked this way and that over several years without much thought given to the overall cohesive direction of the project, a lot of cruft builds up.  A little time (ok, a lot of time) spent thinking about what the code is actually supposed to accomplish clarifies a lot of things.