ASP.NET MVC 3, IDependencyResolver, and StructureMap

(I originally posted this on my MSDN blog.)

ASP.NET MVC 3 offers new facilities for easy dependency injection in the various parts of the application that you might want to implement yourself.  Brad Wilson discusses the new features in an extensive series of posts.  In the Beta version and beyond, the way you do this is by creating a class that implements IDependencyResolver and serves as an adapter to the IoC container of your choice.

I happen to like StructureMap as an IoC container, so I thought I’d wire it up for use in ASP.NET MVC 3.  IDependencyResolver isn’t exactly a complex interface to implement, but being the properly lazy developer that I am, I thought I’d look around on the web to see if anyone had already offered up an implementation for StructureMap.  I found a few different blog posts that all had pretty much the same code (such as Brandon Satrom’s), so I grabbed it and dropped it into my application.  I then tried to have one of my controllers pulled from the container and . . . met with utter failure.

Specifically, StructureMap kept insisting that it had no idea what I was talking about when my IDependencyResolver adapter asked for an instance of my TitlesController (i.e. container.TryGetInstance(serviceType) was returning null).  The framework would then fall back to trying to create an instance on its own which would throw an exception because this controller was designed for dependency injection and didn’t have a parameter-less constructor.

This was particularly aggravating because all the implementations I found on the web seemed to be the same, and apparently they were working for other people.  I beat my head against this problem for, um, longer than I should have, I guess, until I finally found an email thread started by Jimmy Bogard on the StructureMap users list that clarified the problem for me.  The issue is that while StructureMap’s Container.GetInstance() will create an instance of a concrete type without it being explicitly registered, Container.TryGetInstance() doesn’t do that.  Container.TryGetInstance() will give up and return null if the type you’re asking for isn’t explicitly registered as a plugin type.  Coincidentally, the very first thing I was trying to pull out of the container was a concrete type (my controller class).  The existing implementations will work for registered interfaces but not for controllers, which are requested by concrete type.
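
Here’s a minimal sketch of the difference, using TitlesController as a stand-in for any concrete controller type with constructor dependencies:

// Builds the concrete type on the fly, auto-wiring its constructor arguments:
object controller = container.GetInstance(typeof(TitlesController));

// Returns null, because TitlesController was never explicitly registered:
object missing = container.TryGetInstance(typeof(TitlesController));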

By the way, while researching all of this I ran across this thread in which Jeremy Miller points out that the original implementation of IDependencyResolver.GetServices was wrong, too.

So here’s my IDependencyResolver implementation for StructureMap which takes StructureMap’s non-intuitive behavior into account and also tries to minimize the number of exceptions that have to be thrown and handled:

(UPDATE: I removed some registration code from the constructor that was causing confusion and probably wasn’t necessary, at least not for resolving concrete controller types.  If you need to resolve interface types, you’ll need to register them with StructureMap in your app’s registration code, or perhaps in the constructor below.)

public class StructureMapDependencyResolver : IDependencyResolver
{
    readonly IContainer _container;
 
    public StructureMapDependencyResolver(IContainer container)
    {
        _container = container;
 
        // TODO: if you haven't registered necessary interfaces somewhere else, you'll need to do so here.
    }
 
    public object GetService(Type serviceType)
    {
        if (serviceType.IsClass)
        {
            return GetConcreteService(serviceType);
        }
        else
        {
            return GetInterfaceService(serviceType);
        }
    }
 
    private object GetConcreteService(Type serviceType)
    {
        try
        {
            // Can't use TryGetInstance here because it won’t create concrete types
            return _container.GetInstance(serviceType);
        }
        catch (StructureMapException)
        {
            return null;
        }
    }
 
    private object GetInterfaceService(Type serviceType)
    {
        return _container.TryGetInstance(serviceType);
    }
 
    public IEnumerable<object> GetServices(Type serviceType)
    {
        return _container.GetAllInstances(serviceType).Cast<object>();
    }
}
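
To put the resolver to work, you hand an instance of it to MVC at application startup.  Here’s a minimal sketch of what that might look like in Global.asax, using MVC 3’s DependencyResolver.SetResolver and StructureMap’s Container (ITitleRepository and SqlTitleRepository are hypothetical stand-ins for whatever registrations your app actually needs):

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    // ... default route and filter registration elided ...

    var container = new Container(x =>
    {
        // Hypothetical registration: concrete types such as controllers resolve
        // without any registration, but interface types must be registered explicitly.
        x.For<ITitleRepository>().Use<SqlTitleRepository>();
    });

    DependencyResolver.SetResolver(new StructureMapDependencyResolver(container));
}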

 

You Get What You Measure

(I originally posted this on my MSDN blog.)

I’ve been participating in more conversations internally about promoting a team-oriented culture at Microsoft.  Microsoft has a strong individual-oriented culture which works well for many things but doesn’t work so well for agile software development.  The question of paying (at least partly) based on team results rather than individual results came up, and it’s an interesting thing to think about.

Many groups at Microsoft strongly focus individual-contributor evaluations on individual results, not team or product results.  For instance, a developer might be evaluated on the number of product features he delivered and the number of bugs found in his code.  That’s understandable in the sense that individual results are easy to define and easy to measure, but it’s a huge problem in that there’s no guarantee that successful execution on a random collection of individual commitments will result in working software.

That is to say, all of the PMs can successfully write specs, all of the devs can successfully write code, and all of the testers can successfully find bugs, and yet the business fails to make money because the parts didn’t come together into a cohesive whole.  Of course, when you get to the senior manager level you start to see commitments that explicitly say, “Ship a successful product,” but ICs don’t usually have that in their goals and aren’t incentivized to make decisions that might promote the “ship a successful product” goal at the expense of their own individual goals.

The root issue here is the general failure mode of all metrics.  If you define a metric and evaluate people on it, then people will optimize for delivering on that metric.  When your metrics represent only secondary attributes rather than your true core goal, then you’re at high risk of people optimizing in ways that actually hurt your core goal rather than help it.  Metrics are useful in direct proportion to how closely they represent the core goal.  At Microsoft, our core goal is to deliver successful software products.  Our evaluation and compensation system ought to be tied as closely to that core goal as we can possibly get it.

Unfortunately, our core goal is hard to evaluate at the individual level.  Measuring achievement of the core goal itself is pretty straightforward, but mapping the ultimate end result back to the individual contributions that people made to get you there is really, really hard to do in an objective, measurable fashion.  When we try to solve that problem we inevitably end up measuring indirect effects rather than primary causes, which makes people optimize on the wrong things, and we’re right back where we started.

It would be great if we could come up with an evaluation and compensation system that would explicitly encourage people to stop and ask themselves, “What is the one thing I could do today that would have the greatest impact on shipping a successful product,” rather than asking themselves, “What did I write down in my commitments last summer that I haven’t done yet?”  That difference in mentality may be subtle but it has far-reaching implications in behaviors and the choices we make.

Learn The Why, Not Just The How

(I originally posted this on my MSDN blog.)

In another conversation on an internal email thread, someone asked some newbie questions about Scrum daily standup meetings, like do they have to be every day or could they be done every three days or so?  There were some good replies that encouraged the standard practice of daily meetings, then someone else mentioned that since his team sits together in one common team room, they communicate all day long and they’ve found the daily standup meeting to be not really necessary at all.

I thought that was an interesting point and replied:

That’s a good example of why it’s important to understand the goals that any given methodology is trying to achieve, not just the ceremonies it uses to achieve them.  The goals in this case are a short feedback cycle, a shared understanding of what all team members are doing, and identification of the most important things to be working on right at this moment.  The mechanism you use is really beside the point if you’re able to effectively drive those goals.  Scrum’s mechanism of the daily standup meeting is very effective for the vast majority of Microsoft teams, who have individual offices.  If you’re fortunate enough to have extremely high communication going on all day long, then sure, I can see how a daily standup would not be as important.

Anyone who tries to implement Scrum (or any other new methodology) without understanding the end goals and how the individual practices are intended to express those goals is pretty much doomed to failure.  I suppose you can start out that way (I guess I did, come to think of it) but you can’t continue for very long that way.  You have to understand the “why”, not just the “how”.

Time and time again when I see a person or a team who is fumbling along with agile practices (whether that’s Scrum, XP, TDD, design patterns, or anything else) and it’s not really working for them, it’s usually because people are just going through the motions they’ve read about without any real understanding of the principles, the philosophy, and the goals involved.  There’s usually some cargo cult engineering going on.  This causes at least two types of serious problems:

  1. Either they slavishly follow every detail of the methodology without considering the unique circumstances of their project and adapting certain aspects of the process to best fit what they need,
  2. Or they start slicing and dicing the methodology, throwing out whatever they don’t immediately like or whatever seems difficult to implement without having any sort of plan to accomplish the same goals by other means.

The Agile Manifesto calls for favoring individuals and interactions over processes and tools.  No methodology should be followed to the letter without thought, but most methodologies have important, specific reasons for every practice they prescribe.  The only way we can favor people over processes and still deliver working software is if we understand the “why” of our processes so we can intelligently mold them to fit our people.

Multi-task At The Team Level, Not The Individual Level

(I originally posted this on my MSDN blog.)

My current work environment is pretty typical of a lot of tools teams, IT shops, and similar groups; we have lots of relatively small projects to build and maintain.  In fact, we have far more projects than we have people in each discipline (Dev, Test, PM).  This obviously means that we don’t have the luxury of assigning individuals to one and only one project for long periods of time; we have to slice up our time and devote attention to multiple projects in turn.  Given that’s the reality of our situation, though, there are still different ways we can go about doing that, some healthier than others.  We can multi-task at either an individual level or at a team level.

Individuals

Multi-tasking at an individual level means that Joe is splitting his time between the Widget project and the Froberg project, while Sally is splitting her time between the Froberg project and the Walleye project.  Every individual has a different portfolio of projects that they’re responsible for.  The benefit of this strategy is that it gives us maximum flexibility to allocate scarce engineers in a way that seems to cover all the holes on paper.  There are several very expensive costs, however:

  • People rapidly shift from project to project which means that no one develops a deep and strategic understanding of any one project.
  • Because people are always coming and going on a given project, it’s hard to develop a sense of team identity or to develop productive working relationships with colleagues.
  • Individuals are often left to make their own choices from day to day about which project is most in need of their attention at the moment.  There’s no strategic thinking about priorities or about how to spend the overall time budget of the group.

When we multi-task at an individual level, we often end up building plans for our projects that assume 100% allocation when that’s not really the case.  For instance, say that we have work happening on the Widget project and the Froberg project at the same time.  From a group planning and budgeting perspective, it might look like we have one team devoted 100% to Widget and another team devoted 100% to Froberg and that the work should be able to proceed in parallel.  But in reality that’s not the case because Joe is double-booked for both projects.  That puts him in an awkward position because now he has to decide on a day-to-day basis which is more urgent: Widget or Froberg.  That decision isn’t being made by the project owners and it’s not being made strategically; it’s being made individually and tactically.  Despite people’s best intentions, that kind of system usually devolves into a last-in, first-out work queue, i.e. we work on whatever was most recently brought to our attention.

Teams

On the other hand, we could multi-task at a team level.  In other words, we could build a stable, consistent Dev/Test/PM team that has explicit responsibility for both the Widget and Froberg projects, while another stable team has responsibility for Walleye and something else.  We still have multiple projects that need to be worked on, and we still need to make priority decisions between them, but now the trade-offs are visible in the budgeting, planning, and strategic thinking realms.  Say the team is working on a new Widget feature.  Oops, we have an urgent Froberg bug that needs to be fixed!  What’s the cost of handling that?  Well, the cost is that we postpone the Widget project while the whole team shifts its attention to Froberg.  Is it worth it?  Maybe, maybe not, but now we can have a proper discussion about it.  It’s very clear and obvious.  You can’t just sneak it in under the radar.

The costs were there in the individual model, too – if Joe is working on Froberg today then by definition he’s not helping Widget make progress – but the costs were hidden.  The individual model seems easier to work with in the short term because it avoids the hard priority choices, but those choices still need to be made, and it’s best if they’re made explicitly rather than on an ad hoc basis.

Agile Means Teams

Of course there are multiple agile project management methodologies that emphasize the team as the unit of work.  Scrum prescribes a team, a backlog, and a sprint full of work that is locked.  If something urgent comes up, it gets prioritized into the product backlog for the next sprint and everything else necessarily shifts down in the stack.  The benefit and the cost are clear to everyone.

Kanban describes a similar system but emphasizes continuous single-piece workflow which is probably a better fit than Scrum if you’re juggling lots of small, unpredictable jobs across multiple projects.  If something new comes up, slot it into the backlog and everything else gets bumped down.  The team continually pulls from the top of the list and you can forecast a pretty accurate time to delivery for any point in the backlog.

Even groups who like agile methods and are trying to practice them sometimes have a hard time breaking the habit of treating the individual as the unit of work.  Every time they slip back into that old habit everything gets much more unpredictable and velocity falls dramatically.  It’s inevitable.  The way to make agile methods work is to keep the team together as a cohesive unit and feed them the work you want the most.  Don’t steal individuals for side projects.  Multi-task at the team level, not the individual level.

(Whew, I wrote that whole post without once using the word “resource”.  It was harder than it sounds!  🙂 )

 

When Is A Sprint A Failure?

(I originally posted this on my MSDN blog.)

Here’s another question that was asked on one of our internal distribution lists:

When do we consider a sprint to be a failure?  I don’t know the answer and have no clue about it.  We delivered 7 out of 9 stories we committed with all the release criteria, but we failed to deliver two stories due to under-estimation of the stories we picked.  Is this a failed sprint?

This stirred an interesting discussion about sprint commitments, continuous improvement, estimation, retrospectives, and numerous other things.  This was my contribution to the thread:

The phrase “sprint failure” in Scrum has a pretty specific meaning to a lot of people.  It means that at some point before the end of the sprint, you realize that your sprint plan has gone so far off the rails that the best thing you can do is to abort the sprint before it’s completed and build a new plan.

What if halfway through your sprint you realize that you’re probably going to be able to deliver only 7 of your 9 stories?  Should you abort the sprint at that point and plan a new one?  I guess it depends on the specifics of the stories but almost always I’d say no.  You’re a little off the plan but not drastically so.  That’s not uncommon.  Keep going, deliver value, and afterwards figure out if there are improvements you can make for next time.

The original poster is apparently not talking about aborting a sprint halfway through.  He’s talking about working until the end of the sprint then labeling the results.  I think he needs to define what he means by “failure” here.  Does he mean, “we should reflect on what happened and look for opportunities to improve?”  Sure, I agree with that, though I’d probably stay away from the term “failure” because it can be easily misunderstood.  Does he mean, “there are negative consequences imposed on the team?”  I emphatically disagree with that.  Not delivering 2 of 9 stories is not an outcome that should be punished.

If it’s treated as something to be punished then I guarantee that the team will adopt counter-productive behaviors.  In this case, they’ll probably just low-ball all of their commitments so that there’s very little possibility of failing to deliver.  This doesn’t help increase efficiency and reduce waste, though, because there’s no learning and improvement going on, just self-defense strategies.

One of the potential weaknesses of Scrum is that it’s easy to get too fixated on the concept of a sprint commitment and whether you fulfilled or failed to fulfill that commitment.  Some measure of predictability is important, sure.  But no estimation technique is foolproof and you can waste an awful lot of time trying to perfect something that is inherently imperfect.  Always remember that the goal is simply “good enough” estimation because estimation isn’t value, it’s just a means to an end.

I think this is one of the reasons why there’s been a lot of attention paid to Kanban and single-piece flow systems lately: Kanban gets rid of the sprint timebox and all of the potential distractions that go along with it.  I personally feel that Scrum and Kanban aren’t mutually exclusive, and if you merge the best of both systems you’ll knock the rough edges off of both.  (Some people use the term “Scrum-ban”.)  One of the best influences Kanban can have on Scrum is to put the concept of a sprint commitment into its proper perspective; that is, it’s a device for short-term planning, nothing more.  Relax!

The sprint retrospective is a powerful tool for change and improvement but it only works if you approach it from a positive perspective of “how can we become better?” rather than a negative perspective of “how badly did we fail?”.

Always beware the Law of Unintended Consequences.  Whenever you’re working with a system that contains humans, you’ll find that some things you implement for positive reasons end up having negative results because they reinforce the wrong behaviors.  I think an over-emphasis on sprint commitments, and an insistence on labeling each sprint as a success or failure, is one of those things.

 

GenesisEngine: Behavior-oriented Language

(I originally posted this on my MSDN blog.)

As I wrote in my previous post, BDD is largely about preserving the flow of intent from your user stories to your unit tests (specifications, in BDD parlance) to your product code.  As developers, we’re in the habit of switching over from user intent (features that solve problems) to developer intent (mechanics of the code) when we write tests, but preserving as much user intent as possible all the way through is a lot better for maintainability and it helps drive better initial designs, too.  It’s the same as the ubiquitous language in DDD.  Don’t abandon the ubiquitous language; stick with it as long as possible.

In other words, don’t focus on how the code works, think about how the system behaves.  The code will follow naturally.

It turns out, though, that it’s surprisingly hard to do this well.  At least, I find that I often have to remind myself to leave implementation language out of my context and specification names and to focus on how a user would describe the behavior in domain language.

I recently prepared a presentation on MSpec for my group at work and I used the Camera specs in the GenesisEngine project as an example of specs written in the behavior-focused style.  There’s nothing like putting your work in front of other people to make you take a fresh look at it with a critical eye!  As I read over my Camera specs, I realized that I had let some implementation language sneak in when I wasn’t looking.  In other places I had been pretty vague about the behavior that was actually expected.

For instance, consider this context:

[Subject(typeof(Camera))]
public class when_view_parameters_are_set_by_look_at : CameraContext
{
    Because of = () =>
        _camera.SetViewParameters(new DoubleVector3(0, 1, 1), DoubleVector3.Zero);
    It should_set_the_camera_location_correctly = () =>
        _camera.Location.ShouldEqual(new DoubleVector3(0, 1, 1));
    It should_set_the_camera_yaw_correctly = () =>
        _camera.Yaw.ShouldEqual(0f);
    It should_set_the_camera_pitch_correctly = () =>
        _camera.Pitch.ShouldEqual((float)(-Math.PI / 4));
    It should_set_the_camera_roll_correctly = () =>
        _camera.Roll.ShouldEqual(0f);
    It should_set_the_camera_view_transformation_correctly = () =>
        _camera.OriginBasedViewTransformation.ShouldEqual(
            GenerateOriginBasedViewMatrix(_camera.Location, _camera.Yaw,
                                          _camera.Pitch, _camera.Roll));
}

It should set location/yaw/pitch/roll/transformation correctly?  What the heck does that mean?  That tells very little about what my intent actually was.  I was just being lazy and didn’t want to bother with trying to carefully describe the intent.

Actually, I bet what I was thinking was something like, “Hmm, my expectation here is that these properties should be set to specific numbers.  I don’t want to state those numbers in the spec names, though, because that’s a test detail.  I’ll just say it should set the properties ‘correctly’ because that sounds more generalized.”

But what was my real intent for the behavior of the camera when the view parameters are set via a look-at point?  Well, the real intent is that the camera should go to the requested location and set its orientation to whatever values are needed to face toward the look-at point from that location, and finally generate a new view transformation based on the new camera state.  Ok, now that’s a description that’s phrased in terms of the problem domain, not the implementation domain.  Let’s see if we can improve those specs:

[Subject(typeof(Camera))]
public class when_view_parameters_are_set_by_look_at : CameraContext
{
    Because of = () =>
        _camera.SetViewParameters(new DoubleVector3(0, 1, 1), DoubleVector3.Zero);
    It should_go_to_the_requested_location = () =>
        _camera.Location.ShouldEqual(new DoubleVector3(0, 1, 1));
    It should_set_the_yaw_to_face_toward_the_look_at_point = () =>
        _camera.Yaw.ShouldEqual(0f);
    It should_set_the_pitch_to_face_toward_the_look_at_point = () =>
        _camera.Pitch.ShouldEqual(-MathHelper.Pi / 4);
    It should_set_the_roll_to_face_toward_the_look_at_point = () =>
        _camera.Roll.ShouldEqual(0f);
    It should_generate_a_view_transformation_for_the_current_state = () =>
        _camera.OriginBasedViewTransformation.ShouldEqual(
            GenerateOriginBasedViewMatrix(_camera.Location, _camera.Yaw,
                                          _camera.Pitch, _camera.Roll));
}

That’s better.

In the location spec, I got away from the implementation language of “setting the location (property)” and used the domain language of “going to a location (in the world)”.  Very similar, but different perspectives.  For the orientation components, I described the intent of facing in a particular direction.  And for the view transformation, I called out the fact that the transformation is dependent on the new state.

Now, a lot of you may be looking askance at me right now.  Isn’t this nitpicking pretty silly?  Well, sure, it’s not earth-shattering or anything.  I didn’t fix a bug or implement a new feature with these changes.  But I think I made the code a little bit cleaner, and that’s the real reason why I started this project in the first place.  It’s not about terrain rendering, it’s about fine-tuning my techniques.  Lesson learned: avoid writing specs that say something should be done “correctly”.  Describe what the correct behavior actually is.

The diff of the improvements I made to the Camera specs can be found in two parts here and here.
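
(An aside for anyone reading the specs out of context: the CameraContext base class and the GenerateOriginBasedViewMatrix helper aren’t shown above.  Roughly, the context creates the camera under test and the helper recomputes the expected view matrix from the camera’s state.  The sketch below is purely hypothetical, with the matrix math invented for illustration; the real versions live in the GenesisEngine source.)

public class CameraContext
{
    protected static Camera _camera;

    Establish context = () =>
        _camera = new Camera();

    // Hypothetical helper: rebuilds the expected view matrix from yaw/pitch/roll.
    // The location parameter is unused here because the matrix is origin-based;
    // translation would be handled elsewhere.
    protected static Matrix GenerateOriginBasedViewMatrix(
        DoubleVector3 location, float yaw, float pitch, float roll)
    {
        Matrix rotation = Matrix.CreateFromYawPitchRoll(yaw, pitch, roll);
        Vector3 forward = Vector3.Transform(Vector3.Forward, rotation);
        Vector3 up = Vector3.Transform(Vector3.Up, rotation);
        return Matrix.CreateLookAt(Vector3.Zero, forward, up);
    }
}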

BDD Unit Testing is Not That Complicated

(I originally posted this on my MSDN blog.)

One of the first sessions at the Alt.Net Seattle conference was one on Behavior-Driven Development, or BDD.  Actually, we had three suggested sessions all related to BDD that we combined into one session, which was probably a bit of a mistake in hindsight because we had a couple different groups of people looking for different things (BDD at the unit test level vs. BDD at the acceptance test level), which caused a bit of controversy.

I think that at the unit test level, BDD really isn’t that different than normal TDD that we all know and love.  All it really brings to the table is a strong emphasis on the arrange-act-assert structure and an even stronger emphasis on the behaviors you’re trying to build in your code rather than the mechanics of the code itself.  In other words, BDD asks that you think in terms of what the user wants to do and how you’re going to enable them to do it.  You give clear, explicit names to each scenario that you need to implement and you also give clear names to each expectation that you have for the scenario.  The overall point is simply to write tests that people can actually read.
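
As a quick illustration, here’s what that looks like in practice.  The Account type and the numbers are invented for the example, and the specs use MSpec syntax, but any framework works; the point is that the scenario and each expectation read as plain sentences:

public class Account
{
    public Account(decimal balance) { Balance = balance; }

    public decimal Balance { get; private set; }

    public bool Withdraw(decimal amount)
    {
        if (amount > Balance)
            return false;

        Balance -= amount;
        return true;
    }
}

[Subject(typeof(Account))]
public class when_a_withdrawal_exceeds_the_balance
{
    static Account _account;
    static bool _accepted;

    Establish context = () => _account = new Account(50m);     // arrange
    Because of = () => _accepted = _account.Withdraw(100m);    // act

    It should_reject_the_withdrawal = () =>                    // assert
        _accepted.ShouldBeFalse();
    It should_leave_the_balance_unchanged = () =>
        _account.Balance.ShouldEqual(50m);
}

Compare that to a conventional test named TestWithdraw1: when when_a_withdrawal_exceeds_the_balance fails, the maintainer knows exactly which behavior broke.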

Anyway, Charlie Poole (one of the developers of NUnit) made a comment to the effect of, “Well, I’ve been doing that sort of thing in my unit tests for years already.  Why do we even have to have a special name for this?”  I also noticed a lot of other people asking things like, “Well, what about SOLID principles?  Do they still apply?  How about mocking frameworks or IoC containers?  Can I still use those?”

This confusion is really unnecessary, and Charlie’s right: it’s unfortunate that we even have a name for it that makes it sound like it’s something different from TDD.  At least at the unit test level, BDD is not a brand new way of writing tests.  It’s the same old red-green-refactor workflow that we’ve always used, just with a stronger emphasis on expressing customer-oriented intentions, so that when other developers have to pick up your code and maintain it later, they’ll know why your tests exist and what user value they map to, and when a test breaks, it’ll be obvious what needs to be fixed.  You still use all the same state-based and interaction-based testing techniques in your tests and the same SOLID principles in your product code.  Nothing changes.

Relax – it’s not that complicated.

 

My BDD Naming Macro

(I originally posted this on my MSDN blog.)

Over the years several people have shared the Visual Studio macros they use to make the BDD boxcar naming style easier to work with.  I thought I’d add my own, not because it’s any better than the others but because it’s built for a slightly different workflow and someone might find it useful.

First, here’s the macro:

Imports System
Imports System.Windows.Forms
Imports EnvDTE
Imports EnvDTE80
Imports System.Diagnostics

Public Module BDDNaming

    Public Sub ReplaceSpacesInTestNameWithUnderscores()
        If DTE.ActiveDocument Is Nothing Then Return

        Dim selection As TextSelection = CType(DTE.ActiveDocument.Selection(), EnvDTE.TextSelection)

        If selection.IsEmpty Then
            ReplaceSpacesAtEditPoint(selection)
        Else
            ReplaceSpacesInSelection(selection)
        End If
    End Sub

    ' Lower-cases the selected text, turns spaces into underscores, and strips quote marks.
    Private Sub ReplaceSpacesInSelection(ByVal selection As TextSelection)
        Dim text As String = selection.Text
        text = text.ToLower()
        text = text.Replace(" ", "_")
        text = text.Replace("""", String.Empty)
        selection.Text = text
    End Sub

    ' Extends the selection leftward from the caret until it finds the opening
    ' quote mark (or gives up at the start of the line), then converts it.
    Private Sub ReplaceSpacesAtEditPoint(ByVal selection As TextSelection)
        selection.CharLeft(True, 1)
        While selection.Text(0) <> """" AndAlso (Not selection.ActivePoint.AtStartOfLine)
            selection.CharLeft(True, 1)
        End While

        If selection.Text(0) = """" Then
            ReplaceSpacesInSelection(selection)
            DeleteTrailingQuote(selection)
        Else
            selection.CharRight(False, 1)
        End If
    End Sub

    ' Removes the closing quote mark, if one follows the converted name.
    Private Sub DeleteTrailingQuote(ByVal selection As TextSelection)
        selection.CharRight(True, 1)
        If selection.Text(0) = """" Then
            selection.Delete()
        Else
            selection.CharLeft(False, 1)
        End If
    End Sub

End Module

I usually bind this macro to Alt+[dash] because it’s easy to remember and it’s not bound by default to anything important.

To use it, I start by typing an open quote mark where I want to type a BDD context or spec name.  I use ReSharper, so it automatically inserts two quote marks for me, but the macro works equally well without ReSharper.  The quotes prevent IntelliSense from freaking out as I start to type the context or spec name:

    public class ""

 

Then I type the context or spec name as a normal sentence using the space bar like so:

    public class "when a message with no eligible listeners is sent"

 

Then I hit Alt+[dash] to convert the sentence to a proper boxcar-style identifier:

    public class when_a_message_with_no_eligible_listeners_is_sent

 

I can also highlight any arbitrary piece of text and hit Alt+[dash] to convert the spaces to underscores.

Other people like to put a lot of boilerplate code into the macro to make it easier to set up contexts and specs quickly, but I prefer this macro that does just one thing and does it well.  Hopefully someone else will find it useful too!

 

Hiring Managers for Agile Teams

(I originally posted this on my MSDN blog.)

Someone asked a question along the lines of, “Let’s say you were hiring senior managers for a group made up of multiple agile teams.  What qualities would you look for in an interview?”

The very first thing I’d look for would be a strong understanding of and belief in agile principles and philosophy.  There are a lot of people out there who say, “Yeah, sure, agile’s great!” but they panic and revert to non-agile strategies when the going gets tough.  Does the candidate deeply understand not just how agile works mechanically, but why it works and the factors that can help or hinder success in that model?  Do they know how to fix it when it breaks?

The absolute worst thing you can have when in a difficult situation is a senior manager who doesn’t truly trust agile and knee-jerks back to traditional methods when the risks get high (which of course is precisely when traditional methods don’t work).  A thrashing manager can absolutely destroy even a strong agile team.

I guess I could drill down into specific beliefs and name things like support for self-directed teams, a passion for delivering real value rather than just sticking to a pre-determined plan, and a willingness to blur traditional role lines (dev, test, PM) in the interest of a cohesive team.

Of course, many of the usual things like the ability to hire and grow quality people, real-world experience with shipping software, a strategic understanding of the business environment, etc, are still critical.  That stuff doesn’t go away.

 

Code Reviews on Agile Teams

(I originally posted this on my MSDN blog.)

Formal Code Reviews

Formal code reviews are well known as an effective tool for reducing bugs and improving the quality of your software.  There’s lots of literature out there demonstrating that the cost of finding and fixing a bug grows exponentially the later in the product lifecycle it’s found, and code reviews are by far the most efficient way to catch defects early.  Unfortunately, the way code reviews are usually implemented on traditional software development teams doesn’t translate well to agile teams and usually turns out to be counter-productive.

Here’s how the reasoning for code reviews usually goes:

Premise: It’s useful and important to have multiple people look at a piece of code because another person may easily see and correct flaws that the author missed.

Secondary Premise: Without a formal process, it’s not likely that anyone except the author will ever look at a given piece of code.

Conclusion: we need a formal process to ensure that multiple people look at each piece of code.

Code reviews are most useful in the kind of traditional engineering environment where Dev A owns Feature A, Dev B owns Feature B, and they rarely cross the boundaries to look at each other’s code in the course of doing their jobs.  Sometimes it’s even considered rude to make changes to code in someone else’s feature area.  If you don’t have a formal process for getting multiple sets of eyes on a piece of code in that kind of environment then sure, it’s quite possible that no one besides the author will ever look at that piece of code.  Or if they do look, they’re not going to strongly criticize it and they definitely won’t clean it up if it has problems.

So what’s the problem with formal code reviews?  Well, in practice there are two problems.  First, if you have a formal code review system that says people should get their code reviewed after they check it in, then it’s probably not going to happen.  It’s a pain, people conveniently “forget” to do it, and it’s the first thing to get pitched overboard in a schedule crunch.  Don’t even bother, because it doesn’t work.

Ok, so to fix that you can make the code reviews a prerequisite for checking in.  Microsoft product groups use this strategy a lot and often establish elaborate rules that say you must get sign-off from a code reviewer before you can check in to the repository.  This effectively guarantees review of 100% of your source code, which is great, but it causes another (huge!) problem.

Frequent Check-ins

Agile teams have found that frequent check-ins are a very important part of the agile development process.  By frequent I don’t mean once a week or even once a day.  I mean once an hour or even more.  Every time you add a small piece of new functionality (ideally defined by a set of tests and the code that makes them pass), you should check in.  If it builds cleanly and doesn’t cause regressions, check it in.  What are you waiting for?  This has several benefits:

  1. It promotes the spirit of “continuous integration” where the whole team is working on a single shared reality rather than individual copies of reality that need to be merged later.  It prevents misunderstandings and makes it easy to get the latest code.
  2. Smaller check-ins make it much easier to find and fix a build break or to roll back a mistake.
  3. It gives the team a sense of momentum and accomplishment.
  4. It gives project managers good visibility into daily progress and how the team is doing.

We should do everything we can to promote frequent check-ins.  Frequent check-ins are good for traditional teams but they’re absolutely critical for agile teams.  There’s lots of literature out there on the benefits of checking in frequently.

Unfortunately a formal code review process works directly against frequent check-ins.  I don’t want to interrupt my co-workers 10+ times a day to look at my stuff.  I don’t want to interrupt my own flow to find an available code reviewer.  Formal, pre-check-in code reviews simply don’t scale to an agile system of frequent check-ins.  They’re incompatible.

(On a tangential note, I’ve observed teams where the check-in process is so laborious that they resort to using TFS shelvesets to pass around the “current” code to people who need it and it can take anywhere from a couple of days to more than a week for those shelvesets to get promoted to actual check-ins.  People get confused about which is the most recent shelveset and code gets lost.  This is a terrible way to work!)

XP to the Rescue

Going back to the reasoning behind formal code reviews, I strongly agree with the major premise.  It’s critical to get multiple sets of eyes on every piece of code because it drastically reduces bugs and improves quality.  However, it’s entirely possible to do software development in such a way that invalidates the secondary premise and largely eliminates the need for formal code reviews before checking in.  It’s not true that formal code reviews are the only way to get multiple eyes on the code.  There are multiple practices found in the Extreme Programming (XP) philosophy that help us do that in other, more efficient ways.

Shared Ownership

The first practice is to promote shared ownership of the code.  Instead of creating a team of specialists where each dev has exclusive ownership of a feature area, we want to create a team of generalists where everyone has the flexibility to work on pretty much every area of the product.  That doesn’t mean that we won’t have subject matter experts; of course some devs are going to be particularly knowledgeable about threading, or networking, or databases, or whatever.  But that area expertise shouldn’t translate into exclusive ownership of a piece of code.  The team as a whole owns the whole code base.  The practical implication of this is that everyone on the team has permission and is expected to work on code they didn’t write and to fix it, refactor it, and clean it up when they do.  This means that over time you’ll get multiple sets of eyes on most of the code, which accomplishes the stated goal of a code review.

Swarming

The second helpful practice is to have the team “swarm” one or a few stories at a time rather than having each team member work on a completely separate story in parallel.  This is in the same vein as the first practice; stories aren’t owned by individuals, they’re owned by the team.  Not only that, but we want to encourage devs to actively work on the same stories at the same time so that when dev A checks something in, dev B will be reading, using, refactoring, and extending that code almost immediately.  This does an even better job of accomplishing the stated goals of a code review.

Pair Programming

The third helpful practice is pair programming where you literally have two devs collaborating to write a single piece of code.  People have referred to this (only partially tongue-in-cheek) as a “real-time code review”.  Pair programming is a complicated topic and confers many benefits but yes, it entirely eliminates the need for formal code reviews before every checkin.  This is the natural and logical conclusion when you follow the XP philosophy to its end.

Unfortunately, pair programming is not something you can simply slipstream into a team’s daily rhythm.  It’s a radical philosophical shift in the way we do development, and lots of people are deeply uncomfortable with it at first, for good reason.  It’s a huge change and it’s possible to do it badly so that you don’t see any benefit from it.  There’s lots of literature out there on how to do it well, but it takes time and practice.  If you’re transitioning a team from traditional software development over to agile development, you might consider starting first with shared ownership.  That by itself can be a bit of a shock to people.  Let your team digest that for a while, then try swarming, then pair programming.

Code reviews are one of the most powerful tools in a developer’s toolbox, but code reviews on an agile team probably won’t look the same as on a traditional team.  Keep in mind the goal you’re trying to accomplish and look for ways to get there without sabotaging other important techniques.