Further thoughts on short sprints

(I originally posted this on my MSDN blog.)

I wrote about my preference for short Scrum sprints last month, where “short” means no more than two weeks.  Recently there was another internal email thread on roughly the same subject, where several people raised concerns about moving their teams to two-week sprints.  I’ve combined, paraphrased, and edited some of the questions and included my answers below.

The environment isn’t nailed down and requirements are still up in the air.  I need four weeks in order to maintain control.

When you have a lot of uncertainty in your project and your requirements are up in the air, that’s precisely the time when you want to have short sprints so that you have frequent adjustment points.

The whole idea of a sprint is that you build a crisp short-term plan based on facts that you know and can rely on.  If there’s a lot of uncertainty in your environment, which is easier: to build one plan that will still be accurate and useful four weeks later or to build a two-week plan, execute it, and then build another two-week plan using additional facts that you’ve learned during the first sprint?  Long sprints are for stable environments where the need for flexibility and responsiveness is low.  Short sprints are for changing environments where you don’t know a lot right now but in two weeks you’ll know more.

Planning for four-week sprints is painful as it is; I don’t want to do that twice as often!

With shorter sprints, your planning for each sprint should get shorter as well.  In fact, I’ve found that when going from four-week to two-week sprints your planning can be reduced by more than half, because you simply don’t need all of the process that a four-week sprint requires.

For example, in a four-week sprint it’s important to carefully estimate the number of hours required for each task, track the hours you actually spend on each task, and generate a burn-down chart so that you can tell whether you’re tracking to the plan.  Most teams have a checkpoint about halfway through the sprint where they evaluate the burn-down chart and adjust the sprint commitments depending on how they’re doing, because four-week plans rarely survive unchanged.
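To make that overhead concrete, here’s a minimal sketch of the burn-down arithmetic a four-week sprint calls for.  All of the estimates and daily numbers below are invented for illustration:

```python
# Minimal burn-down sketch: compare the hours actually remaining against
# an ideal straight-line burn from the total estimate down to zero.
# All estimates and daily numbers are invented for illustration.

sprint_days = 20        # a four-week sprint, in working days
total_estimate = 400    # sum of all task estimates, in hours

# Hours of work remaining at the end of each day so far (days 1-10).
remaining_by_day = [400, 378, 360, 345, 330, 310, 290, 272, 255, 240]

def ideal_remaining(day):
    """Hours that would remain on the ideal straight-line burn-down."""
    return total_estimate * (1 - day / sprint_days)

day = len(remaining_by_day)
actual = remaining_by_day[-1]
delta = actual - ideal_remaining(day)
print(f"Day {day}: {actual}h remaining, ideal {ideal_remaining(day):.0f}h, "
      f"delta {delta:+.0f}h")
```

Here the hypothetical team is 40 hours behind the ideal line at the halfway point – exactly the kind of mid-sprint signal the burn-down chart exists to surface, and exactly the bookkeeping that a two-week plan lets you skip.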

Well, instead of doing that, how about if you skip the burn-down chart and just build a new plan at the two-week point?  You can save all the effort of detailed tracking and you get a more up-to-date plan as well.  Remember, building a two-week plan isn’t nearly as expensive as building a four-week plan so you’re doing about the same amount of planning (or probably less) overall.

How far off track can you get in two weeks, anyway?  Certainly not as far as in four weeks, so there’s not as much need for oversight.  And if you do start to get wildly off-track, just glancing at a sprint task board or a list of completed stories will probably tell you that you’ve got problems because the total number of items is small enough that you can understand them at a glance and eyeball it pretty reliably.

Meeting proliferation – yuck!

The same goes for meetings.  There may be more meetings, but each one is much shorter because there’s less to cover.  If it’s hard to get all stakeholders together for a demo every two weeks, you might schedule a big public demo every four weeks (after two two-week sprints).  Short meetings tend to be more productive because they’re crisp and people don’t get burned out.  Four-hour planning meetings (or six hours, or twelve!) are way too painful to be productive.

I have multiple teams in different geographic locations that need to stay in sync.  Won’t short sprints hinder that?

Syncing multiple teams ought to be easier with short sprints because no team is ever more than two weeks away from being able to make major adjustments to its plan in response to the needs of another team.  I’m not sure how long sprints would help the syncing issue.  Long sprints give you the illusion that you know exactly where you’re going to be four weeks from now, but you probably don’t.  You can make pretty decent predictions for several weeks or even several months into the future using product burn-down velocity, and that framing reminds everyone of what they really are – predictions, not guarantees.  Now, predictions are useful.  I’m not saying that you shouldn’t think about the future at all.  I’m just saying that you need to be realistic about your level of certainty.
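As a rough sketch of what a velocity-based prediction looks like (the velocities and backlog size here are invented for illustration):

```python
import math

# Velocity-based forecast: divide the remaining backlog by the team's
# average velocity to predict how many sprints are left.  The numbers
# are invented for illustration; the result is a prediction, not a guarantee.

recent_velocities = [21, 18, 24, 19, 22]  # story points per two-week sprint
remaining_backlog = 130                   # story points left in the backlog

avg_velocity = sum(recent_velocities) / len(recent_velocities)
sprints_left = math.ceil(remaining_backlog / avg_velocity)

print(f"Average velocity: {avg_velocity:.1f} points/sprint")
print(f"Predicted sprints remaining: about {sprints_left}")
```

The point is the framing: the output is phrased as “about” so many sprints, which keeps everyone honest about the level of certainty involved.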

I’m not sure my team is ready to transition to two-week sprints.  As Scrum master, should I just mandate it?

I would not try to impose two-week sprints over the objections of the team.  One of the fundamental principles of Scrum is that the team should be self-organizing as much as possible.  If there’s general agreement that short sprints wouldn’t work for some reason, then imposing them probably won’t be successful.  That doesn’t mean you can’t keep evangelizing the idea and hope to eventually change people’s minds, though.

If your organization is still feeling shaky about Scrum in general and people are still climbing the learning curve, then you should probably just stick with whatever sprint system you’re using at the moment unless it’s obviously broken.  People can only absorb a certain amount of change at once.  It might be wise to let your recent changes soak in a bit before you muck around with the system.

Anything else I should know about short sprints?

The one thing that short sprints really do demand is the ability to write very good user stories – small, well-defined, but still vertical slices that deliver real business value.  That’s not an easy skill to develop, but I think it’s well worth it because it pushes you to really understand the features at the top of your product backlog.  Big, vaguely-defined user stories are hard to get done in two weeks, so they make you feel like maybe you need three- or four-week sprints, but I think the right answer is to not work with big, vaguely-defined user stories.  There’s almost always a sensible way to break them up if you take the time to thoroughly understand them, and only good things can come of thoroughly understanding your top-ranked stories.

Ok, there’s probably one other thing that short sprints demand.  They demand that your entire team be familiar with, comfortable with, and committed to agile principles.  If your team really wants to work in a high-planning, waterfall system and they’re just doing this Scrum thing because someone higher up told them they had to, then long sprints at least give them some of the long-term planning that they desire.  Short sprints will just make them even more uncomfortable than they already are.  That says nothing about the viability of short sprints – it’s about people’s comfort zones.

To sum up, the whole premise of Scrum is that you make crisp firm plans based on facts that you definitely know, and you avoid making firm plans based on anything you don’t know or only pretend to know.  Planning beyond the knowledge horizon doesn’t really change what you know, it just tricks you into thinking you know more than you do.  The key is to execute hard on the facts in front of you and stay loose on everything else.  And remember, people are the ultimate limiting factor so don’t drive them faster than they’re willing to go.

Everything I’ve said about two-week sprints applies to one-week sprints, only more so.  The ultimate conclusion to this line of thinking is Lean/Kanban development where you don’t have time-boxed sprints at all; you just have single-piece workflow and a pull model.  I haven’t really gone there yet because I’m still consolidating my grasp of Scrum principles but a lot of the industry thought-leaders are already there.


What goal does your culture value?

(I originally posted this on my MSDN blog.)

There have been several blog posts written recently on the topic of TDD and whether it ultimately makes you more productive or just slows you down.  I don’t have much to add to that discussion, but I found a comment left by Ben Rady on one of Bob Martin’s posts and thought that it was excellent (the comment, not the post, though the post was good too):

TDD slows you down if your goal is to be “done coding”. If your definition of done amounts to “It compiles, and nobody can prove that it doesn’t work” then writing a bunch of tests to prove whether or not it works makes your job harder, not easier. Sadly, in many organizations this is what “done” means, and so good developers wind up fighting against their environment, while the bad ones are encouraged to do more of the same.

If, on the other hand, your goal is to deliver a working system, then TDD makes you go faster. This is the only useful definition of “done” anyway, because that’s where you start making money.
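The “working system” definition of done is exactly what the test-first rhythm encodes: the test is written before the code and becomes the definition of done for that piece.  The function below is a made-up example of that rhythm, not something from either post:

```python
# A tiny TDD-style example: the assertions below were (notionally) written
# first and define what "done" means; the implementation exists only to
# satisfy them.  The function and its behavior are invented for illustration.

def parse_version(s):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = s.split(".")
    if len(parts) != 3:
        raise ValueError(f"expected 'major.minor.patch', got {s!r}")
    return tuple(int(p) for p in parts)

# The tests *are* the definition of done for this function.
assert parse_version("1.2.3") == (1, 2, 3)
assert parse_version("10.0.7") == (10, 0, 7)
try:
    parse_version("1.2")
except ValueError:
    pass
else:
    raise AssertionError("malformed input should raise ValueError")
```

Under the “done coding” definition, the bottom half of that file is extra work; under the “working system” definition, it’s the part that tells you when you’re actually finished.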

This was particularly interesting because it reminded me of something I heard someone point out here at work last week – that Microsoft has a far higher tester-to-developer ratio than a lot of other leading software companies.  Those other companies have a quality standard comparable to Microsoft’s, but somehow they achieve it with far fewer testers.  Why is that?

I’ve spent most of my career working as a developer of test tools in test organizations at Microsoft, so I have a huge amount of respect for the great job that Microsoft testers do.  But, having worked here for fifteen years, I believe that a large part of the work our test teams do is avoidable; it’s the unfortunate result of our traditionally developer-centric culture, which has a lengthy history of focusing on the “done coding” goal rather than the “working system” goal.  We need so many testers because they have to spend a large part of their time wrangling the devs into moving in the right direction.

I’m not sure if it’s cause or effect, but there’s definitely a strong correlation between our “done coding” culture and the strong wall we have between the development and testing disciplines at Microsoft.  Developers write product code and testers write test code and never the twain shall meet.  Developers are often completely ignorant of the tools and automated test suites that the testers use to test the builds.  If a test tool gets broken by a product change, it’s pretty rare that a developer would either know or care.  I’m pretty sure there’s a better way to do it.

To be fair, there’s nothing particularly unusual about Microsoft’s historical culture; that’s the way virtually the entire industry operated fifteen years ago.  But in the past several years the industry (or a significant part of it, anyway) has made large strides forward and Microsoft is still playing catch-up.  Again, to be fair, Microsoft is an enormous company with many different micro-cultures; there are plenty of teams at Microsoft who are very high-functioning, where developers take complete responsibility for delivering working systems, and where testers have the time to do deep and creative exploration of edge cases because the features just work.  But from where I sit that doesn’t appear to be part of our broad corporate culture yet.

A lot of people are working hard to change that, and it is changing.  As frustrating as it can be to deal with our historical cultural baggage, it’s also fascinating to watch a culture change happen in real time.  I’m glad to be here and to be a small part of it.

Edit:  I’m proud to say that Microsoft does value quality software quite a lot.  It’s just that we take the long way around to achieving that quality; we’re apt to try to “test it in” after it’s written rather than focusing on reducing the need for testing in the first place.  That’s the problem I’m talking about here.