MSpec is Fantastic!

(I originally posted this on my MSDN blog.)

I’ve previously written about the Behavior-Driven Development style of TDD and about how to use such a style in MSTest.  I’ve done this on several personal and business projects and it’s worked pretty well, but I recently got around to trying Aaron Jensen’s MSpec and I have to say there’s no going back.  Conceptually my tests are the same but mechanically MSpec just flows so much better.

Here’s a test context from a personal game project of mine, written in my old MSTest BDD framework:

[TestClass]
public class when_the_planet_is_initialized : PlanetContext
{
    protected override void Context()
    {
        base.Context();
    }

    protected override void BecauseOf()
    {
        _planet.Initialize(_location, _radius);
    }

    [TestMethod]
    public void should_initialize_the_renderer()
    {
        _planetRenderer.AssertWasCalled(x => x.Initialize(_radius));
    }

    [TestMethod]
    public void should_create_terrain()
    {
        _terrainFactory.AssertWasCalled(x => x.Create(_radius));
    }
}

It works, but it’s noisy.

  • Each spec requires a minimum of four text lines.
  • There’s a lot of type and access control gook that obscures the intent.
  • Even though there’s no test-specific context to set up, I still have a Context method that simply calls the base class. Strictly speaking this isn’t required, but I’ve found that if I leave it out and then decide to add it later, I often forget to call base.Context, so I always leave this boilerplate code here as a reminder.  Call it a defensive habit.  (The base classes behind this pattern are sketched just after this list.)
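
For reference, here’s roughly what the base classes behind this pattern look like.  This is a reconstruction with hypothetical details (the real PlanetContext isn’t shown here), and it assumes Rhino Mocks, since AssertWasCalled comes from that library’s arrange-act-assert syntax:

public abstract class SpecificationContext
{
    [TestInitialize]
    public void Setup()
    {
        Context();
        BecauseOf();
    }

    protected virtual void Context() { }
    protected virtual void BecauseOf() { }
}

public class PlanetContext : SpecificationContext
{
    protected IPlanetRenderer _planetRenderer;
    protected ITerrainFactory _terrainFactory;
    protected Planet _planet;
    protected Vector3 _location;
    protected float _radius;

    protected override void Context()
    {
        // Hypothetical setup: mock the collaborators and build the subject.
        _planetRenderer = MockRepository.GenerateMock<IPlanetRenderer>();
        _terrainFactory = MockRepository.GenerateMock<ITerrainFactory>();
        _planet = new Planet(_planetRenderer, _terrainFactory);
        _location = Vector3.Zero;
        _radius = 6000f;
    }
}

Every new test context pays the override-and-base-call tax you see above, which is where most of the noise comes from.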

Now here’s the same context re-written in MSpec:

[Subject(typeof(Planet))]
public class when_the_planet_is_initialized : PlanetContext
{
    Because of = () =>
        _planet.Initialize(_location, _radius);

    It should_initialize_the_renderer = () =>
        _planetRenderer.AssertWasCalled(x => x.Initialize(_radius));

    It should_create_terrain = () =>
        _terrainFactory.AssertWasCalled(x => x.Create(_radius));
}

The MSTest version requires 25 text lines, the MSpec version 12 text lines.  This context doesn’t require an “Establish context” delegate, but if I wanted to add one later the form is the same as the other clauses and I don’t have to remember to call a base class version.  There’s far less compiler gook getting in the way and the code reads very easily.  It just oozes elegance.  Ok, I suppose Rubyists or Pythonistas might still call this noisy but for C# it’s amazing.  It can be daunting at first when your brain exclaims, “That can’t possibly be legal C# code!” but once you figure out that it’s just lambda expressions assigned to delegate fields it all makes perfect sense.
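
For example, if I did want test-specific setup later, it’s just one more clause of the same shape (a hypothetical addition, not part of the actual context above):

    // MSpec runs Establish clauses from base contexts automatically,
    // so there's no base call to forget.
    Establish context = () =>
        _radius = 6000f;  // hypothetical test-specific setup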

MSpec also includes BDD specification extension methods very similar to the ones I wrote about so you don’t have to worry about writing and maintaining that code, either.
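
For example, ShouldEqual and ShouldBeTrue ship with MSpec, so specs can read like this (the Planet properties here are hypothetical):

    It should_store_the_radius = () =>
        _planet.Radius.ShouldEqual(_radius);

    It should_be_marked_as_initialized = () =>
        _planet.IsInitialized.ShouldBeTrue();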

If you’re at all interested in BDD, you owe it to yourself to try MSpec!

What’s the Business Impact of Adopting Unit Testing?

(I originally posted this on my MSDN blog.)

Someone asked a question on an internal distribution list to the effect of:

“My team is thinking about making unit testing a standard practice.  The technical benefits are obvious, but management is asking questions about the impact to the business.  How badly will productivity suffer in the short term?  What’s the ROI of unit testing?  Are there metrics or published studies I can use?”

My answer:

You may be able to find some hard statistics in various reports, but the practical problem is that the variance from team to team can be huge, so there’s no way to take the results from some other team and use them to predict with any accuracy what will happen to your team.  There are a couple of things that can be said with certainty.

First, introducing unit testing will decrease your team’s productivity in the short term, no question.  By how much?  That depends on the team and how much they resist learning new ways of doing things, and it also depends on management and how much they really mean it when they say they want developers to write unit tests and they’re willing to pay the upfront cost.  The transition could go relatively smoothly or it could be really ugly depending on how cooperative people are feeling.

It also depends heavily on the existing architecture of your code.  Effective unit testing requires your code to be organized according to basic principles of good architecture; i.e. high cohesion, loose coupling, and everything that goes along with that.  There’s nothing like unit testing to point out that your supposedly good architecture is actually terrible.  Introducing unit testing may ultimately mean a lot of refactoring of your existing code.  It doesn’t have to happen all at once but it may have to happen at some point.

Second, the payoff won’t be felt in the short term.  The payoff is long-term after you get your architecture straightened out, after you have a good base of unit tests, and after everyone gets fully on board with the new way of doing things.  Then you’ll see fewer bugs being written in new code, fewer regression bugs in old code, and less incremental cost for adding or changing features.  Your code base won’t “rot” as fast.

Unfortunately the value here is difficult to quantify in hard numbers.  The amount of value you earn back depends largely on whether the team is honestly committed to unit testing or if they’re just paying lip service to it, and it depends on whether you’re doing unit testing correctly or not.  It’s certainly possible to write unit tests badly just like it’s possible to write code badly.  If at all possible you should invest in bringing in an experienced unit testing expert who can show you how to do it correctly and what pitfalls to avoid.


Push vs. Pull in Scrum

(I originally posted this on my MSDN blog.)

Push

Traditionally, software development has been built around a push model of workflow.  That is, as soon as any kind of task is identified, like “add this feature” or “fix that bug”, it’s assigned to an individual.  Each individual has their own personal queue of tasks and they focus only on items in their personal queue.  Some attempt is made to level the queues across the team but as time progresses the queues need to be re-leveled and if they’re not re-leveled frequently enough then some people can become starved for work while others are overloaded.  Over time it becomes very difficult to predict when any given task might be completed because it depends on both whose queue it’s in and what’s ahead of it in that queue.  During the work period (sprint, milestone, etc.), the question of which stories are done and which are still in progress is quite non-deterministic.  The only thing that can be said with certainty is that when everyone’s queues are empty, all the stories are done.

Pull

Scrum is fundamentally a pull model.  That is, it’s designed to hold all work in a single common queue and as people become available they pull tasks off the front of the common queue (or as close to the front as possible while allowing for individual expertise, etc.).  In this way there’s no concept of re-leveling and individuals are never overloaded or idle, and tasks get completed as closely to their natural priority order as possible.  It reduces local bottlenecks that might result from individuals getting distracted or sick and makes the overall flow of work more predictable and orderly.  It encourages a style of working where you always have a fairly clear line between “done” stories and “not done” stories so you can easily adjust your plans by simply adding or removing stories from the end of the common queue.

Favor the Pull model

There are advantages and disadvantages to both models, of course, but in general the single common queue strategy tends to be best aligned with the kind of factors we want to optimize for on small teams.  This can be shown by queuing theory and explains why, for example, a single line at a bank or store checkout counter is preferable to multiple lines, one for each clerk.
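
To make that concrete, here’s a quick back-of-the-envelope simulation (my own sketch with made-up numbers, not rigorous queuing theory).  Four workers burn down a backlog of 40 randomly-sized tasks, either by pulling the next task as they free up or by working through piles dealt out round-robin at the start.  On average the pull model finishes sooner, because nobody gets stuck behind an unluckily large pile:

using System;

class QueueSimulation
{
    static void Main()
    {
        var rng = new Random(42);
        const int workers = 4, tasks = 40, trials = 10000;
        double pullTotal = 0, pushTotal = 0;

        for (int t = 0; t < trials; t++)
        {
            // Random task durations (exponential, mean of 1 "day").
            var d = new double[tasks];
            for (int i = 0; i < tasks; i++)
                d[i] = -Math.Log(1.0 - rng.NextDouble());

            // Pull model: whoever frees up first takes the next task.
            var free = new double[workers];
            foreach (var dur in d)
            {
                int w = 0;
                for (int i = 1; i < workers; i++)
                    if (free[i] < free[w]) w = i;
                free[w] += dur;
            }
            pullTotal += Max(free);

            // Push model: tasks dealt out evenly up front, ten per person.
            var pile = new double[workers];
            for (int i = 0; i < tasks; i++)
                pile[i % workers] += d[i];
            pushTotal += Max(pile);
        }

        Console.WriteLine($"Pull model, average finish: {pullTotal / trials:F2} days");
        Console.WriteLine($"Push model, average finish: {pushTotal / trials:F2} days");
    }

    static double Max(double[] xs)
    {
        double m = xs[0];
        foreach (var x in xs)
            if (x > m) m = x;
        return m;
    }
}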

Agile teams need to choose how they handle tasks in their sprint backlog.  Do they leave them all unassigned and have people claim them one at a time, choosing the highest priority item at the moment that they’re qualified to handle, or do they divide up all the tasks and assign them to individuals during the planning meeting and then everyone works through their individual pile of work during the sprint?

The first option is far less risky, far more adaptable, and far more likely to deliver a cohesive feature set at the end, even if we have to cut certain features.  If everyone is working through a single list from highest priority to lowest, then the most important features come together soonest and there is a clear cut line whenever we need it.

The second option sends people off on their own individual tracks and nobody knows until the end whether everything will come together to give us a working set of features.  We know in which order each individual will complete tasks but we have no idea in which order features will be completed, and that’s the only thing that really matters.  What if someone gets sick?  What happens to their pile of work?  Does it just sit there unattended until they get back?  What if the thing they were working on was on the critical path?  Would anyone else realize it?

Emphasize the team, focus on the product

I’ve heard people object to the pull model saying, “the push approach defines the targets in the beginning.  You know what you need to do.”  That’s true, but it defines individual targets, not product targets.  We effectively tell people, “Ok, here are your ten tasks and if you get these finished, you’re successful.”  The result is that everyone focuses on their ten tasks but no one focuses on how the product is shaping up.  That’s not part of the target.

The idea of agile development is that rather than focusing on individual goals, everyone should be focused on the team goal which is to deliver working software that provides value.  We want all decisions, every day, to be made with that target clearly in mind.  It doesn’t matter what we thought was going to be best two or three weeks ago.  What matters is what’s best today.  We want to tell people, “Here is the feature list and if the team delivers these features, we’re all successful.  Find the one thing that will move us furthest in that direction and do that.”

What about accountability?

I’ve also heard people object to the pull model saying, “If I don’t assign buckets of work to individuals, how will I enforce accountability?”  When you emphasize individual-based metrics, it actively discourages people from collaborating on tasks even when that would be the most efficient strategy from a product-development standpoint.  If I have tasks assigned to me at the beginning of the sprint and I know that I’m going to be evaluated on how many of my tasks get completed, then I’m going to be heads-down on my tasks and I won’t be very inclined to help anyone else in any significant way because it’s a zero-sum game; any time spent helping others directly hurts my own performance.  Yeah, good people will sometimes rise above that and be altruistic anyway, but they’re fighting the system to do so.

Accountability is an important concept, certainly, but it should start at a team level and focus on the product.  You get what you measure.  If you measure working software, you get working software.  If you measure completion of individual tasks, then you get a pile of completed tasks.  Completed tasks don’t deliver value.

If there’s a certain member of the team who’s not pulling his or her weight, don’t worry – everyone will know it pretty quickly.  It’ll be obvious.  There’s nowhere to hide on a small team.  Individual performance issues are best handled out-of-band, not as part of the core development process.

The concept of team ownership over individual ownership and the “just in time” assignment of tasks is a pretty important concept and it’s one of the core principles of Scrum.  It’s pretty much the point of the daily stand-up meeting.  Without it, I’m not sure you can even call what you’re doing “Scrum”.

The ScrumMaster Role

(I originally posted this on my MSDN blog.)

I recently took on the role of ScrumMaster for a newly-formed team that’s new to agile project management.  We’ve had several interesting discussions about various topics that commonly come up when a team is adopting Scrum so I thought I’d share them here.

Now, I know, Scrum is so 2007, right?  But I’ve been writing about it recently and have several more posts in mind for the near future because I firmly believe in the principle of “write what you know”.  This is what I’m doing right now so you get to hear about it.  And anyway, although the alpha geeks may roll their eyes at such an outdated topic, there are still large swathes of the software development industry where people are just now looking around and asking, “So, what’s up with this Scrum whatchamacallit thingy?”  So hopefully this will still be of use to some of you.

So what’s this ScrumMaster thing all about?  Sounds important.  I mean, you can get certified for it and everything.  The ScrumMaster must be a bigshot project manager, right?  Well, no.  The ScrumMaster role is important but it’s a very different kind of important than most people assume when they first hear the word.

Here’s what I wrote to my team on the subject:

The term “ScrumMaster” is a bit of a misnomer because there’s really nothing “master” about it.  The role is mostly about being the team’s secretary and conscience in regard to anything having to do with the Scrum process.  It’s not a management role; it’s a facilitator role.  In fact it’s best if the role is not filled by an actual manager.

As the ScrumMaster you can expect me to:

  1. Clarify the intent and purpose of the Scrum process when anyone has questions.
  2. Facilitate the collaborative team decision-making process as we figure out what process rules we want to have.
  3. Loudly remind people to follow the process rules once we’ve agreed on them.
  4. Identify roadblocks that are hampering team productivity.
  5. Enlist the aid of people who can remove those roadblocks.

As the ScrumMaster I will not:

  1. Be responsible for budgeting, resource allocation, or project planning.
  2. Unilaterally make process rules.
  3. Be a substitute for Dev leads, PMs, and PM leads.

The ideal Scrum team is supposed to be self-organizing, self-directed, and self-motivated.  The ScrumMaster role exists to help out when the team loses its way, but an experienced and healthy Scrum team doesn’t have much need for a ScrumMaster because everyone knows how things should work and the team is able to function on its own without help.  The ultimate goal of the ScrumMaster is to work himself or herself out of a job.

To this I would add that it’s important that a ScrumMaster has a good understanding of the Scrum practices, but it’s even more important that the ScrumMaster deeply understands the principles and philosophies behind the practices and why certain things are the way they are.  When the team has a difficult decision to make, the ScrumMaster’s job is not to make the decision but rather to lay out the options and articulate the principles that are at stake and how each option will hinder or aid those principles.  Then let the team decide.  They may choose something that the ScrumMaster doesn’t particularly agree with.  That’s ok; let the team run with it for a while and see if it works.  If it works, then great!  And if not, then it’s a growth opportunity for all concerned and all the team members have a personal ownership stake in finding a better solution.

Guest Post: Blink and Subconscious Messaging

(I originally posted this on my MSDN blog.)

My colleague Robert Hanz wrote an excellent piece for an internal email list and I liked it so much I asked him if I could post a copy of it.  Apparently Herb Sutter liked it too!  Thanks, Robert!

I was reading Blink last night, and one of the things it mentioned is how subconscious signals can significantly impact conscious activity.  For instance, one experiment took jumbled, semi-random words and had the subjects arrange them into sentences.  After they finished these, they would need to go down the hall and speak to a person at an office to sign out of the test.  But the real test wasn’t the sentences – mixed in with the words to form sentences were one of two sets of words.  One set had a lot of words regarding patience and cooperation, while the other had words regarding belligerence and impatience.  The test was to see how the subject behaved when the person they needed to talk to was engaged in a conversation and was unable to help them.

The group that had the belligerent words waited, on average, about 5 minutes before interrupting the conversation.  The group with the patient wordset waited indefinitely.  The test was designed to end after I believe 10 minutes of waiting – not a single “patient” subject ever interrupted the conversation.

Reading this, one of the things that came to mind was some of the different messaging used by waterfall projects vs. agile projects, and how often we hear them.

Waterfall:

  • “Bug” (used for just about anything that needs to be done by a developer)
  • “Priority one”
  • “Due by”
  • “Stabilization”

(this in addition to the frequently long list of “bugs” that confronts developers every day)

Agile:

  • “User story”
  • “Customer value”
  • “Velocity”

(in addition, the list of work to be done is usually scoped to be possible within a single week)

When I thought about the differences in words, I was really struck by how different the messaging was.  In the waterfall case, the message was overwhelmingly negative – it focused on failures, urgency, and almost a sense of distrust.  The language seemed to be geared around the ways that individuals messed up, and how everything is an emergency that must be dealt with immediately.  And, if you think about it, in a waterfall culture there is typically no frequent reward or messaging of success – the best you can typically hope for (for much of the cycle) is to avoid failure.  And the idea that you’re actually producing value is very much removed from the language.

On the other hand, the agile language itself focuses on results, not failures.  Stories are typically “done” or “not done,” and while bugs are certainly tracked, at a high level that’s usually just a statement that the story is “not done.”  Combine this with sprint reviews (where the team shows what they’ve accomplished), and the overall message becomes very much focused on the successes of the team, rather than the failures.  Progress is measured by value added, not by failures avoided.  Even something as low-level as TDD consistently gives its practitioners a frequent message of success with the “green bar.”

While I certainly believe that agile development has many advantages in terms of reducing iteration time and tightening feedback loops, among other things, is it possible that something as simple as the shift in language is also a significant part of the effectiveness?  That by priming individuals with messages of success and value, rather than messages of failure, morale and productivity can be boosted?

I strongly agree with Robert that the agile language tends to lead directly to morale and productivity boosts.  You are what you eat, or in this case, you are what you talk about.  Agile practices encourage you to talk about the things that matter.


Scrum Is Not About Tools

(I originally posted this on my MSDN blog.)

A frequent question on the internal email groups around here is, “My team is going to start using Scrum and we need to find a good Scrum tool to use for tracking everything.  What tool should we use?”  Usually these questions come from teams who have a tradition of metrics-heavy, documentation-heavy process with lots of artifacts.  The assumption behind the question is that Scrum is also a metrics-heavy, documentation-heavy process, just with different artifacts.

My answer is that Scrum is a philosophical and cultural shift.  Scrum is highly ceremonial in some respects, but the ceremony is all oriented around face-to-face, real-time communication, not artifacts.  If your team is geographically distributed then yes, you’ll probably need some artifacts to compensate for your lack of face-to-face communication, but remember that Scrum is not about artifacts.  Unfortunately, most software-based Scrum tools are about artifacts to a greater or lesser extent, which makes them less useful than you might think at first.

Low-tech Scrum tools like sticky notes and whiteboards are not glamorous and of course it seems somewhat heretical that software developers might eschew software tools in favor of old-fashioned paper, but there are lots of advantages to be had.  You can make pieces of paper behave precisely the way you want them to with very little effort, and physical objects offer visual and tactile feedback that’s hard to beat.

I usually track my product backlog electronically in a spreadsheet because it can get large and I want to do a lot of sorting and filtering on it, but I prefer to track my sprint backlog on a whiteboard because its size is fairly bounded and I don’t do a lot of arbitrary manipulation on it, and half the value of the sprint backlog is in seeing physical objects up on a wall and moving them from place to place.  It gives you a sense of flow and progress like nothing else can.

However, if your team isn’t ready to make the cultural shift to low-tech Scrum all at once (it takes time!), or if you have a distributed team, then yeah, there are some tools out there that can work decently well.  But beware: any tool you adopt will come with some built-in assumptions and ways of doing things.  When you adopt the tool, you adopt the entire philosophical mindset behind the tool (or you spend a lot of time wrestling with the tool for control of your philosophy).  If your needs and views are compatible with theirs then all’s well, but if you differ then it’s going to be painful.  There are several nice things about sticky notes and whiteboards but one of the biggest advantages is that you can customize your process to your precise needs.  Agile processes should exist to serve the team, not the other way around.

One excellent description of a low-tech approach to Scrum can be found in a free eBook titled Scrum And XP From The Trenches.  It’s a couple of years old now (which at the rate agile thinking evolves these days is positively ancient) but it’s still an excellent guide and source of ideas.  And because it’s low-tech it’s easy to tune it to your needs.

Misadventures in Legacy Code

(I originally posted this on my MSDN blog.)

Last week I gave a talk titled “Misadventures in Legacy Code” to the South Sound .Net User’s Group, which is the oldest .Net user’s group in the world, having been started during the days of the 1.0 beta.  I’ve given presentations to groups at work before but this was my first experience with just walking in cold and speaking to a group of strangers.  I had some concerns about how it might turn out but the SSDotNet group was a great audience and the feedback I got was very positive.

Some folks asked for a copy of the slide deck so I’m posting it here as an attachment.  I’ve edited it a bit to remove the code examples from some of my projects since they wouldn’t make any sense without a whole lot of context.  I talked the group through it at the presentation but anyone reading the deck doesn’t have that luxury.

It was a very enjoyable experience and I’m interested to do more of it in the future.

Misadventures In Legacy Code

The Number of Classes Is Not A Good Measure Of Complexity

(I originally posted this on my MSDN blog.)

Must . . . Resist . . . New Class . . .

For some reason, most developers (me included) have this idea that the number of classes in your code base strongly indicates the complexity of your design.  Or to say it another way, we tend to freak out when we see the number of classes in a project climb upwards and we tend to intuitively prefer software designs that minimize the number of classes.

I often find myself ignoring clear evidence that I need to break out some responsibility into its own class because I have a fear that clicking on Add | New Item in Visual Studio is going to make my code much more complicated, so I search for ways to jam the responsibility into some existing class instead.

This impulse is usually dead wrong.

I had an experience a few weeks back where my unit tests were getting to be pretty messy and painful to deal with.  I knew this was probably evidence that my design needed some serious improvement but I ignored it.  It got worse.  Finally I said, “Ok, fine, I’m going to refactor the code until my unit tests quit sucking.”  I started breaking up classes that had too many responsibilities, creating additional interfaces and concrete classes, and generally created an explosion of new files.  It was terrifying.  It felt like I was turning the code into a nightmare of complexity.

The funny thing was, though, when I was all done, everything just fell into place and all of a sudden I had an elegant, easy-to-understand, maintainable design sitting there.  Sure there were a lot more code files than before, but each class did one small thing and did it well.  It was remarkable how much easier it was to reason about the new design than the old.

Mental Juggling

Michael Hill wrote an excellent blog post on this a while back and he points out that in addition to the usual low-coupling, high-cohesiveness arguments in favor of small single-responsibility classes, there’s also an argument from cognitive science.  We can only hold a few “chunks” of information in our short-term memory at once.  But we can get around this limitation by grouping closely related chunks together and giving the group a name; the group then becomes just one chunk to store and track.

When we have a class that contains 500 lines of source code and does five different things, we have to think about all of that code more or less simultaneously.  It’s really difficult to handle all of that detail at once.  If we break up that class into five classes that each do one thing, we only have to track five class names in order to reason about the system.  Much easier.
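
As a sketch, with hypothetical names, the difference looks like this:

// Before: one class, five responsibilities, roughly 500 lines that all
// have to be held in your head at once.
public class PlanetManager
{
    // rendering + terrain generation + orbit math + atmosphere + save/load,
    // all tangled together in one file
}

// After: five names, each one a single "chunk".  To reason about one,
// you can ignore the insides of the other four.
public interface IPlanetRenderer   { void Render(); }
public interface ITerrainGenerator { void Generate(int seed); }
public interface IOrbitCalculator  { void Step(double elapsedSeconds); }
public interface IAtmosphereModel  { void Update(); }
public interface IPlanetStore      { void Save(); void Load(); }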

Moderation in everything

Of course this can be overdone.  My recent zombie-fighting involved (among other things) chopping out a bunch of pointless classes that were apparently built to satisfy someone’s concept of proper architectural layers but didn’t really handle any responsibilities of their own.  They didn’t create useful names for chunks of code; they were just pointless abstractions.

It’s interesting that two apparently contradictory principles can be true at the same time: on one hand source code is a liability and should be minimized, but on the other hand more, smaller classes are better than fewer, bigger classes, even if that raises the file count and line count.

Legacy applications are like zombies

(I originally posted this on my MSDN blog.)

I should have posted this before Halloween when I was first thinking about it, but hey, better late than never.  Here’s what I wrote on Twitter:

I would add that the area I was working on went from 6,800 executable lines to 3,200 executable lines.  (The executable-line count is much smaller than the text-line count because Visual Studio is really conservative about what it considers an executable line.  Interface definitions don’t count at all, for example.)  The smaller number includes all of the unit tests I added for the code I rewrote, so the amount of code I removed is even larger than it appears.

Believe me, these numbers have nothing to do with my skills as a programmer.  Rather, they reflect my, um, target-rich environment.  When a code base is yanked this way and that over several years without much thought given to the overall cohesive direction of the project, a lot of cruft builds up.  A little time (ok, a lot of time) spent thinking about what the code is actually supposed to accomplish clarifies a lot of things.

Further thoughts on short sprints

(I originally posted this on my MSDN blog.)

I wrote about my preference for short Scrum sprints last month, where “short” means no more than two weeks.  Recently there was another internal email thread on roughly the same subject where some people listed some concerns they had about moving their team to two-week sprints. I’ve combined, paraphrased, and edited some of the questions and included my answers for them below.

The environment isn’t nailed down and requirements are still up in the air.  I need four weeks in order to maintain control.

When you have a lot of uncertainty in your project and your requirements are up in the air, that’s precisely the time when you want to have short sprints so that you have frequent adjustment points.

The whole idea of a sprint is that you build a crisp short-term plan based on facts that you know and can rely on.  If there’s a lot of uncertainty in your environment, which is easier: to build one plan that will still be accurate and useful four weeks later or to build a two-week plan, execute it, and then build another two-week plan using additional facts that you’ve learned during the first sprint?  Long sprints are for stable environments where the need for flexibility and responsiveness is low.  Short sprints are for changing environments where you don’t know a lot right now but in two weeks you’ll know more.

Planning for four-week sprints is painful as it is; I don’t want to do that twice as often!

With shorter sprints, your planning for each sprint should get shorter as well.  In fact, I’ve found that when going from four-week to two-week sprints your planning can be reduced by more than half because you simply don’t need all of the process that you need in a four-week sprint.

For example, in a four-week sprint it’s important to carefully estimate the number of required hours for each task, then track the number of hours you spend on each task, and generate a burn-down chart so that you can tell if you’re tracking to the plan.  Most teams have some point at about halfway through the sprint where they evaluate the burn-down chart and make adjustments to the sprint commitments depending on how they’re doing, because four-week plans rarely survive unchanged.

Well, instead of doing that, how about if you skip the burn-down chart and just build a new plan at the two-week point?  You can save all the effort of detailed tracking and you get a more up-to-date plan as well.  Remember, building a two-week plan isn’t nearly as expensive as building a four-week plan so you’re doing about the same amount of planning (or probably less) overall.

How far off track can you get in two weeks, anyway?  Certainly not as far as in four weeks, so there’s not as much need for oversight.  And if you do start to get wildly off-track, just glancing at a sprint task board or a list of completed stories will probably tell you that you’ve got problems because the total number of items is small enough that you can understand them at a glance and eyeball it pretty reliably.

Meeting proliferation – yuck!

The same goes for meetings.  There may be more meetings but each one is much shorter because there’s less to cover.  If it’s hard to get all stakeholders together for a demo every two weeks, you might schedule a big public demo every four weeks (after two two-week sprints).  Short meetings tend to be more productive because they’re crisp and people don’t get burned out.  Four-hour planning meetings (or 6 hours, or 12!) are way too painful to be productive.

I have multiple teams in different geographic locations that need to stay in sync.  Won’t short sprints hinder that?

Syncing multiple teams ought to be easier with short sprints because no team is ever more than two weeks away from being able to make major adjustments to their plan in response to the needs of another team.  I’m not sure how long sprints would help the syncing issue.  Long sprints give you the illusion that you know exactly where you’re going to be four weeks from now, but you probably don’t.  You can make pretty decent predictions for several weeks or even several months into the future using product burn-down velocity, and that framing reminds everyone of what they really are – predictions.  Not guarantees.  Now, predictions are useful.  I’m not saying that you don’t think about the future at all.  I’m just saying that you need to be realistic about your level of certainty.
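
For example, a rough velocity-based projection might look like this (my own sketch with made-up numbers, not from the original discussion):

using System;

class VelocityProjection
{
    static void Main()
    {
        // Hypothetical inputs: points left in the product backlog and
        // observed velocity over the last few two-week sprints.
        double remainingPoints = 120;
        double[] recentVelocities = { 18, 22, 15, 20 };

        double averageVelocity = 0;
        foreach (var v in recentVelocities)
            averageVelocity += v;
        averageVelocity /= recentVelocities.Length;

        double sprintsLeft = Math.Ceiling(remainingPoints / averageVelocity);
        Console.WriteLine($"Average velocity: {averageVelocity:F1} points per sprint");
        Console.WriteLine($"Roughly {sprintsLeft} sprints (~{sprintsLeft * 2} weeks) to go.");
        Console.WriteLine("That's a prediction, not a guarantee.");
    }
}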

I’m not sure my team is ready to transition to two-week sprints.  As ScrumMaster, should I just mandate it?

I would not try to impose two-week sprints over the objections of the team.  One of the fundamental principles of Scrum is that the team should be self-organizing as much as possible.  If there’s general agreement that short sprints wouldn’t work for some reason, then imposing them probably won’t be successful.  That doesn’t mean you can’t keep evangelizing the idea and hope to eventually change people’s minds, though.

If your organization is still feeling shaky about Scrum in general and people are still climbing the learning curve, then you should probably just stick with whatever sprint system you’re using at the moment unless it’s obviously broken.  People can only absorb a certain amount of change at once.  It might be wise to let your recent changes soak in a bit before you muck around with the system.

Anything else I should know about short sprints?

The one thing that short sprints really do demand is that you have to be able to write very good user stories that are small, well-defined, but still vertical slices that deliver real business value.  That’s not an easy skill to develop, but I think it’s totally worthwhile because it pushes you to really understand the features at the top of your product backlog.  Big, vaguely-defined user stories are hard to get done in two weeks, so they make you feel like maybe you need three or four week sprints, but I think the right answer is to not work with big, vaguely-defined user stories.  There’s almost always a way to break them up in a way that makes sense if you take the time to thoroughly understand them, and only good things can come of thoroughly understanding your top-ranked stories.

Ok, there’s probably one other thing that short sprints demand.  They demand that your entire team be familiar with, comfortable with, and committed to agile principles.  If your team really wants to work in a high-planning, waterfall system and they’re just doing this Scrum thing because someone higher up told them they had to, then long sprints at least give them some of the long-term planning that they desire.  Short sprints will just make them even more uncomfortable than they already are.  That says nothing about the viability of short sprints – it’s about people’s comfort zones.

To sum up, the whole premise of Scrum is that you make crisp firm plans based on facts that you definitely know, and you avoid making firm plans based on anything you don’t know or only pretend to know.  Planning beyond the knowledge horizon doesn’t really change what you know, it just tricks you into thinking you know more than you do.  The key is to execute hard on the facts in front of you and stay loose on everything else.  And remember, people are the ultimate limiting factor so don’t drive them faster than they’re willing to go.

Everything I’ve said about two-week sprints applies to one-week sprints, only more so.  The ultimate conclusion to this line of thinking is Lean/Kanban development where you don’t have time-boxed sprints at all; you just have single-piece workflow and a pull model.  I haven’t really gone there yet because I’m still consolidating my grasp of Scrum principles but a lot of the industry thought-leaders are already there.