Do Not Exceed The Maximum Safe Rate Of Change

(I originally posted this on my MSDN blog.)

About a year and a half ago, shortly after I discovered Scrum, TDD, dependency injection, mocking frameworks,  design patterns, and many other shiny new toys, my team started a new project.  Being the bright-eyed novice that I was, I said, “Hey, this would be a great opportunity to try out all these neat ideas I’ve been reading about!  In fact, we should try them all at once!”  And since I was the team lead, everyone else just kind of nodded and said, “Um, yeah, sure, whatever.”  (That should have been my first clue.)

I’ll spare you the gory details, but suffice it to say that we were playing with stuff we didn’t understand.  I knew just enough to get myself into trouble, and the rest of my team had difficulty understanding what I was babbling on about, so there was a whole lot of cargo cult engineering going on.  Looking back on it now, I think we probably got it about 75% right, but the other 25% made the whole experience very painful for everyone involved.  We didn’t go quite to the extreme of building factory factories (I think . . . ), but there were many things in the code base that were over-engineered, either because they were needlessly complex or because no one knew how to take advantage of the complexity in order to deliver value.

Fortunately, the project was mothballed as a result of a reorg, so we didn’t have to push it to its conclusion.  I think it would ultimately have been successful; I could see a lot of promise in the techniques we were using.  But there were so many new ideas jumbled together, and not implemented with a deep understanding of why they were good ideas, that we weren’t seeing the promised reduction in pain and increase in productivity.  Quite the opposite, actually.  To this day people still roll their eyes and chuckle whenever that project is mentioned.

I drew several lessons from that experience, but the main one is this: you don’t have to adopt everything all at once.  There is a certain intrinsic rate at which a team can absorb new ideas while still turning out productive work.  New ideas and new tools are usually disruptive.  I don’t care how good the idea or tool is, there’s an inherent cost in taking it on board.  The cost of one or two at a time is manageable.  The cost of several new things all at once can be ruinous.  There’s an invisible line called “the maximum safe rate of change”, and you don’t want to cross it.

We backed off a bit on Agile techniques after that.  We kept reading, learning, and experimenting, trying to build a deep understanding of the principles, not just the ceremonies, of Agile development.  Turns out that real learning takes time and practice.  Huh, who knew!  It took about a year before I was confident enough to take another shot at running a seriously Agile-oriented project with more than two people.  It was going very smoothly, I think, until the layoffs happened, which just goes to show, I guess, that engineering isn’t everything.

So anyway, that explains my previous posts on how to use context/specification in the MSTest environment.  Why not just use xUnit.net and SpecUnit.net, or MSpec?  Well, yeah, in an ideal world I would.  I plan to, someday.  But in the context of our latest project, we were wrapping our heads around the context/specification format and using an IoC container for the first time.  Oh, and doing an in-place refactor/rewrite of a legacy code base.  Switching test frameworks and test runners on top of that may have pushed us past our maximum safe rate of change.

There’s always tomorrow.  Tomorrow may end up looking very different than expected, but learning never stops.  Give new stuff time to sink in, to turn from pantomime to purposeful action, so it can be used to generate real value.


Handling Exceptions in BDD-style Tests

(I originally posted this on my MSDN blog.)

Exceptions cause problems with BDD-style tests in the MSTest environment.  The MSTest [ExpectedException] attribute doesn’t work well here because the exception is thrown in the BecauseOf() method, which MSTest considers part of test initialization, so every test in the context fails.  The best way to deal with expected exceptions in BDD-style tests is to catch and save the exception when you perform the action in BecauseOf(), then check the properties of the saved exception in your tests.

Here’s an example of how we observe exceptions.  It’s not particularly important to understand what the class under test does, but briefly, we have an IWorkItem interface that’s implemented by several concrete classes.  We also have a decorator called RetryableWorkItem that wraps any IWorkItem instance and watches for it to throw exceptions when it’s executed.  If it does so, then RetryableWorkItem will inspect the thrown exception to see if it’s worth retrying, and if so, it will throw a RetryableException that tells our scheduler that the work item blew up and needs to be executed again sometime in the future.

[TestClass]
public class when_a_retryable_error_occurs : RetryableWorkItemContext
{
    private Exception exceptionToThrow;
    private Exception thrownException;

    protected override void Context()
    {
        base.Context();
        this.exceptionToThrow = new IOException();
        this.actualWorkItem.Stub(x => x.Execute()).Throw(this.exceptionToThrow);
    }

    protected override void BecauseOf()
    {
        this.thrownException = ((MethodThatThrows)delegate
        {
            this.retryableWorkItem.Execute();
        }).GetException();
    }

    [TestMethod]
    public void should_tell_the_scheduler_to_retry_the_work_item()
    {
        this.thrownException.Is(typeof(RetryableException));
    }

    [TestMethod]
    public void should_preserve_the_original_error()
    {
        this.thrownException.InnerException.ShouldBeTheSameAs(this.exceptionToThrow);
    }

    [TestMethod]
    public void should_provide_a_retry_delay()
    {
        RetryableException retryEx = this.thrownException as RetryableException;
        retryEx.TimeToWaitBeforeRetry.ShouldBeGreaterThan(TimeSpan.Zero);
    }
}

In the BecauseOf() method, we use a spec extension on the delegate type that will execute the delegate, catch any exception that is thrown, and return the exception.  We just save it in a class field and then inspect it in the tests.

Here’s the spec extension that captures the exception for us:

/// <summary>
/// Returns an exception thrown by a delegate, or asserts if no exception was thrown.
/// </summary>
/// <param name="testCode">The delegate to be executed.</param>
/// <returns>The exception that was thrown.</returns>
public static Exception GetException(this MethodThatThrows testCode)
{
    try
    {
        testCode();
    }
    catch (Exception e)
    {
        return e;
    }

    Assert.Fail("The delegate did not throw an exception.");
    return null;
}


Edit: one nice side benefit of this approach is that you guarantee that the action you’re testing is the one that threw the exception.  With the stock MSTest exception attribute, all that’s really tested is that something, anything, in your test threw the expected exception.  If you’re testing for a common exception like InvalidOperationException, that exception might accidentally be thrown by some supporting object, and while the test passes, it’s not testing what you think it’s testing.  This approach avoids that problem.

Another benefit is that there might be certain behaviors that you expect to see even when an exception is thrown.  For example, maybe if you pass bad parameters into a service method you expect to get an ArgumentException, but first you want the bad call to be written to the event log.  The MSTest ExpectedException attribute doesn’t work for that either, because it gives you no opportunity to verify anything after the exception is thrown.  This approach does.
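For example, a follow-up observation for that event-log scenario might look something like this (a sketch; stubLogger and LogBadCall are hypothetical names for a logger stub set up in the context):

[TestMethod]
public void should_log_the_bad_call()
{
    // The exception was caught and saved in BecauseOf(), so the test still
    // runs and we can verify side effects that happened before the throw.
    // (stubLogger and LogBadCall are hypothetical names for illustration.)
    this.thrownException.Is(typeof(ArgumentException));
    this.stubLogger.AssertWasCalled(x => x.LogBadCall(Arg<string>.Is.Anything));
}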


BDD Specification Extensions

(I originally posted this on my MSDN blog.)

A few people have asked me for more details on the specification extension methods we use to make our BDD tests more readable.  As I mentioned previously, SpecUnit.net is a great library that has all kinds of useful extension methods, but it’s written to be used on top of xUnit.net.  If you want to use the technique in the MSTest framework, you’ll have to implement your own spec extension methods.

Fortunately, they’re trivial to write.  Here’s an example of what we wrote for object references, which actually covers the majority of tests since our observations are often just testing for equality (or not) and nullness (or not).

/// <summary>
/// Provides BDD-style assertions for object references.
/// </summary>
public static class ObjectSpecificationExtensions
{
    /// <summary>
    /// Verifies that the object reference is null.
    /// </summary>
    /// <param name="actual">The reference to verify.</param>
    public static void ShouldBeNull(this object actual)
    {
        Assert.IsNull(actual);
    }

    /// <summary>
    /// Verifies that the object reference is not null.
    /// </summary>
    /// <param name="actual">The reference to verify.</param>
    public static void ShouldNotBeNull(this object actual)
    {
        Assert.IsNotNull(actual);
    }

    /// <summary>
    /// Verifies that two objects are equal.
    /// </summary>
    /// <param name="actual">The actual object.</param>
    /// <param name="expected">The expected object.</param>
    public static void ShouldEqual(this object actual, object expected)
    {
        Assert.AreEqual(expected, actual);
    }

    /// <summary>
    /// Verifies that two objects are not equal.
    /// </summary>
    /// <param name="actual">The actual object.</param>
    /// <param name="notExpected">The unexpected object.</param>
    public static void ShouldNotEqual(this object actual, object notExpected)
    {
        Assert.AreNotEqual(notExpected, actual);
    }

    /// <summary>
    /// Verifies that two objects are the same instance.
    /// </summary>
    /// <param name="actual">The actual object.</param>
    /// <param name="expected">The expected object.</param>
    public static void ShouldBeTheSameAs(this object actual, object expected)
    {
        Assert.AreSame(expected, actual);
    }

    /// <summary>
    /// Verifies that two objects are not the same instance.
    /// </summary>
    /// <param name="actual">The actual object.</param>
    /// <param name="notExpected">The unexpected object.</param>
    public static void ShouldNotBeTheSameAs(this object actual, object notExpected)
    {
        Assert.AreNotSame(notExpected, actual);
    }

    /// <summary>
    /// Verifies that an object is an instance of a specific type.
    /// </summary>
    /// <param name="actual">The actual object.</param>
    /// <param name="expected">The expected type.</param>
    public static void Is(this object actual, Type expected)
    {
        Assert.IsInstanceOfType(actual, expected);
    }

    /// <summary>
    /// Verifies that an object is not an instance of a specific type.
    /// </summary>
    /// <param name="actual">The actual object.</param>
    /// <param name="notExpected">The unexpected type.</param>
    public static void IsNot(this object actual, Type notExpected)
    {
        Assert.IsNotInstanceOfType(actual, notExpected);
    }
}

We also have extensions for specific types (Int32, String, TimeSpan, Single, etc.) that implement things like ShouldBeGreaterThan(), ShouldBeLessThan(), ShouldBeEmpty(), ShouldBeCloseTo(), etc.  The nice thing about this strategy is that we don’t have to have our entire spec extension library written up front.  If we’re writing a test and want to assert some condition that isn’t already implemented by a spec extension method, we can just add an appropriate extension method at that point.
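For example, here are sketches of the TimeSpan and String extensions used elsewhere in these posts (the original implementations weren’t shown, but they’d look roughly like this):

/// <summary>
/// Verifies that the actual value is greater than the expected value.
/// </summary>
/// <param name="actual">The actual value.</param>
/// <param name="expected">The value that should be exceeded.</param>
public static void ShouldBeGreaterThan(this TimeSpan actual, TimeSpan expected)
{
    Assert.IsTrue(actual > expected,
        string.Format("Expected a value greater than {0} but found {1}.", expected, actual));
}

/// <summary>
/// Verifies that a string is empty.
/// </summary>
/// <param name="actual">The string to verify.</param>
public static void ShouldBeEmpty(this string actual)
{
    Assert.AreEqual(string.Empty, actual);
}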

The extension methods above are all one-liners that just pass the call through to the MSTest Assert class, but spec extensions are even more helpful when you have a complex assertion that requires multiple lines of code to express.  You can hide all that complexity behind a nice intention-revealing extension method that leaves your tests crisp and easy to understand.
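For instance, a multi-step collection comparison can collapse into one intention-revealing call.  Here’s a hypothetical example (not from our actual library):

/// <summary>
/// Verifies that two sequences contain equal elements in the same order.
/// </summary>
/// <param name="actual">The actual sequence.</param>
/// <param name="expected">The expected sequence.</param>
public static void ShouldContainInOrder<T>(this IEnumerable<T> actual, IEnumerable<T> expected)
{
    List<T> actualList = new List<T>(actual);
    List<T> expectedList = new List<T>(expected);

    Assert.AreEqual(expectedList.Count, actualList.Count, "The sequences have different lengths.");

    for (int i = 0; i < expectedList.Count; i++)
    {
        Assert.AreEqual(expectedList[i], actualList[i],
            string.Format("The sequences differ at index {0}.", i));
    }
}

In a test, the whole comparison then reads as a single line: results.ShouldContainInOrder(expectedResults);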


BDD With MSTest

(I originally posted this on my MSDN blog.)

In my previous post I mentioned that I was writing BDD-style unit tests in the standard MSTest environment that’s part of Visual Studio.  In an ideal world, I’d probably choose a test framework explicitly designed for BDD, like MSpec, but there’s a maximum rate of change that I and my team can absorb, and right now a new test framework would exceed that boundary.

Instead, I use a small adapter class to translate MSTest’s attribute system to my preferred BDD context/specification style.  The adapter class is trivial, but I thought I’d post it here in case anyone wants to see it.

/// <summary>
/// Provides common services for BDD-style (context/specification) unit tests.
/// Serves as an adapter between the MSTest framework and our BDD-style tests.
/// </summary>
public abstract class ContextSpecification
{
    private TestContext testContextInstance;

    /// <summary>
    /// Gets or sets the test context which provides information about and
    /// functionality for the current test run.
    /// </summary>
    public TestContext TestContext
    {
        get { return this.testContextInstance; }
        set { this.testContextInstance = value; }
    }

    /// <summary>
    /// Steps that are run before each test.
    /// </summary>
    [TestInitialize]
    public void TestInitialize()
    {
        this.Context();
        this.BecauseOf();
    }

    /// <summary>
    /// Steps that are run after each test.
    /// </summary>
    [TestCleanup]
    public void TestCleanup()
    {
        this.Cleanup();
    }

    /// <summary>
    /// Sets up the environment for a specification context.
    /// </summary>
    protected virtual void Context()
    {
    }

    /// <summary>
    /// Acts on the context to create the observable condition.
    /// </summary>
    protected virtual void BecauseOf()
    {
    }

    /// <summary>
    /// Cleans up the context after the specification is verified.
    /// </summary>
    protected virtual void Cleanup()
    {
    }
}

That’s all there is to it.  Note that Context() and BecauseOf() are run before every test, even though technically they could be run only once for all the tests in a particular context class.  This is just to make absolutely certain that no side-effects bleed through from one test to another.

There are a couple of other conventions that we use to make the TDD/BDD process better.  The first is that we put common context code into a shared context class, then derive our test context classes from that.  We try to keep the shared context class very simple; mostly just declaring and instantiating any common fields or stubbed interfaces and declaring and instantiating the class under test.  If you limit yourself to just those steps in the shared context class then your tests will still be self-explanatory and understandable.  Beware, though, of cramming too much complex setup code into the shared context class because if you do, no one will be able to understand your tests without flipping back and forth between the test and the shared context class.

The second convention is that we place all the test context classes inside of an empty static class named for the class or concern we’re testing.  (Update: edited to better match the screenshot below.)  So, for example, all of our EncryptionService tests are contained inside of a static class named EncryptionServiceTests.

The only reason for this is to help disambiguate the tests in the Test Results window after you run all your tests.  With the combination of the BDD naming scheme and the MSTest runner, it can be hard to tell which component a failing test is targeting.  The solution is to use a containing class as described above and then add the “Class Name” column to the Test Results window, like so:

  1. In the Test Results window, right-click on a column header.
  2. Choose Add/Remove Columns.
  3. Enable the “Class Name” column and move it up to before “Test Name”.
  4. Click OK.

Now your test results will look like this:

[Screenshot: the Test Results window with the Class Name column displayed before the Test Name column.]

Sadly, Visual Studio doesn’t save your column preferences so you have to re-enable the Class Name column every time you start Visual Studio.  Grrrr.

So here’s what it looks like all put together (I trimmed this code down to one test for space reasons):

public static class EncryptionServiceTests
{
    public class EncryptionServiceContext : ContextSpecification
    {
        protected string plainText;
        protected string encryptedText;
        protected EncryptionService encryptionService;

        protected override void Context()
        {
            this.encryptionService = new EncryptionService();
        }
    }

    [TestClass]
    public class when_empty_encrypted_text_is_submitted_for_decryption : EncryptionServiceContext
    {
        protected override void BecauseOf()
        {
            this.plainText = this.encryptionService.Decrypt(string.Empty);
        }

        [TestMethod]
        public void the_plaintext_should_be_empty()
        {
            this.plainText.ShouldBeEmpty();
        }
    }
}

Writing Unit Tests That People Can Read

(I originally posted this on my MSDN blog.)

There’s a great quote from Refactoring: Improving the Design of Existing Code:

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”  — Martin Fowler

Recently, I’ve discovered that it applies to more than just code.  It also applies to unit tests.

I’ve been writing unit tests and using TDD for a while now, and while it worked okay, I wasn’t really happy with the aesthetics of my unit tests.  They were technically correct but were difficult to write correctly the first time, even more difficult for me to revisit and modify later, and nigh impossible for anyone not already fluent in the quirky idioms to read and understand.  That’s not a great recipe for success.

I went searching for ways to make my unit tests easier to understand and discovered three techniques that improve things quite a bit:

  1. Structuring tests in the Behavior-Driven Development style
  2. Using the new Arrange-Act-Assert syntax in Rhino Mocks 3.5
  3. Using specification extension methods à la SpecUnit.net

The feedback from my team members is that unit tests written in the new style are significantly easier to read, understand, and modify and they do a better job of documenting the intended behavior of the code.

Behavior-Driven Development

Behavior-driven development, or BDD, can be hard to grasp if you just do web searches and read the literature.  At least, it was for me.  I kept getting lost in what seemed to be wandering philosophical debates about “executable specifications” and customer-oriented languages.  It didn’t help that there are apparently two flavors of BDD, each with their own esoterica.

I finally found an excellent article in CoDe Magazine, written by Scott Bellware, that brought it down to the practical level of improving the way unit tests are written, with code samples that made sense to me.  I’m sure there’s a lot of subtle philosophy in the article that went over my head (I’m a barbarian, sorry), but I immediately seized on the Context/Specification format for my tests.  (BDD adherents prefer to call them “specs”, for very good reasons, but to avoid confusion my team has continued to call them tests for now.)

I won’t give a thorough definition of Context/Specification or of the terms it uses here; you can refer to Bellware’s article for that.  However, the elements of the Context/Specification approach I found most helpful are:

  1. Use language that focuses on behaviors, particularly user-oriented behaviors, rather than implementation details.
  2. Emphasize the distinction between the initial conditions in your tests, the actions that your tests take to change those initial conditions, and the expectations you have about the results.
  3. Group your tests together according to context.

Focusing on behaviors rather than implementation details keeps my mind on the “design” aspect of TDD rather than jumping ahead to assumptions about how I’m going to implement things.  It often leads me to a design that better reflects the domain in which I’m working.

Separating the setup, action, and observation parts of my tests makes them much easier to quickly scan to find the particular test I’m interested in at the moment.  Or if I’m looking at unfamiliar code for the first time, it’s easy to get a high-level look at what the class’s responsibilities are and what circumstances it’s equipped to handle without getting bogged down in test implementation details.

Grouping tests according to context helps me to think carefully about what I need to consider and makes it easier to see when there are behaviors that I’ve overlooked.

I don’t doubt that executable specifications and ubiquitous language and all that stuff have a lot of value, and I’m sure I’ll learn more about those concepts in the future, but for now I’m content to use the BDD idioms merely as a better way to drive the design of my code.

Arrange-Act-Assert Syntax

I settled on Oren Eini’s Rhino Mocks pretty early on for my interface mocking needs and I was relatively happy with it.  The only problem was that the record-playback pattern was counter-intuitive and was difficult to use in a clearly-understandable way.  Fortunately, version 3.5 was released a few months ago with a new arrange-act-assert syntax that makes tests much easier to understand.

With the AAA syntax, there’s a much better separation between the code that sets up the mocks and the code that verifies your expectations.  The flow reads naturally from beginning to end which is a huge help.

One minor drawback to the new AAA syntax is its reliance on lambda expressions, which can be somewhat daunting for people who aren’t familiar with them.  However, lambdas make sense once you understand what they express, and familiarity with them is an important skill to build in any case, so it’s not wasted effort.
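If lambdas are new to you, the key mental translation is that something like x => x.Exists is just compact syntax for an anonymous method.  A plain C# illustration (not Rhino Mocks-specific):

// These two declarations are equivalent; the lambda in the first line is
// just shorthand for the anonymous method in the second.
Func<string, int> getLength1 = s => s.Length;
Func<string, int> getLength2 = delegate(string s) { return s.Length; };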

BDD and AAA Example

Let’s try out an example.  In one of my projects, I have a class that watches a folder on disk for new files to be created.  When a new file appears, the upload file watcher will make sure it’s one we recognize and are interested in, and if so, will invoke an action.

In this interaction-based test, I was trying to express the idea that if you ask it to watch a folder that doesn’t exist, it should create it for you.  This is the test as I originally wrote it (the generic type arguments in the stub calls were eaten by the blog’s HTML, so the interface names shown below are representative):

[TestMethod]
public void CreatesFolderIfNonexistent()
{
    // ITimer, IPathInfo, and ILogger are representative interface names.
    var mockTimer = this.mocks.Stub<ITimer>();
    var mockPath = this.mocks.DynamicMock<IPathInfo>();
    var stubFileSystemWatcher = new FileSystemWatcherStub();
    var mockLogger = this.mocks.Stub<ILogger>();

    using (this.mocks.Record())
    {
        SetupResult.For(mockPath.GetFiles()).Return(new List<IFileInfo>());
        SetupResult.For(mockPath.Exists).Return(false);
        mockPath.Create();
    }

    using (this.mocks.Playback())
    {
        var watcher = new UploadFileWatcher(mockPath, mockLogger, stubFileSystemWatcher, mockTimer);
        watcher.Start();
    }
}


I was already trying to orient my tests around class behaviors, but it’s difficult to absorb all the implications of this test name in a quick scan of the code, especially when it’s mixed in with a whole bunch of equally-terse names.

The record-playback mocking syntax is all jumbled up, with setup and expectations mixed together and the actual action coming last.  There’s no clearly-labeled expectation at all – you just have to know that the call to Create() inside the record block is the expectation.

Now here’s the test after I rewrote it to use both the Context/Specification style and Rhino Mocks’ AAA syntax:

[TestClass]
public class when_the_watcher_is_started_and_the_watch_folder_does_not_exist : UploadFileWatcherContext
{
    protected override void Context()
    {
        base.Context();
        this.stubPathToWatch.Stub(x => x.Exists).Return(false);
        this.stubPathToWatch.Stub(x => x.GetFiles()).Return(new List<IFileInfo>());
    }

    protected override void BecauseOf()
    {
        this.watcher.Start(this.stubUploadFileSource);
    }

    [TestMethod]
    public void should_create_the_watched_folder()
    {
        this.stubPathToWatch.AssertWasCalled(x => x.Create());
    }

    [TestMethod]
    public void should_start_watching_the_watch_folder()
    {
        this.stubFileSystemWatcher.AssertWasCalled(
            x => x.StartWatchingFolderForNewFiles(
                Arg.Is(this.stubPathToWatch),
                Arg<Action<IFileInfo>>.Is.Anything));
    }
}


(Sorry about the line wrapping: I’m still figuring out the best way to include code snippets.)

The first thing to notice is that there’s an entire test class defined with a really verbose name.  The test class represents a particular set of circumstances that we want to think about.  The class name is written out in proper English so it’s very explicit, eliminates guesswork, and is easy to scan.

Next, there is a method called Context() devoted strictly to setting up the initial conditions for this test.  Well, actually two of them: there’s now a base class called UploadFileWatcherContext that holds the common setup code used by all tests.  It just creates the bare stubbed interfaces that we need and injects them into the class under test – I don’t do anything fancy in the base class.  In this context, we set up a couple of behaviors on the stubbed interfaces.

There’s also a method called BecauseOf() devoted strictly to performing the action that causes the results we’re going to look for.

Finally, there are two actual tests that verify our expectations.  As with the context class, these tests are named in a verbose style that a) is explicit and easy to scan and b) uses behavior-oriented language and avoids implementation jargon.  Each test clearly verifies that certain methods were invoked on our stubbed interfaces, which is how we determine whether the expected behavior occurred.

Think of it as a state machine.  Context() defines state A, BecauseOf() defines the transition, and the tests collectively determine whether we arrived at state B.

Although the second version of this unit test has many more parts than the first, there are several important benefits.  One, if you’re scanning the tests and don’t particularly care how they’re implemented, it’s easy to skip the Context() and BecauseOf() methods entirely.  They’re not important in understanding what the tests are expressing.  All you really need to understand are the class name and the test method names and you can optionally go deeper if you want more details on a particular test.

Two, when you first state a particular circumstance and then think about everything you expect to happen in those circumstances, it’s easier to come up with a complete list.  The original test didn’t actually verify that the newly-created folder would be watched after it was created, but that omission wasn’t at all obvious.  When I refactored the test to the new format, it was immediately obvious that there were two expected behaviors in this situation, not just one.

Three, separating setup, acting, and observing helps to drive a better design.  In the old unit test, I was supplying the path to watch to the constructor rather than to the Start() method.  Once I separated setup from acting and moved the object construction to a base context class, it became obvious that the path properly belonged to the action phase, not the setup phase.  Once I moved the path parameter to Start(), the design of other classes that use the upload file watcher magically became a lot simpler.

Specification Extensions

The third technique for bringing clarity to unit tests is to use extension methods to make the tests read more naturally, à la SpecUnit.net.  (SpecUnit.net is written for xUnit.net, and right now I’m using the default Visual Studio test environment, so I just implemented my own.)

The standard Assert class gets the job done just fine, but it reads like code, not like English.  In unit tests, we want to optimize for scanning and instant comprehension.  One easy way to do that is to implement extension methods on types that express the intent of the tests as clearly and succinctly as possible.  For instance, consider this test, written two different ways:

[TestClass]
public class when_empty_encrypted_text_is_submitted_for_decryption : EncryptionServiceContext
{
    protected override void BecauseOf()
    {
        this.plainText = this.encryptionService.Decrypt(string.Empty);
    }

    [TestMethod]
    public void the_plaintext_should_be_empty_V1()
    {
        Assert.AreEqual(string.Empty, this.plainText);
    }

    [TestMethod]
    public void the_plaintext_should_be_empty_V2()
    {
        this.plainText.ShouldBeEmpty();
    }
}


In the first version of the test, we use Assert.AreEqual, which works but requires a bit of mental translation to comprehend.  Sure, experienced programmers can do it very quickly, but it’s still an overhead cost that can be reduced.

In the second version, we’ve defined an extension method on the string type that simply performs the assert on our behalf.  Same functionality, but much simpler to scan and comprehend.

We started with a few basic methods in our extension library and every time we want to express an observation that doesn’t read naturally, we just add a new extension method to the library.  This way, even very complex observations can be encapsulated in one line that succinctly states the intent of the observation and hides the implementation details.

More To Come

There are a few interesting details still to cover, but this is enough for one post.  In the future I’ll cover handling expected exceptions and how to write BDD-style tests in the default Visual Studio test runner.  (It requires just a tiny bit of glue code.)