(I originally posted this on my MSDN blog.)
There have been several blog posts written recently on the topic of TDD and whether it ultimately makes you more productive or just slows you down. I don’t have much to add to that discussion, but I found a comment left by Ben Rady on one of Bob Martin’s posts and thought that it was excellent (the comment, not the post, though the post was good too):
TDD slows you down if your goal is to be “done coding”. If your definition of done amounts to “It compiles, and nobody can prove that it doesn’t work” then writing a bunch of tests to prove whether or not it works makes your job harder, not easier. Sadly, in many organizations this is what “done” means, and so good developers wind up fighting against their environment, while the bad ones are encouraged to do more of the same.
If, on the other hand, your goal is to deliver a working system, then TDD makes you go faster. This is the only useful definition of “done” anyway, because that’s where you start making money.
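For readers unfamiliar with the mechanics, the loop Rady is describing amounts to writing a failing test before the code that makes it pass. Here is a minimal sketch in Python; the function name and behavior are purely illustrative, not from the original discussion:

```python
import unittest

# Step 1 ("red"): write a test that captures the desired behavior
# before the implementation exists. Run it and watch it fail.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("hello world"), "hello-world")

    def test_lowercases_input(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 ("green"): write just enough code to make the tests pass.
def slugify(title):
    return title.lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main(exit=False)
```

The point of the cycle is that "done" is defined by the passing tests, not by the code merely compiling: the test is the executable statement of what "working" means.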
This struck me as particularly interesting because of a point I heard someone make here at work last week: that Microsoft has a far higher tester-to-developer ratio than a lot of other leading software companies. Those other companies have a quality standard comparable to Microsoft’s, but somehow they achieve it with many fewer testers. Why is that?
I’ve spent most of my career working as a developer of test tools in test organizations at Microsoft, so I have a huge amount of respect for the great job that Microsoft testers do. But, having worked here for fifteen years, I believe that a large part of the work our test teams do is avoidable; it’s the unfortunate result of our traditionally developer-centric culture, which has a lengthy history of focusing on the “done coding” goal rather than the “working system” goal. We need so many testers because they have to spend a large part of their time wrangling the devs into moving in the right direction.
I’m not sure if it’s cause or effect, but there’s definitely a strong correlation between our “done coding” culture and the strong wall we have between the development and testing disciplines at Microsoft. Developers write product code and testers write test code and never the twain shall meet. Developers are often completely ignorant of the tools and automated test suites that the testers use to test the builds. If a test tool gets broken by a product change, it’s pretty rare that a developer would either know or care. I’m pretty sure there’s a better way to do it.
To be fair, there’s nothing particularly unusual about Microsoft’s historical culture; that’s the way virtually the entire industry operated fifteen years ago. But in the past several years the industry (or a significant part of it, anyway) has made large strides forward and Microsoft is still playing catch-up. Again, to be fair, Microsoft is an enormous company with many different micro-cultures; there are plenty of teams at Microsoft who are very high-functioning, where developers take complete responsibility for delivering working systems, and where testers have the time to do deep and creative exploration of edge cases because the features just work. But from where I sit that doesn’t appear to be part of our broad corporate culture yet.
A lot of people are working hard to change that, and it is changing. As frustrating as it can be to deal with our historical cultural baggage, it’s also fascinating to watch a culture change happen in real time. I’m glad to be here and to be a small part of it.
Edit: I’m proud to say that Microsoft does value quality software quite a lot. It’s just that we take the long way around to achieving that quality; we’re apt to try to “test it in” after it’s written rather than focusing on reducing the need for testing in the first place. That’s the problem I’m talking about here.