The job market for senior software developers is very hot right now (as of early 2014), at least in Seattle. I know of several companies in the area that have aggressive hiring plans for the next 12 months, often with the goal of doubling or tripling in size. Most of these companies intend to hire mostly or exclusively senior developers because their long-term productivity is so much greater than the typical junior or just-out-of-school developer. I find it remarkable, however, that these same companies often don’t have an interviewing strategy that’s designed to select the kind of candidates they’re actually looking for.
A fairly typical interview strategy consists of an introductory conversation focused on cultural fit and soft skills. This is sometimes a panel interview but is relatively short. The vast majority of time is spent in one-on-one sessions doing whiteboard algorithm coding problems. These coding problems are typically small enough that there’s a possibility of completing them in an hour but also obscure or tricky enough that most candidates have a significant likelihood of messing up or not finishing. The candidate has to understand the description of the problem, construct an algorithm, and write code for the algorithm on a whiteboard, all the while talking out loud to reveal his or her thinking process. Google is the most famous for this style of interviewing, but Microsoft has been doing it for decades as well and the whole rest of the industry has pretty much followed the lead of the big dogs.
Broken By Design
So what’s the problem? Well, according to Laszlo Bock, senior vice president of people operations at Google, it simply doesn’t work:
Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess, except for one guy who was highly predictive because he only interviewed people for a very specialized area, where he happened to be the world’s leading expert.
Why doesn’t it work? The short answer is because what we ask people to do in interviews often has very little relationship to what we actually expect them to do on the job. Maria Konnikova offers a longer answer in a New Yorker article:
The major problem with most attempts to predict a specific outcome, such as interviews, is decontextualization: the attempt takes place in a generalized environment, as opposed to the context in which a behavior or trait naturally occurs. Google’s brainteasers measure how good people are at quickly coming up with a clever, plausible-seeming solution to an abstract problem under pressure. But employees don’t experience this particular type of pressure on the job. What the interviewee faces, instead, is the objective of a stressful, artificial interview setting: to make an impression that speaks to her qualifications in a limited time, within the narrow parameters set by the interviewer. What’s more, the candidate is asked to handle an abstracted “gotcha” situation, where thinking quickly is often more important than thinking well. Instead of determining how someone will perform on relevant tasks, the interviewer measures how the candidate will handle a brainteaser during an interview, and not much more.
(Edit: she’s referring to non-coding brainteaser questions here, but many “coding” interview questions also fall into the toy brainteaser category.)
Why is it that we set out intending to hire senior developers, who are valuable specifically for their maturity, experience, and enormous depth of knowledge about how to build large software systems, and then end up evaluating them strictly on small-scale tactical coding exercises that would look right at home in any undergraduate homework set? Why do we evaluate them in an artificial environment with horribly artificial tools? Why is it so completely out of context?
Don’t get me wrong – it’s obviously important that the people we hire be intelligent, logical, and competent at tactical coding. But tactical coding is only one part of what makes a great senior developer. There are many vital technical skills that we don’t often explore. When our interviews are artificial and one-dimensional we end up with poor-quality teams, because a healthy software team needs a variety of people with various strengths. Selecting for only one narrow type of skill, even if it’s a useful skill, is a mistake. There has to be a way to create more relevance between our interview questions and the palette of skills we’re actually looking for.
Focus on Reality
What is it that we do all day at our jobs? We should take whatever that is and transfer it as directly as we possibly can into an interview setting. If we’re not doing it all day every day, we shouldn’t ask our candidates to do it either.
Let’s start with the venerable whiteboard: when was the last time any of us wrote more than a couple of lines of syntactically correct code on a whiteboard outside of an interview setting? That’s not a job skill we value, so don’t make candidates do it. Give them a laptop to use, or better yet, let them bring their own if they have one, so they have a familiar programming environment to work with.
Next, what kind of problems do we solve? A few of us regularly invent brand new algorithmic concepts, be it in computer vision, artificial intelligence, or other “hard computer science” fields. But let’s be honest – the vast majority of us spend our time doing more prosaic things. We just move data from point A to point B, transforming it along the way, and hopefully without mangling it, losing parts of it, or going offline during the process. That’s pretty much it. Our success is measured in our ability to take old, crappy code, modify it to perform some additional business function, and (if we’re really talented) leave it in a slightly less old and slightly less crappy state than we found it. Most of us will never invent any new algorithm from first principles that’s worth publishing in a journal.
This is the reality of day-to-day software development. Small-scale tactical coding skills are expected to measure up to a certain consistent bar, but the key differentiator is the ability to write self-documenting, maintainable, bug-free code, and to design architectural systems that don’t force a wholesale rewrite every three years. Clever binary tree algorithms we can get from the internet; a clean, supple codebase depends directly on the quality of the developers we hire and ultimately determines whether our companies succeed or fail.
How do those skills translate into an interview setting? I think a refactoring exercise is a great way to accomplish this. Give the candidate a piece of code that works correctly at the moment but is ugly and/or fragile, then ask them to extend its behavior with an additional feature. The new feature should be small, even trivial (this isn’t an algorithm test), because the real challenge is to add the new behavior without introducing any regression bugs while leaving the code in a much better state than it started in. I’m sure we all have plenty of great messes we can pull straight from our production codebases (don’t lie, you know you do!), but if you want an example, take a look at the Gilded Rose Kata (originally in C# but available in several other languages as well).
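As a concrete sketch of what such an exercise might look like (the function, customer tiers, and pricing rules here are invented for illustration, not taken from the Gilded Rose Kata or any real codebase), imagine handing the candidate the repetitive function below and asking them to add a new “gold” tier with a 20% discount without breaking existing behavior:

```python
# Hypothetical starting code handed to the candidate. It works, but the
# discount logic is duplicated per tier, so every new tier means another
# copy-pasted branch. Task: add a "gold" tier (20% off) without
# introducing regressions, and leave the code cleaner than you found it.
def compute_price(customer_type, base_price):
    if customer_type == "regular":
        price = base_price
        if price > 100:
            price = price - price * 0.05  # bulk discount over 100
        return round(price, 2)
    elif customer_type == "member":
        price = base_price - base_price * 0.10
        if price > 100:
            price = price - price * 0.05  # same bulk discount, duplicated
        return round(price, 2)
    else:
        return round(base_price, 2)


# One possible result after the candidate refactors and adds the tier:
# the tier discounts become data, and the bulk rule appears exactly once.
TIER_DISCOUNTS = {"regular": 0.00, "member": 0.10, "gold": 0.20}
BULK_THRESHOLD = 100
BULK_DISCOUNT = 0.05

def compute_price_refactored(customer_type, base_price):
    price = base_price * (1 - TIER_DISCOUNTS.get(customer_type, 0.0))
    if price > BULK_THRESHOLD:
        price *= 1 - BULK_DISCOUNT
    return round(price, 2)
```

What the interviewer watches for is exactly the day-job skill: does the candidate write a few characterization tests of the old behavior before touching anything, or do they just start typing and hope?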
A few companies have expanded even further on this idea of reality-based interviewing. They’ve done things like dropping the candidate into the team room for the entire day and having them pair with multiple team members on actual production code. Other companies have given candidates a short-term contract job that can be completed in a week of evenings or a weekend. The candidate gets paid a standard contracting rate for their time and the company gets either good code and a well-validated employee or at worst avoids a bad full-time hire. Those techniques may have logistical problems for many companies, and they don’t scale to high volume very well, but every company ought to be able to come up with some way to ground their interviewing process more firmly in reality.
Edit: coincidentally, Daniel Blumenthal published a defense of the traditional whiteboard coding exercise just after I wrote this. It’s an interesting counterpoint to what I’ve written here. He wrote, “you don’t get to choose how you get interviewed.” That is, of course, completely correct. My argument is not that we should make interviews easier to pass, or lower our standards, but rather that we should construct our interviews to screen for the skills that we actually want our employees to have. If you really need your people to solve “completely novel problems”, as Daniel wrote, then interview for that skill. If you actually need other things, interview for those things.