Fixing the error “Unable to launch the IIS Express Web server. Failed to register URL. Access is denied.”

I had a Visual Studio web project that was configured to use IIS Express on port 8080.  That should normally be fine and IIS Express is supposed to be able to run without administrator privileges, but when I would try to run the app I would get this error:

Unable to launch the IIS Express Web server.

Failed to register URL "http://localhost:8080/" for site "MySite" application "/". Error description: Access is denied. (0x80070005).

Launching Visual Studio with administrator credentials would let it run the application successfully, but that kind of defeated the purpose of using IIS Express in the first place.

It turns out that the problem in my case was that something else had previously created a URL reservation for port 8080.  I have no idea what did it, but running this command in Powershell showed the culprit:

[C:\Users\Eric] 5/3/2012 2:39 PM
14> netsh http show urlacl | select-string "8080"

    Reserved URL            : http://+:8080/

The solution was to run this command in an elevated shell:

[C:\Users\Eric] 5/3/2012 2:39 PM
2> netsh http delete urlacl http://+:8080/

URL reservation successfully deleted

Now I can run my web app from VS without elevation.
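
If this bites you again on a different port, the two steps can be wrapped up in a few lines of Powershell.  This is just a sketch (the function name is my own invention, not a built-in cmdlet), and it assumes an elevated prompt and English netsh output:

    # Find any URL reservations mentioning the given port and delete them (run elevated)
    function Remove-UrlReservation([int]$Port) {
        $lines = netsh http show urlacl | Select-String ":$Port/"
        foreach ($line in $lines) {
            # Each matching line looks like "Reserved URL : http://+:8080/"
            $url = ($line.Line -split ':\s+', 2)[1].Trim()
            netsh http delete urlacl "url=$url"
        }
    }

    Remove-UrlReservation 8080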

Fixing “Could not load file or assembly ‘Microsoft.SqlServer.Management.SqlParser’”

It’s incredibly frustrating that this still happens, but it turns out that if you have Visual Studio 2010 and SQL Server Express 2008 (or R2) on your machine, and you uninstall SQL Server Express 2008 and install SQL Server Express 2012 instead, you’ll get an error trying to load database projects in Visual Studio 2010: “Could not load file or assembly ‘Microsoft.SqlServer.Management.SqlParser’”.

Why can’t SQL Server 2012 install the stuff it knows Visual Studio requires?  Fine, whatever.

The fix for this problem is the same as the last time I posted something like this:

  1. Locate your Visual Studio 2010 installation media.
  2. In the \WCU\DAC folder, you’ll find three MSIs: DACFramework_enu.msi, DACProjectSystemSetup_enu.msi, and TSqlLanguageService_enu.msi. Run and install each of them. (Possibly just the third one is required in this case.  I’m not sure.)
  3. Reapply Visual Studio 2010 SP1.

You should be back to a working state.
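
If you'd rather not click through three installers, the same steps can be scripted.  Here's a rough Powershell sketch; the D:\ drive letter is a placeholder for wherever your Visual Studio 2010 media is mounted:

    # Install the three DAC/T-SQL MSIs from the VS 2010 media (run elevated)
    $dacFolder = 'D:\WCU\DAC'
    'DACFramework_enu.msi', 'DACProjectSystemSetup_enu.msi', 'TSqlLanguageService_enu.msi' |
        ForEach-Object {
            Start-Process msiexec.exe -ArgumentList "/i `"$dacFolder\$_`" /passive" -Wait
        }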

I Hate #Regions

This is mostly a note to myself so I don’t lose this awesome Visual Studio extension.

I hate C# #region tags in Visual Studio.  I hate opening a class file and then having to expand all the regions in order to read the code.  If you feel the need to use regions to make your class file more manageable, then it’s quite likely that your class is actually too big and desperately needs to be refactored.  Just . . . no.

But since not everyone is as enlightened as myself (ahem), sometimes I have to work with code that uses lots of #region tags.  In those cases where I can’t delete them all with extreme prejudice, I can at least install the excellent I Hate #Regions Visual Studio extension that will auto-expand regions whenever I open a file and will also display the #region tags in very small type so that they’re not as obnoxious.  Ah, relief!

Eliminating Friction, Part 3: The Build Server

Once we had a build script for Jumala we could do something about setting up a continuous integration build server.

Before the build server got going, we had several chronic problems that sapped time and morale from our team:

  • People would forget to include a new file with a commit or break the build in various other ways. This wouldn’t be discovered until someone else updated from source control and couldn’t build any more. That’s never fun.
  • Because there wasn’t any official build, the process of deploying builds to the live web sites was pretty much having someone build on his/her local machine and then manually copying the resulting binaries to the web server. Every once in a while a bad build would go out with personal configuration options or uncommitted code changes that shouldn’t have been made public.  That’s never fun either.
  • As a result of the previous point, we could never tell exactly what build was running on the web servers at any given moment.  The best we could do was inspect timestamps on the files but that didn’t tell us exactly what commits were included in a build.

Needless to say, that wasn’t a great experience for our development team. The solution was to set up a continuous integration build server that would automatically sync, build, and report after each commit.

Solving the problem

I downloaded and installed TeamCity from JetBrains.  TeamCity is one of those rare tools that manages to combine both ridiculously awesome ease of use and ridiculously powerful configurability into one package.  I highly recommend it if you’re looking for a CI build server.  JetBrains offers very generous licensing terms for TeamCity which makes it free to use for most small teams.  By the time your team is large enough to need more than what the free license provides, you won’t mind paying for it.

I set up a build configuration for our main branch in SVN, set it to build and run unit tests after every commit, and configured TeamCity to send email to all users on build failures. This way when someone breaks the build we know about it instantly and we know exactly what caused it.  Sure, we still have occasional build breaks but they’re not a big deal any more.

I extended our Rake build script to copy all binaries to a designated output folder in the source tree then configured TeamCity to grab that output folder and store it as the build artifacts.  Now when we want to deploy a build to our web servers, we can go to the TeamCity interface, download the build artifacts for a particular build, and deploy them without fear of random junk polluting them.  No more deploying personal dev builds!
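
If you’re setting this up yourself, the TeamCity side of that is just the artifact paths setting on the build configuration.  Something like this (the folder name is whatever your build script copies binaries to):

    output/** => output.zip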

I also set up a build numbering scheme for the TeamCity builds so that every build is stamped with a <major>.<minor>.<TeamCity build counter>.<SVN revision number> scheme.  This lets us unambiguously trace back any build binary to exactly where it came from and what it contains.
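
In TeamCity that’s the build number format setting on the build configuration’s general settings page.  A sketch, with made-up major and minor values (%build.vcs.number% assumes a single attached VCS root):

    1.2.%build.counter%.%build.vcs.number%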

Finally, once the TeamCity build was working smoothly I created a template from the main branch build and now use that template to quickly define new build configurations for our release branches as we create them. When we create a new release branch I copy the version number information from the main branch build definition to the new release branch build definition, then bump the minor version number and reset the TeamCity build counter on the main branch.  This way all builds from all branches are unambiguous and we know when and where they came from.

A dedicated build server has been considered a best practice for decades now, and a CI build server doing builds on every single commit has been a best practice for a long time as well, but small startup teams can sometimes get so caught up in building the product that they forget to attend to the necessary engineering infrastructure.  Setting up a build server isn’t a very sexy job but it pays huge dividends in reduced friction over time.  These days it’s amazingly easy to do it, too.  If your team doesn’t have a CI build server, you should get one right away.  Once you do you’ll wonder how you ever got along without it.

Fixing “The ‘VSTS for Database Professionals Sql Server Data-tier Application’ package did not load correctly”

Visual Studio 2010 and SQL Server Express have an uneasy alliance, at best.  When you install Visual Studio 2010 it installs SQL Server Express 2008 for you, but only the database engine, not SQL Server Management Studio.  If you mess with SQL Server Express in order to install the management tools, or upgrade to 2008 R2, or install the Advanced Services version, things break and you can no longer reliably use the Visual Studio database projects.

In particular, if you remove SQL Server Express 2008 and install SQL Server Express 2008 R2, you’ll probably run into an issue where if you try to open a schema object in a Visual Studio database project you’ll get an error that says:

The ‘VSTS for Database Professionals Sql Server Data-tier Application’ package did not load correctly.

The fix for this problem can be found here.  Here’s the short version:

  1. Locate your Visual Studio 2010 installation media.
  2. In the \WCU\DAC folder, you’ll find three MSIs: DACFramework_enu.msi, DACProjectSystemSetup_enu.msi, and TSqlLanguageService_enu.msi.  Run and install each of them.
  3. Reapply Visual Studio 2010 SP1.

You should be back to a working state.

Eliminating Friction, Part 2: The Build Script

Having talked about scripting your way to success, I want to give build scripts a special mention.

At Blade we have components of our product in four different solutions but those solutions have a lot of shared code between them.  When we make changes to that shared code we really need to build each of the four solutions to verify that we didn’t break anything.  Unfortunately it was up to each developer to load and manually build each solution, which as a practical matter didn’t often happen.  I also started adding unit tests to one of the solutions which added another manual step.  Clearly we needed an automated build script that would do all of those tasks for us and report with a simple pass/fail result.

I’ve used a few different build frameworks in the past to automate builds.  I’ve used straight MSBuild, which is a very powerful tool that’s seriously hampered by its reliance on XML.  The fact of the matter is that XML is a terrible scripting language.  It’s really, really painful to write and debug anything even moderately complex.  Build scripts can have very complex logic and should be treated as first-class development citizens.  I don’t recommend writing MSBuild scripts directly if you can avoid it.

psake is a pretty nice Powershell-based build automation tool that I’ve used on a couple of projects.  It works as advertised and I didn’t have any particular issues with it, though I haven’t really fallen in love with Powershell as a programming language.  I love the object-based command line system but don’t really care for the language syntax.  psake is a good choice if you want to avoid MSBuild but your team is uncomfortable with non-Microsoft environments.
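
For reference, a minimal psake script looks something like this (the solution and test runner paths here are made up, and msbuild is assumed to be on the path):

    # default.ps1 - run with: Invoke-psake .\default.ps1
    Task default -Depends Test

    Task Clean {
        exec { msbuild .\MySolution.sln /t:Clean /v:minimal }
    }

    Task Build -Depends Clean {
        exec { msbuild .\MySolution.sln /p:Configuration=Release /v:minimal }
    }

    Task Test -Depends Build {
        exec { .\tools\nunit\nunit-console.exe .\Tests\bin\Release\Tests.dll }
    }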

For creating the build script at Blade I chose to try out Ruby, Rake, and Albacore and I’m very pleased with my experience so far.  It was ridiculously easy to set up a script that builds all four solutions, runs our unit tests, and reports any errors.  It was even easier to add variations for doing a single step (clean, build, or test) for all solutions or performing all steps for a single solution.  The Ruby language is succinct and expressive, the Rake build system is powerful, and Albacore streamlines some common Windows/.Net operations.

I won’t spend time writing a step-by-step guide to Ruby/Rake/Albacore as that’s already been done very well by others.  The point of this blog post is simply to encourage you to write a build script for your project if you don’t already have one.  There are tools available that make it easy – embarrassingly easy.  Just do it.  There’s absolutely no excuse not to.

Eliminating Friction, Part 1: Anything Worth Doing is Worth Scripting

I’ve been with Blade Games World working on Jumala for a couple of months now and I’m having a blast.  The team is fun and the work is very interesting.  However, as is often true in startups, there’s just so much work to do that infrastructure and tooling have sometimes gone by the wayside.  I can understand that – things that contribute directly to attracting new users and impressing investors always rank higher than things that don’t – but it’s also true that the only way to go fast is to go well.  I thought I’d write a series of blog posts about the things I’m doing to help the Jumala team go fast.  There aren’t going to be any genuinely new thoughts here but I hope my rehashing of the same old thoughts will help others who are still in the same boat.

No time to sharpen the saw

It’s always amazing to me how much time a team can waste doing things in a very error-prone and tedious manual fashion just because no one makes the time to stop and write a script to do it automatically.  I mean, we build software for a living, right?  We sit around all day writing code that makes computers automatically perform jobs and solve problems for other people but we find it difficult to do the same thing for ourselves.

I fall into that pit quite often myself.  I’ll find myself repeating the same manual steps over and over again, having to remember how to do it every time, and probably overlooking or messing up some parts, and part of my brain says, “You know, we really ought to write a script to do this for us.”  The rest of my brain responds, “Shut up, we’re too busy to fool around with that right now and just doing it by hand takes less time than building some automated widget.”  Well, yeah, that’s often true for a one-time event, but by the third or fourth or twentieth time I’m muddling through the same problem by hand, the first part of my brain starts snarkily reminding me that if I’d just scripted it back in the beginning I’d be way ahead on time now.

If you did it twice by hand, script it

Of course I don’t want to waste time building tools that won’t ever pay back what I invested into them.  Many times I do something by hand that’s truly a one-time task and that’s fine.  But quite often those “one-time” tasks pop up again and again and I discover to my chagrin that they’re not so one-time after all.  Whenever I find myself doing the same manual thing that I’ve already done twice before, I’ll try to find a way to automate it or at least put it on my hit list of things that need to be automated.

It’s important to recognize these scripting candidates early, before they become invisible.  Most of us are creatures of habit and if we do something often enough, even if it’s inefficient and dumb, it becomes invisible and “just the way it is.”  It takes real discipline (or a fresh set of eyes) to catch these sorts of things and realize that there’s a better way.

I’ve already written about one example of this sort of thing in regard to setting up and configuring new installations of Windows.  That’s something I don’t do often so it never seemed like something that was worth automating, but at some point I realized that my list of configurations and customizations had gotten so long that I’d started writing them down in a manual “script” so I wouldn’t overlook any of them or forget how to do them.  Um, yeah, time to automate some of this nonsense.

At Blade I’ve run into several manual processes that had become “just the way it is” to the team.  For example, we sometimes want to point the game client to different server environments depending on what we’re working on.  The process of redirecting the client to a different server involved loading the project in Visual Studio, editing multiple URLs in the settings files, building the project, and running it.  Oh, and you had to repeat this for three different projects in order to get everything redirected.  And of course the settings files were checked in to source control so people had to remember not to commit the edited versions with their next changelist.

Fixing this one wasn’t completely trivial.  I spent a few hours figuring out how the Jumala settings system worked and how .Net configuration files worked (hint: the file attribute of the appSettings element is your friend).  It was difficult to make the choice to invest that kind of time when I had so much real work piling up.  But at the end I had a Powershell script that I could invoke with the name of an environment and it would redirect all components without modifying any files under source control.  I added the script to source control so the whole team could use it.  The savings from that script are only a minute or so every time anyone uses it but it’s already paid for itself and it’ll continue to save minutes forever.
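
For anyone facing a similar problem, here’s a sketch of the general idea (not the actual Jumala script, and all the names are made up).  The trick is that the checked-in app.config says something like <appSettings file="environment.config">, so the script only ever rewrites that external, un-versioned file:

    # Redirect-Environment.ps1 <environment> - a sketch of the idea, not the real script
    param(
        [Parameter(Mandatory = $true)]
        [ValidateSet('local', 'staging', 'live')]
        [string]$Environment
    )

    # Hypothetical environment URLs and project folders
    $serverUrls = @{
        local   = 'http://localhost:8080/'
        staging = 'http://staging.example.com/'
        live    = 'http://www.example.com/'
    }

    $settingsXml = '<appSettings><add key="ServerUrl" value="{0}" /></appSettings>' -f $serverUrls[$Environment]

    'GameClient', 'WebSite', 'Services' | ForEach-Object {
        Set-Content -Path ".\$_\environment.config" -Value $settingsXml
    }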

Resist Stockholm Syndrome

It’s important to keep your eyes open and to think critically about your processes at all times.  If you don’t, you’ll eventually succumb to Stockholm Syndrome and begin to accept or even defend the very things that are holding your productivity hostage.  It’s certainly true that you can’t always fix everything all at once and sometimes you have to pick your battles.  But what I’m really talking about here is developing a spirit of continuous improvement, looking for ways to eliminate waste, and investing for the future as well as for today.  What’s holding back your productivity?  What manual, tedious, error-prone work can you eliminate with a bit of scripting?  Fix it – you’ll be glad you did.

The Most Important Question

I referred to this in a previous post but I want to highlight it here because I believe it’s so critical to a well-functioning team: in a team environment, there is one question that you want all team members to ask themselves every single day.  You need to create systems that remind them to ask that question and systems that give them the information necessary to arrive at the most timely and best possible answer to the question.

The question is this:

What is the single most effective thing I could choose to do today to help the team deliver business value?

So much work is wasted because people are busily doing what was estimated to be the most effective thing when planning was done last week, or last month, or last quarter (but that planning is now obsolete and wrong).  Or maybe they’re doing the work that’s most personally interesting rather than what’s most valuable.  In any case, if you’re not asking and answering the question every day with up-to-date information, you’re probably doing things that don’t matter.  And that’s threatening the effectiveness of your team and the viability of your business.

That doesn’t mean you need to hop around from task to task with the attention span of a drug-crazed squirrel.  Task switching has its own costs and that needs to be taken into account.  It’s also true that the most urgent thing that needs doing is not always the most important and effective thing you could do.  Of course the evaluation function for “the single most effective thing” will vary from team to team.  The point is to get clear about what your evaluation function is and make sure that everyone is applying it correctly every day using the most current information available.

How well does your team handle this question?  Do they have frequent opportunities to ask the question and make choices about how they spend their time?  Do they have the information they need to correctly answer the question?  If not, why not?  Are there systems in place that encourage people to act on stale information?  If so, get rid of them.  Does your team’s view of the world frequently diverge from reality?  If so, figure out what causes that to happen and fix it.

Do the most effective thing.

Configuration Scripts For Lazy Developers

Laziness is a virtue

I recently paved my laptop and set it up again from scratch.  (I wanted to get rid of accumulated corporate network goo.)  I always dread doing that because it takes so long to get everything installed and configured the way I like to have it, especially since I use non-default settings for several things.  It’s even more obnoxious because I regularly use multiple computers and I want to have all settings the same across all of them.

Well, in the spirit of “laziness is a virtue”, I decided to start scripting some of this stuff so that I don’t have to do it by hand every time and it’ll be easier to apply a consistent set of settings across all of my machines.  A full-blown automated configuration system would install software and do absolutely everything for me but I’m starting simple with some Powershell scripts that configure the behavior of Windows Explorer, the console, Git, and Notepad++.

(I have a feeling that there are already tools/projects/script libraries out there that do this sort of thing in a much more complete way but I didn’t run across anything during a quick web search.  If there’s something I overlooked, let me know in the comments.)

I thought I’d share my configuration scripts/files, not because the configuration settings I choose to use are all that interesting (they’re not), but because the mechanism of where to find these settings and how to script them may be interesting for others who want to do something similar.  You can find them at http://github.com/SaintGimp/ConfigurationScripts.

How I use them

So here’s the way I’m using these scripts:

  1. I keep them in a Configuration folder inside my WindowsPowershell folder which contains my Powershell profile, Powershell Community Extensions, and other useful Powershell stuff.  I sync the WindowsPowershell folder across all of my computers using Windows Live Mesh so that when I make a change to any of the files on one computer the change shows up on all my other computers automagically.
  2. When setting up a new computer I’ll first install Notepad++, Git, and Windows Live Mesh, then sync the aforementioned WindowsPowershell folder.
  3. I start an elevated Powershell console and cd to the WindowsPowershell\Configuration folder.
  4. I run all of the Configure-[foo] scripts in the folder (see the sketch just after this list).
  5. When I change something about my preferred configuration, I’ll update the script files and run them on all my other computers the next time I use them.
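
Steps 3 and 4 boil down to something like this (assuming the WindowsPowershell folder is in the default Documents location):

    # From an elevated Powershell console
    Set-Location "$home\Documents\WindowsPowershell\Configuration"
    Get-ChildItem Configure-*.ps1 | ForEach-Object { & $_.FullName }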

What they do

  • Configure-Console.ps1: I like to use the Consolas font and several other non-default settings for my console windows, and I want my cmd consoles to look different than my Powershell consoles.  To accomplish this I found that I need to copy modified Powershell shortcuts (which contain console settings) into the Start Menu folder, plus load other stuff into the registry for default settings and for Powershell instances that aren’t launched through the shortcuts.  The registry settings are contained in a .reg file I exported from regedit after I got everything set up the way I wanted it.
  • Configure-Explorer.ps1: I use a command from Powershell Community Extensions to add an “Open Powershell Here” context command to Windows Explorer, then I import a bunch of registry settings for view options and start menu options.  Finally I add Notepad to the Send To menu and remove several other things that I never use from that menu.
  • Configure-Git.ps1: This sets several Git configuration options including a prettified version of log (git lg) and Notepad++ as my default editor.
  • Configure-Notepad++.ps1: I have a small shim that I register as a debugger for Notepad.exe which runs Notepad++.exe instead.  I prefer doing that rather than hunting down every place that Notepad.exe may be invoked in my tools and changing the command.  Next I change a couple of Notepad++ settings to the behavior I prefer.
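
For the curious, the Notepad shim registration relies on the old Image File Execution Options “debugger” hook.  A sketch of the registry side (the shim path is hypothetical, and the shim itself is responsible for stripping its own arguments and handing the file name off to notepad++.exe):

    # Run elevated; Windows will now launch the "debugger" whenever notepad.exe is started
    $ifeo = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\notepad.exe'
    New-Item -Path $ifeo -Force | Out-Null
    Set-ItemProperty -Path $ifeo -Name Debugger -Value '"C:\Tools\NotepadShim.exe"'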

There’s obviously a lot more that could be added but I’m going to do that on a “pain threshold exceeded” basis; that is, when configuring something by hand annoys me too much then I’ll sit down and figure out how to automate it.

Why The Standup Meeting Is Important

High-bandwidth communication

If you’re trying to do agile software development, high-bandwidth communication is incredibly important.  It’s almost impossible to overstate the urgent need for team members to be in constant communication with each other and to know what’s going on at all times.  That’s the only way people can make the right tactical decisions and adapt to the current situation rather than merely operate according to their understanding of the situation as it was a few days ago, or last week, or last month.

There are several ways to encourage frequent and high-bandwidth communication.  Having a co-located team room is probably the best possible way to do this, though I’ve never had the pleasure of experiencing that for myself yet (but I will soon!).  If, like all my teams up to this point, you’re sitting in individual private offices, then a daily standup meeting becomes super important.  I often get pushback on the standup meeting, with people questioning why it’s necessary and whether the daily interruption is worth it, and here’s what I tell them.

The most important question

I’ve worked at Microsoft for a long time, working in several different areas of the company, and in that time I’ve observed that we often optimize for the wrong things.  Historically, Microsoft has optimized heavily for individual developer productivity (in the sense of lines of code written).  However, Microsoft doesn’t usually have a problem with devs writing code too slowly; most of our product code bases are enormous.  We’re awash in code.  What we actually struggle with is that we write the wrong code, or code that doesn’t work right, or we write a bunch of individual pieces of code that don’t interact well when we try to fit them together.

It turns out that an engineer’s job is not just to write software.  His or her job is to deliver value to customers in a timely manner, make a valuable impact on the business, and ultimately to earn money for the company.  Writing software is an essential step but it’s not actually the end goal.  Delivering business value requires more than just individual code-writing prowess.  It requires frequent, consistent communication across the entire team so that we’re all pulling in the same direction at the same time and doing things that actually matter.  It requires a team-oriented mentality; we don’t make commitments as individuals, we make commitments as a team and we succeed or fail as a team based on the value of what we deliver.

With that in mind, the point of the standup meeting is not to communicate status to managers.  We could do that perfectly well once a week over email.  The point of the daily standup is to communicate together as a team so that every day each team member can ask and answer the question, “What is the single most effective thing I could choose to do today to help the team deliver business value?”

That’s a very powerful question to ask.  Contrast that with the traditional Microsoft model of asking yourself, “What do I need to do to finish all of the tasks that have been assigned to me?”  To answer this traditional question, you don’t really need to know what anyone else is doing or what’s going on across the team.  You just crank through your assigned work.  The problem is that the tasks that were assigned to you might grow stale very quickly; that is, the value that we thought a task had when it was written down might not be the value it has now because reality has a way of changing things over time.  If you let “the plan” diverge too far from reality, you end up building software that doesn’t matter.

Avoiding wasted effort

Have you ever had the experience of spending months working on a piece of software only to have that software never get deployed/sold/used?  I certainly have.  I’ve seen it happen for all of the following reasons:

  1. When finished, the software didn’t solve the problem the customer wanted solved, either because: A) you never correctly understood the real problem in the first place, or B) you understood the original problem but it evolved and changed between the time you started and the time you finished, so the software you wrote to meet the original problem became ill-fitting.
  2. You ran out of time in the schedule and while you had a lot of code written, the pieces that were finished couldn’t be easily assembled to solve any problems on their own, so the whole thing was useless.
  3. You built good software that solved a real problem but your customers weren’t aware of what you were building or how much progress you were making so in the meantime they ran off and invested in some other solution instead, making your solution irrelevant.

The daily standup meeting (and the other sprint meetings we have) is intended to avoid those failure scenarios by creating frequent opportunities for communication.  That communication makes each team member aware of exactly what the state of the project is right now, today, so that we can make the best possible choice about how to help drive the team toward delivering real value.  The task board is in a big visible location and is updated every day so we can see what’s been done, what’s left to do, and where we need to load-balance.  We try to have a customer representative on hand so that we immediately hear about any changes in what the business needs from us and we can get good guidance when we have to make hard tradeoff decisions.  We listen to what each team member is doing because we often pick up serendipitous pieces of information that save us far more time than the 15 minutes it costs us.  Examples:

Person A: I’m struggling to solve this problem.

Person B: Oh, I solved that same problem last week.  Here’s the solution.

Person C: I’m working on this particular task today.

Person D: Hey, I’m working on this other thing and I just realized that’s going to drastically affect your work.  We need to talk.

Person E: This really important story is going more slowly than we thought it would.

Person F: The stuff I was going to work on isn’t as valuable to our customers; how can I help speed up the important story?

Person G: I chose to implement this new feature in this certain way.

Customer Rep: Hmm, I don’t think that’s going to work well for us.  Have you thought about…

Talk to each other

It’s remarkable the lengths that software developers will go to in order to avoid talking to their teammates.  Many of us are introverted in the first place (I certainly am!), plus our industry still has this conception of the mythical cowboy coder who locks himself in his office, getting pizza and soda slipped under the door, and every once in a while code emerges.  I’m not sure that model ever worked all that well, but in any case we outgrew it (or should have) a long time ago.  It’s simply not workable these days when you consider the speed and volume at which our customers expect us to deliver value.  Talk or die.  The choice is that simple.

Update: For lots of good patterns related to the daily standup meeting, see It’s Not Just Standing Up: Patterns For Daily Standup Meetings.