Hardware Hacking For Software Developers

July 23, 2014

I studied Software Engineering in college.  Most of the major classes were about various software topics, of course, but I was also required to take a couple of hardware/electronics classes where we soldered together transistors and logic gates and such things.  At the time I didn’t particularly enjoy it – software was logical and always worked the same way while hardware was messy and was constantly not working due to bad solder joints or “letting the magic smoke out” or what have you.  Very frustrating.

Fast forward 20 years and I’ve been doing the software thing for a long time.  I’m still passionate about programming and love what I do but I was looking for something new to spend hobby time on.  Preferably something relatively cheap, that I could combine with my interest in amateur science, and that would bring me back to a state of “beginner mind” where I could have fun learning again.  I decided to turn my attention back to the world of electronics to see if it would “take” any better the second time around.

It turns out that the world of hobby electronics is currently in a golden age and it’s amazing!  Between the ocean of free information available on the internet, the fantastic companies that cater to hobbyists, and the wide availability of low-cost, high-quality tools and equipment, there’s never been a better time to get into the hobby.

So you want to hack some hardware?

I’ve been working with this new hobby for about a year now and while I’m certainly no expert, I thought I’d share the results of some of my research to maybe save others some time and to “give back” to the internet community that’s helped me so much.  This post will be a high-level overview of some of the interesting directions one can explore, and the resources I’ve found most useful.  It’s adapted from a talk I did for the South Sound Developers User Group earlier this year.

As a software developer, it’s pretty easy to ease into the electronics field, starting with high-level things that are comfortably familiar and working your way down to raw electronics with transistors and opamps and stuff.  There’s a huge array of options available and I can’t possibly cover them all, so I’ll just describe the journey I took and the results I got.

Microprocessor boards

Probably the easiest place to start is with one of the many types of small microprocessor boards available today.  Two of the most popular right now are the Raspberry Pi and the BeagleBone Black.  These are full self-contained computers on a single small board, usually running some kind of Linux-based OS on an ARM processor.  The fact that they’re small and low-power is nice, I guess, but the really interesting thing about them is that they usually include a number of general purpose I/O pins that you can hook up to external electronic circuits in order to control them or to acquire data from them.  These GPIO pins aren’t tied to any kind of complex protocol like USB or even serial; instead you can directly set each output pin on or off, or check the current state of each input pin.  This allows you to do something trivial like blink an LED on and off, or things that are far more complex.


One of my first electronics purchases was a BeagleBone Black and I’m very happy with it.  It has enough processing power to run Linux, lets me program in my language of choice (Python, Javascript, C, etc.), and is easy to hook up to my home network and thus to the internet.  It has a wide array of GPIO pins, both digital and analog, and also has USB and HDMI ports to connect a monitor and keyboard if desired (though I usually just SSH into it).

I bought the BeagleBone Black along with an inexpensive beginner’s electronics parts kit (see below) and had a lot of fun wiring up LEDs, switches, and other components on a breadboard and connecting them to the BeagleBone.  I refreshed my memory about the basics of simple electronic circuits but soon found myself wanting to go a bit more low-level.  After all, the BeagleBone Black wasn’t too much different than the laptop I work on every day and I wanted something a little more different than my day job.

Microcontroller boards

Having the power of a full Linux OS is nice but I wanted to get a little closer to the hardware.  I picked up an Arduino Uno, which is a microcontroller board.  Like the BeagleBone Black, the Arduino has several digital and analog GPIO pins with which to interface with external electronic circuits.  However, unlike microprocessor boards, the hardware resources available on a microcontroller board are much more limited.  The Arduino Uno has only 32 KB of storage and 2 KB of RAM, so you aren’t going to be running Linux on this thing.  In fact, you don’t have any kind of OS available at all.

Instead, you write self-contained programs which are uploaded to the board over USB and executed directly by the microcontroller.  There’s no OS to get in the way; your program is the only code running and you have full control over, and responsibility for, everything that happens.  This is simultaneously limiting (no networking, video, or keyboard/mouse support out of the box) and liberating, because your code executes predictably in real time without being preempted by other threads or processes.  This allows you to do things that require precise timing control, such as operating servo motors for robotics projects.

It’s still pretty easy to program, though, because there’s an IDE and libraries available to help you with the dirty work.  Using the Arduino IDE on your computer, you write code in Arduino-flavored C++, cross-compile it for the Atmel AVR ATmega328 microcontroller on the board, and upload the resulting binary via a USB cable.
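For a sense of scale, the canonical first sketch is the blink example that ships with the IDE, reproduced here more or less verbatim.  An Arduino program is just two functions: setup() runs once at reset, and loop() runs forever after.

// Blink the Uno's on-board LED on digital pin 13.
const int ledPin = 13;

void setup() {
    pinMode(ledPin, OUTPUT);     // configure the LED pin as an output
}

void loop() {
    digitalWrite(ledPin, HIGH);  // LED on
    delay(500);                  // wait half a second
    digitalWrite(ledPin, LOW);   // LED off
    delay(500);
}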

The Arduino is an extremely popular hobbyist board and there are a ridiculous number of tutorials, project descriptions, and forums to give you advice and inspiration.

Bare microcontrollers

The Atmel AVR ATmega328 microcontroller that runs the Arduino is not a particularly modern chip but it’s well understood and a huge community has grown up around it.  It’s quite inexpensive and easy to experiment with.  It turns out that it’s not particularly difficult to buy the chip all by itself, whack it and a couple of other cheap components onto a breadboard, and make your own homebrew Arduino.


Furthermore, the AVR line of microcontrollers includes chips that are both larger and smaller than the ATmega328 used in the Arduino but they’re all software-compatible, so you can choose precisely the minimum amount of silicon you need to build your project.  Using a bare chip in your project is a little more inconvenient than using an Arduino board but if you’re soldering stuff together anyway and you need it to be small, including an AVR chip directly in your circuit is the way to go.  Besides which, soldering bare chips makes you feel like you’re actually doing electronics!

Analog circuits

So far, everything we’ve discussed is primarily centered around writing code for a processor of some kind with some ancillary circuitry thrown in.  That’s a great way for a software developer to start but since my goal was to do something completely new, I needed to break out of the programming paradigm completely.


It turns out that there are all kinds of things you can do with just analog components and a breadboard; no digital code required.  That was a revelation to me.  I mean, duh, of course I knew that, but after 20 years as a software developer I had some pretty deep-seated biases toward digital logic.  The first time I wired up a 555 analog timer chip to blink an LED and was able to change the rate of the blinking by merely swapping one resistor value for another, I realized that this was the “totally new” thing I’d been looking for.  I still do a lot of stuff with microcontrollers in my projects but it’s become a game of sorts to see how little code I can get away with, with zero being the ideal.  I’ve learned a lot about low-level electronics over the past year and I’ve barely scratched the surface.  I’m having a ton of fun, though!
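For the curious, the math behind that resistor swap is simple.  In the standard 555 astable circuit the output frequency is approximately f = 1.44 / ((R1 + 2R2) × C), so changing either resistor or the capacitor directly changes the blink rate.  No code anywhere, just component values.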

Tools and resources

There are several tools I’ve purchased and valuable resources I’ve found that have helped me in my learning.  Below are some of the highlights.  This is by no means an exhaustive list and there may well be even better choices out there, but these are the things that I found valuable.

Hobbyist stores and distributors

There are a lot of online stores out there that offer all kinds of electronics components.  Some of them are good, a lot of them are sketchy.  Adafruit and Sparkfun are two companies that I can highly recommend.  They both carry high-quality components suitable for hobbyists, both carry lots of custom-designed components and boards to make building projects easy, and both publish a wide range of tutorials and project guides to help you use what you buy.  Their prices aren’t the cheapest you can find, but I strongly recommend sending your business to these companies because they stand behind their products and are actively building up the hobbyist community.  Just beware – browsing either one of those websites can be hazardous to your wallet.  So much awesome stuff!

If you’re shopping for unusual components, or components with very specific characteristics, or a large quantity of components, then you might want to try a large distributor house.  Mouser and Digikey are both well-regarded, reliable distributors that stock literally hundreds of thousands of different electronics parts.  If you need it, they probably have it.  Your biggest problem will be deciding exactly what you want to order.  You want a capacitor?  Mouser has 346,459 different kinds.  You’ll need to get proficient with their catalog search tools and know how to read datasheets because there’s no handholding here.

Also be aware that these distributors focus on large-volume orders.  They’re willing to sell you singles of most things but it’s not going to be particularly cheap.  You need to buy in reasonable quantities to get discounts.

Electronic parts and kits

To get started, I purchased a couple of breadboards, the Beginner Parts Kit, and the Resistor Kit from Sparkfun.  These are nice little kits that give you a few of each of the most commonly-used components for beginning electronics projects.  This is a great way to get jumpstarted with building simple electronic circuits to connect to your BeagleBone Black or Arduino.  There are lots of other options out there, of course, but I was happy with these.

When it was time to build a larger parts inventory, I was pleased with kits put together by Joe Knows Electronics that I ordered from Amazon.  I have the resistor kit, capacitor kit, semiconductor kit, and LED kit and they’ll keep me going for a good long time.

Multimeter


A good multimeter is the first tool you should invest in as you pursue a hobby in electronics.  A solid choice is the Extech EX330.  It’s reliable, has a good set of features, and is reasonably priced.

Soldering Station


I learned a long time ago that if you’re going to invest in equipment for a hobby, you should buy quality stuff that actually works well.  There are lots of low-end soldering irons out there but most of them will simply cause you to be frustrated and maybe give up on the whole thing altogether.  After some research I purchased an Aoyue 9378 soldering station that I’ve been happy with.  There are several other good choices in this price range, too, but this one seemed to be recommended a lot.

Power supply


One of the first “from scratch” projects you can build is a breadboard power supply to feed 3.3V or 5V power to your circuits.  It’s fun and satisfying to build your own tools.  However, when your needs expand to higher voltages or higher current, and you need features like current-limiting to avoid blowing stuff up, then it’s time to get a real power supply.  I purchased a Mastech HY3005F-3 triple-output linear power supply (30 V, 5 A).  It’s solid and reasonably priced.  It’s not programmable or anything (that stuff is found at higher price levels) but it gets the job done.

Oscilloscope

An oscilloscope is an incredibly useful tool for understanding and diagnosing non-trivial circuits.  Where a multimeter will tell you the voltage at a particular contact point right now, an oscilloscope shows you how that voltage varies over time.  This is the sort of tool where it’s hard to imagine how it would be useful until the first time you actually use one; then you wonder how you ever got along without it.  I chose the Rigol DS1102E and like it a lot.  There are fancier ones available for more money but this one seems to do everything I need for now.

Blogs


There are a ton of great resources for the electronics hobbyist on the internet and I won’t attempt to list them all.  One of them particularly stands out in terms of being both instructive and entertaining, though: EEVBlog.  Dave has hundreds of video episodes covering everything from electronics fundamentals and theory to teardowns of various pieces of equipment to live repair jobs.  I learned a tremendous amount just from watching how he uses his tools.  I recommend starting with episode #168: How To Set Up An Electronics Lab.

Circuit simulation

Sometimes you don’t want to take the time to actually build a circuit on a breadboard to see if it will work.  Or maybe you don’t have the correct parts at hand, or you’re afraid you might screw it up and blow up a component, or whatever.  In such cases it’s very useful to be able to simulate your circuit ideas in software first before you commit to the real thing.  There are software packages you can download for this purpose, but I’ve been pleased with CircuitLab.com.  It’s not free but the user interface is very intuitive and it’s helped me better understand how transistors and opamps work because I can quickly set up various scenarios and see instantaneous voltage and current values, or chart them over time.

Go forth and learn!

So that’s a lot of high-level information about how to get started with an electronics hobby.  You’ll notice that I didn’t discuss the “how-to” on anything, but a few web searches should turn up lots of resources for any particular topic that catches your interest.  The most important thing is to actually do something.  Start small, understand and master what you’ve purchased, then move on to the next step.  Personally, I’ve found it to be very rewarding and I hope you will too.


Reality-based Interviewing

March 18, 2014

The job market for senior software developers is very hot right now (as of early 2014), at least in Seattle.  I know of several companies in the area that have aggressive hiring plans for the next 12 months, often with the goal of doubling or tripling in size.  Most of these companies intend to hire mostly or exclusively senior developers because their long-term productivity is so much greater than that of the typical junior or just-out-of-school developer.  I find it remarkable, however, that these same companies often don’t have an interviewing strategy designed to select the kind of candidates they’re actually looking for.

A fairly typical interview strategy consists of an introductory conversation focused on cultural fit and soft skills.  This is sometimes a panel interview but is relatively short.  The vast majority of time is spent in one-on-one sessions doing whiteboard algorithm coding problems.  These coding problems are typically small enough that there’s a possibility of completing them in an hour but also obscure or tricky enough that most candidates have a significant likelihood of messing up or not finishing.  The candidate has to understand the description of the problem, construct an algorithm, and write code for the algorithm on a whiteboard, all the while talking out loud to reveal his or her thinking process.  Google is the most famous for this style of interviewing, but Microsoft has been doing it for decades as well and the whole rest of the industry has pretty much followed the lead of the big dogs.

Broken By Design

So what’s the problem?  Well, according to Laszlo Bock, senior vice president of people operations at Google, it simply doesn’t work:

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess, except for one guy who was highly predictive because he only interviewed people for a very specialized area, where he happened to be the world’s leading expert.

Why doesn’t it work?  The short answer is because what we ask people to do in interviews often has very little relationship to what we actually expect them to do on the job.  Maria Konnikova offers a longer answer in a New Yorker article:

The major problem with most attempts to predict a specific outcome, such as interviews, is decontextualization: the attempt takes place in a generalized environment, as opposed to the context in which a behavior or trait naturally occurs. Google’s brainteasers measure how good people are at quickly coming up with a clever, plausible-seeming solution to an abstract problem under pressure. But employees don’t experience this particular type of pressure on the job. What the interviewee faces, instead, is the objective of a stressful, artificial interview setting: to make an impression that speaks to her qualifications in a limited time, within the narrow parameters set by the interviewer. What’s more, the candidate is asked to handle an abstracted “gotcha” situation, where thinking quickly is often more important than thinking well. Instead of determining how someone will perform on relevant tasks, the interviewer measures how the candidate will handle a brainteaser during an interview, and not much more.

(Edit: she’s referring to non-coding brainteaser questions here, but many “coding” interview questions also fall into the toy brainteaser category.)

Why is it that we set out with the intent of hiring senior developers who are valuable specifically for their maturity, experience, and enormous depth of knowledge about how to build large software systems and end up evaluating them strictly on small-scale tactical coding exercises that would look right at home in any undergraduate homework set?  Why do we evaluate them in an artificial environment with horribly artificial tools?  Why is it so completely out of context?

Don’t get me wrong – it’s obviously important that the people we hire be intelligent, logical, and competent at tactical coding.  But tactical coding is only one part of what makes a great senior developer.  There are many vital technical skills that we don’t often explore.  When our interviews are artificial and one-dimensional we end up with poor-quality teams, because a healthy software team needs a variety of people with various strengths.  Selecting for only one narrow type of skill, even if it’s a useful skill, is a mistake.  There has to be a way to create more relevance between our interview questions and the palette of skills we’re actually looking for.

Focus on Reality

What is it that we do all day at our jobs?  We should take whatever that is and transfer it as directly as we possibly can into an interview setting.  If we’re not doing it all day every day we shouldn’t ask our candidates to do it either.

Let’s start with the venerable whiteboard: when was the last time any of us wrote more than a couple of lines of syntactically-correct code on a whiteboard outside of an interview setting?  That’s not a job skill we value, so don’t make candidates do it.  Give them a laptop to use, or better yet, allow them to bring their own, if they have one, so they have a familiar programming environment to work with.

Next, what kind of problems do we solve?  A few of us regularly invent brand new algorithmic concepts, be it in computer vision, artificial intelligence, or other “hard computer science” fields.  But let’s be honest – the vast majority of us spend our time doing more prosaic things.  We just move data from point A to point B, transforming it along the way, and hopefully without mangling it, losing parts of it, or going offline during the process.  That’s pretty much it.  Our success is measured in our ability to take old, crappy code, modify it to perform some additional business function, and (if we’re really talented) leave it in a slightly less old and slightly less crappy state than we found it.  Most of us will never invent any new algorithm from first principles that’s worth publishing in a journal.

This is the reality of day-to-day software development.  Small-scale tactical coding skills are expected to measure up to a certain consistent bar, but the key differentiator is the ability to write self-documenting, maintainable, bug-free code, and to design architectural systems that don’t force a wholesale rewrite every three years.  Clever binary tree algorithms we can get from the internet; a clean, supple codebase depends directly on the quality of the developers we hire and ultimately determines whether our companies succeed or fail.

How do those skills translate into an interview setting?  I think a refactoring exercise is a great way to accomplish this.  Give the candidate a piece of code that works correctly at the moment but is ugly and/or fragile, then ask them to extend the behavior of the code with an additional feature.  The new feature should be small, even trivial (this isn’t an algorithm test) because the challenge for the candidate is to add the new behavior without introducing any regression bugs while also leaving the code in a much better state than it started in.  I’m sure we all have lots of great messes we can pull straight from our production codebases (don’t lie, you know you do!), but if you want an example, take a look at the Gilded Rose Kata (originally in C# but available in several other languages as well).
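If you want a feel for the genre without digging into the full kata, here’s a contrived miniature (hypothetical code, not from any real codebase).  It compiles and works; the exercise is to add a fourth customer tier and see whether the candidate pastes in a fourth branch or untangles the duplication first.

// Hypothetical interview snippet: correct but fragile.  The duplicated
// branch structure punishes anyone who extends it by copy-paste.
#include <string>

double discountedPrice(const std::string& tier, double price) {
    if (tier == "gold") {
        return price > 100.0 ? price * 0.80 : price * 0.90;
    } else if (tier == "silver") {
        return price > 100.0 ? price * 0.90 : price * 0.95;
    } else if (tier == "bronze") {
        return price > 100.0 ? price * 0.95 : price;
    }
    return price;  // unknown tiers pay full price
}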

A few companies have expanded even further on this idea of reality-based interviewing.  They’ve done things like dropping the candidate into the team room for the entire day and having them pair with multiple team members on actual production code.  Other companies have given candidates a short-term contract job that can be completed in a week of evenings or a weekend.  The candidate gets paid a standard contracting rate for their time and the company gets either good code and a well-validated employee or at worst avoids a bad full-time hire.  Those techniques may have logistical problems for many companies, and they don’t scale to high volume very well, but every company ought to be able to come up with some way to ground their interviewing process more firmly in reality.

Edit: coincidentally, Daniel Blumenthal published a defense of the traditional whiteboard coding exercise just after I wrote this.  It’s an interesting counter-point and it’s a good contrast to what I’ve written here.  He wrote, “you don’t get to choose how you get interviewed.”  That is, of course, completely correct.  My argument is not that we should make interviews easier to pass, or lower our standards, but rather that we should construct our interviews to screen for the skills that we actually want our employees to have.  If you really need your people to solve “completely novel problems”, as Daniel wrote, then interview for that skill.  If you actually need other things, interview for those things.


Merging Two Git Repositories Into One Repository Without Losing File History

January 22, 2013

A while ago my team had code for our project spread out in two different Git repositories.  Over time we realized that there was no good reason for this arrangement; it was just a general hassle and source of friction, so we decided to combine our two repositories into one repository containing both halves of the code base, with each of the old repositories in its own subdirectory.  However, we wanted to preserve all of the change history from each repo and have it available in the new repository.

The good news is that Git makes this sort of thing very easy to do.  Since a repository in Git is just a directed acyclic graph, it’s trivial to glue two graphs together and make one big graph.  The bad news is that there are a few different ways to do it and some of them end up with a less desirable result (at least for our purposes) than others.  For instance, do a web search on this subject and you’ll get a lot of information about git submodules or subtree merges, both of which are kind of complex and are designed for the situation where you’re trying to bring in source code from an external project or library and you want to bring in more changes from that project in the future, or ship your changes back to them.  One side effect of this is that when you import the source code using a subtree merge all of the files show up as newly added files.  You can see the history of commits for those files in aggregate (i.e. you can view the commits in the DAG) but if you try to view the history for a specific file in your sub-project all you’ll get is one commit for that file – the subtree merge.

This is generally not a problem for the “import an external library” scenario but I was trying to do something different.  I wanted to glue two repositories together and have them look as though they had always been one repository all along.  I didn’t need the ability to extract changes and ship them back anywhere because my old repositories would be retired.  Fortunately, after much research and trial-and-error it turned out that it’s actually very easy to do what I was trying to do, and it requires just a couple of straightforward git commands.

The basic idea is that we follow these steps:

  1. Create a new empty repository New.
  2. Make an initial commit because we need one before we do a merge.
  3. Add a remote to old repository OldA.
  4. Merge OldA/master to New/master.
  5. Make a subdirectory OldA.
  6. Move all files into subdirectory OldA.
  7. Commit all of the file moves.
  8. Repeat steps 3-7 for OldB.

A Powershell script for these steps might look like this:

# Assume the current directory is where we want the new repository to be created
# Create the new repository
git init

# Before we do a merge, we have to have an initial commit, so we'll make a dummy commit
dir > deleteme.txt
git add .
git commit -m "Initial dummy commit"

# Add a remote for the old repo and fetch it
git remote add -f old_a <OldA repo URL>

# Merge the files from old_a/master into new/master
git merge old_a/master

# Clean up our dummy file because we don't need it any more
git rm .\deleteme.txt
git commit -m "Clean up initial file"

# Move the old_a repo files and folders into a subdirectory so they don't collide with the other repo coming later
mkdir old_a
dir -Exclude old_a | %{git mv $_.Name old_a}

# Commit the move
git commit -m "Move old_a files into subdir"

# Do the same thing for old_b
git remote add -f old_b <OldB repo URL>
git merge old_b/master
mkdir old_b
dir -Exclude old_a,old_b | %{git mv $_.Name old_b}
git commit -m "Move old_b files into subdir"

Very simple.  Now we have all the files from OldA and OldB in repository New, sitting in separate subdirectories, and we have both the commit history and the individual file history for all files.  (Since we did a rename, you have to do “git log --follow <file>” to see that history, but that’s true for any file rename operation, not just for our repo-merge.)

Obviously you could instead merge old_b into old_a (which becomes the new combined repo) if you’d rather do that – modify the script to suit.
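One caveat if you’re following along with a newer version of Git: starting with Git 2.9, git merge refuses to combine histories that have no common ancestor, so the two merges above will need the --allow-unrelated-histories flag.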

If we have in-progress feature branches in the old repositories that also need to come over to the new repository, that’s also quite easy:

# Bring over a feature branch from one of the old repos
git checkout -b feature-in-progress
git merge -s recursive -Xsubtree=old_a old_a/feature-in-progress

This is the only non-obvious part of the whole operation.  We’re doing a normal recursive merge here (the “-s recursive” part isn’t strictly necessary because that’s the default) but we’re passing an option that tells Git the content has been moved into the old_a subdirectory, which helps it line the histories up correctly.  This is not the same thing as the thing called a “subtree merge”.

So, if you’re simply trying to merge two repositories together into one repository and make it look like it was that way all along, don’t mess with submodules or subtree merges.  Just do a few regular, normal merges and you’ll have what you want.


The Windows Experience Index for your system could not be computed

December 14, 2012

I recently had a problem with a Windows 8 computer where I couldn’t run the Windows Experience Index.  I had previously run the WEI just fine on this computer but all of a sudden it started giving me this error message:

The Windows Experience Index for your system could not be computed.

A cursory search online didn’t yield a solution, but I did find out that WEI writes a log file to C:\Windows\Performance\WinSAT\winsat.log.  The end of that log showed me this:

338046 (4136) - exe\syspowertools.cpp:0983: > Read the active power scheme as '381b4222-f694-41f0-9685-ff5bb260df2e'
338046 (4136) - exe\main.cpp:2925: > power policy saved.
338078 (4136) - exe\syspowertools.cpp:1018: ERROR: Cannot set the current power scheme to '8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c': The instance name passed was not recognized as valid by a WMI data provider.
338078 (4136) - exe\main.cpp:2942: ERROR: Can't set high power state.
338078 (4136) - exe\processwinsaterror.cpp:0298: Unspecified error 29 occured.
338078 (4136) - exe\processwinsaterror.cpp:0319: Writing exit code, cant msg and why msg to registry
338078 (4136) - exe\syspowertools.cpp:1015: > Set the active power scheme to '381b4222-f694-41f0-9685-ff5bb260df2e'
338078 (4136) - exe\main.cpp:2987: > Power state restored.
338078 (4136) - exe\main.cpp:3002: > Successfully reenabled EMD.
338109 (4136) - exe\watchdog.cpp:0339: Watch dog system shutdown
338109 (4136) - exe\main.cpp:5341: > exit value = 29.

Ah! This computer is used by my entire family and I had been using the Local Group Policy Editor to lock down some settings that I didn’t want people to change, including the power management policy.  Apparently, if users can’t change the power management policy then WEI can’t change it either, and it gets grumpy about that.

The solution was to turn off enforcement of the active power management plan, run WEI (which now worked fine), then re-enable enforcement.
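Incidentally, if you’d rather not click through the UI to re-run the assessment, running “winsat formal” from an elevated command prompt kicks off the same tests and writes to the same log file.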


Fixing the error “Unable to launch the IIS Express Web server. Failed to register URL. Access is denied.”

May 3, 2012

I had a Visual Studio web project that was configured to use IIS Express on port 8080.  That should normally be fine and IIS Express is supposed to be able to run without administrator privileges, but when I would try to run the app I would get this error:

Unable to launch the IIS Express Web server.

Failed to register URL "http://localhost:8080/" for site "MySite" application "/". Error description: Access is denied. (0x80070005).

Launching Visual Studio with administrator credentials would cause it to be able to run the application successfully, but that kind of defeated the purpose of using IIS Express in the first place.

It turns out that the problem in my case was that something else had previously created a URL reservation for http://localhost:8080/.  I have no idea what did it, but running this command in Powershell showed the culprit:

[C:\Users\Eric] 5/3/2012 2:39 PM
14> netsh http show urlacl | select-string "8080"

    Reserved URL            : http://+:8080/

The solution was to run this command in an elevated shell:

[C:\Users\Eric] 5/3/2012 2:39 PM
2> netsh http delete urlacl http://+:8080/

URL reservation successfully deleted

Now I can run my web app from VS without elevation.
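As an aside, if something on your machine legitimately needs a reservation on that port, the alternative to deleting it is to grant the reservation to your own account with “netsh http add urlacl url=http://+:8080/ user=DOMAIN\username” (run elevated, with your own account substituted in).  In my case nothing needed it, so deleting was the right fix.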


Fixing “Could not load file or assembly ‘Microsoft.SqlServer.Management.SqlParser’”

April 17, 2012

It’s incredibly frustrating that this still happens, but it turns out that if you have Visual Studio 2010 and SQL Server Express 2008 (or R2) on your machine, and you uninstall SQL Server Express 2008 and install SQL Server Express 2012 instead, you’ll get an error trying to load database projects in Visual Studio 2010: “Could not load file or assembly ‘Microsoft.SqlServer.Management.SqlParser’”.

Why can’t SQL Server 2012 install the stuff it knows Visual Studio requires?  Fine, whatever.

The fix for this problem is the same as the last time I posted something like this:

  1. Locate your Visual Studio 2010 installation media.
  2. In the \WCU\DAC folder, you’ll find three MSIs: DACFramework_enu.msi, DACProjectSystemSetup_enu.msi, and TSqlLanguageService_enu.msi. Run and install each of them. (Possibly just the third one is required in this case.  I’m not sure.)
  3. Reapply Visual Studio 2010 SP1.

You should be back to a working state.


I Hate #Regions

April 11, 2012

This is mostly a note to myself so I don’t lose this awesome Visual Studio extension.

I hate C# #region tags in Visual Studio.  I hate opening a class file and then having to expand all the regions in order to read the code.  If you feel the need to use regions to make your class file more manageable, then it’s quite likely that your class is actually too big and desperately needs to be refactored.  Just . . . no.

But since not everyone is as enlightened as myself (ahem), sometimes I have to work with code that uses lots of #region tags.  In those cases where I can’t delete them all with extreme prejudice, I can at least install the excellent I Hate #Regions Visual Studio extension that will auto-expand regions whenever I open a file and will also display the #region tags in very small type so that they’re not as obnoxious.  Ah, relief!
