Enrolling For Smartcard Certificates Across Domains

(I originally posted this on my MSDN blog.)

In my current work, I have a specific scenario involving smart cards that works (roughly) as follows:

  1. Users have accounts in domain A.
  2. Administrators have accounts in domain B.
  3. Administrators need to run an application in domain B that will allow them to burn smart cards that allow users to access resources in domain A.

It turns out that it’s really hard to find decent documentation and C# sample code for doing this sort of thing.  After much web searching, experimentation, and picking the brains of people much smarter than me, I have some proof-of-concept code that I’d like to share.

The Disclaimer

First, the disclaimer.  This code is proof-of-concept only, not production-ready.  It represents only my current understanding of how things work, and is probably laughably wrong in some respects.  I’m emphatically not a smartcard, certificate, or security expert.  I’ve verified that it works on my machine, but that’s all I can promise.  Corrections welcome!

The Concept

Ok, now that that’s out of the way, let’s talk about the concept.  The basic idea here goes like this:

  • The client builds a certificate request for a user in domain A.
  • The client gets a string representation of the request and sends it across the network to the server in the other domain.
  • The server component runs under the credentials of a system account that has the right to enroll on behalf of other users and has a valid enrollment agent certificate.
  • The server wraps the client’s certificate request inside another request, sets the requester name to the subject name of the client request, and signs it with its agent certificate.
  • The server submits the agent request, gets the resulting certificate string, and returns it to the client.
  • The client then saves the certificate to the smartcard.

The code uses the new-ish Certificate Enrollment API (certenroll) that’s available only on Vista+ and Windows Server 2008+.  It won’t run on XP or Server 2003.
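
By the way, if you want your application to fail gracefully on older systems instead of dying with an obscure COM error, you can check the OS version before touching any of the certenroll types.  This little guard is purely my own sketch (it’s not part of the original scenario); it relies on the fact that Vista and Server 2008 both report OS version 6.0:

// certenroll ships with Windows Vista / Server 2008 (NT 6.0) and later,
// so check the OS version before creating any CERTENROLLLib objects.
static bool IsCertEnrollAvailable()
{
    OperatingSystem os = Environment.OSVersion;
    return os.Platform == PlatformID.Win32NT && os.Version.Major >= 6;
}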

The Code

So here it is.  I used the CopySourceAsHTML Visual Studio add-in because it works well in RSS, but the line wrapping is a bit obnoxious.  Oh well.  You’ll need to add references to two COM libraries in order to build this code:

  • CertCli 1.0 Type Library
  • CertEnroll 1.0 Type Library

using System;
using System.Collections.Generic;
using System.Text;
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography;
using CERTENROLLLib;
using CERTCLIENTLib;
using System.Text.RegularExpressions;

namespace CertTest
{
    // Disposition values returned from certificate request submission.
    public enum RequestDisposition
    {
        CR_DISP_INCOMPLETE = 0,
        CR_DISP_ERROR = 0x1,
        CR_DISP_DENIED = 0x2,
        CR_DISP_ISSUED = 0x3,
        CR_DISP_ISSUED_OUT_OF_BAND = 0x4,
        CR_DISP_UNDER_SUBMISSION = 0x5,
        CR_DISP_REVOKED = 0x6,
        CCP_DISP_INVALID_SERIALNBR = 0x7,
        CCP_DISP_CONFIG = 0x8,
        CCP_DISP_DB_FAILED = 0x9
    }

    // Encoding flags passed to ICertRequest2 Submit and GetCertificate.
    public enum Encoding
    {
        CR_IN_BASE64HEADER = 0x0,
        CR_IN_BASE64 = 0x1,
        CR_IN_BINARY = 0x2,
        CR_IN_ENCODEANY = 0xff,
        CR_OUT_BASE64HEADER = 0x0,
        CR_OUT_BASE64 = 0x1,
        CR_OUT_BINARY = 0x2
    }

    // Request format flags passed to ICertRequest2 Submit.
    public enum Format
    {
        CR_IN_FORMATANY = 0x0,
        CR_IN_PKCS10 = 0x100,
        CR_IN_KEYGEN = 0x200,
        CR_IN_PKCS7 = 0x300,
        CR_IN_CMC = 0x400
    }

    // Flags passed to ICertConfig GetConfig.
    public enum CertificateConfiguration
    {
        CC_DEFAULTCONFIG = 0x0,
        CC_UIPICKCONFIG = 0x1,
        CC_FIRSTCONFIG = 0x2,
        CC_LOCALCONFIG = 0x3,
        CC_LOCALACTIVECONFIG = 0x4,
        CC_UIPICKCONFIGSKIPLOCALCA = 0x5
    }

    class Program
    {
        static void Main(string[] args)
        {
            // Do this on the client side
            SmartCardCertificateRequest request = new SmartCardCertificateRequest("user");
            string base64EncodedRequestData = request.Base64EncodedRequestData;

            // Do this on the server side
            EnrollmentAgent enrollmentAgent = new EnrollmentAgent();
            string base64EncodedCertificate = enrollmentAgent.GetCertificate(base64EncodedRequestData);

            // Do this on the client side
            request.SaveCertificate(base64EncodedCertificate);
        }
    }

    public class SmartCardCertificateRequest
    {
        IX500DistinguishedName _subjectName;
        IX509PrivateKey _privateKey;
        IX509CertificateRequestPkcs10 _certificateRequest;

        public SmartCardCertificateRequest(string userName)
        {
            BuildSubjectNameFromCommonName(userName);
            BuildPrivateKey();
            BuildCertificateRequest();
        }

        public string Base64EncodedRequestData
        {
            get
            {
                return _certificateRequest.get_RawData(EncodingType.XCN_CRYPT_STRING_BASE64);
            }
        }

        // Writes the issued certificate back to the smart card that holds the private key.
        public void SaveCertificate(string base64EncodedCertificate)
        {
            _privateKey.set_Certificate(EncodingType.XCN_CRYPT_STRING_BASE64, base64EncodedCertificate);
        }

        private void BuildSubjectNameFromCommonName(string commonName)
        {
            _subjectName = new CX500DistinguishedName();
            _subjectName.Encode("CN=" + commonName, X500NameFlags.XCN_CERT_NAME_STR_NONE);
        }

        // Configures a key pair to be generated on the smart card by the base smart card CSP.
        private void BuildPrivateKey()
        {
            _privateKey = new CX509PrivateKey();
            _privateKey.Pin = "0000";  // Hard-coded PIN: proof-of-concept only
            _privateKey.ProviderName = "Microsoft Base Smart Card Crypto Provider";
            _privateKey.KeySpec = X509KeySpec.XCN_AT_SIGNATURE;
            _privateKey.Length = 1024;
            _privateKey.Silent = true;
        }

        private void BuildCertificateRequest()
        {
            _certificateRequest = new CX509CertificateRequestPkcs10();
            _certificateRequest.InitializeFromPrivateKey(X509CertificateEnrollmentContext.ContextUser, (CX509PrivateKey)_privateKey, null);
            _certificateRequest.Subject = (CX500DistinguishedName)_subjectName;
            _certificateRequest.Encode();
        }
    }

    public class EnrollmentAgent
    {
        private readonly string _certificateTemplateName = "MyTemplate";
        private readonly Regex _commonNameRegularExpression = new Regex("CN=(.+?)(?:[,/]|$)", RegexOptions.Compiled);

        public string GetCertificate(string base64EncodedRequestData)
        {
            IX509CertificateRequestPkcs10 userRequest = new CX509CertificateRequestPkcs10();
            userRequest.InitializeDecode(base64EncodedRequestData, EncodingType.XCN_CRYPT_STRING_BASE64);
            IX509CertificateRequestCmc agentRequest = BuildAgentRequest(userRequest);
            string certificate = Enroll(agentRequest);
            return certificate;
        }

        // Wraps the user's PKCS #10 request in a CMC request signed with the agent certificate.
        private IX509CertificateRequestCmc BuildAgentRequest(IX509CertificateRequestPkcs10 userRequest)
        {
            IX509CertificateRequestCmc agentRequest = new CX509CertificateRequestCmc();
            agentRequest.InitializeFromInnerRequestTemplateName(userRequest, _certificateTemplateName);
            agentRequest.RequesterName = GetCommonNameFromDistinguishedName(userRequest.Subject);
            agentRequest.SignerCertificates.Add((CSignerCertificate)GetSignerCertificate());
            agentRequest.Encode();
            return agentRequest;
        }

        private string GetCommonNameFromDistinguishedName(IX500DistinguishedName distinguishedName)
        {
            MatchCollection matches = _commonNameRegularExpression.Matches(distinguishedName.Name);
            if (matches.Count > 0)
            {
                return matches[0].Groups[1].Value;
            }
            else
            {
                throw new Exception("There is no common name defined in the distinguished name '" + distinguishedName.Name + "'");
            }
        }

        private ISignerCertificate GetSignerCertificate()
        {
            ISignerCertificate signerCertificate = new CSignerCertificate();
            signerCertificate.Silent = true;
            signerCertificate.Initialize(false, X509PrivateKeyVerify.VerifyNone, EncodingType.XCN_CRYPT_STRING_BASE64, GetBase64EncodedEnrollmentAgentCertificate());
            return signerCertificate;
        }

        // Finds the enrollment agent certificate in the service account's personal store.
        private string GetBase64EncodedEnrollmentAgentCertificate()
        {
            X509Store store = new X509Store(StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            try
            {
                X509Certificate2Collection enrollmentCertificates = store.Certificates.Find(X509FindType.FindByTemplateName, "EnrollmentAgent", true);
                if (enrollmentCertificates.Count > 0)
                {
                    X509Certificate2 enrollmentCertificate = enrollmentCertificates[0];
                    byte[] rawBytes = enrollmentCertificate.GetRawCertData();
                    return Convert.ToBase64String(rawBytes);
                }
                else
                {
                    throw new Exception("The service account does not have an enrollment agent certificate available.");
                }
            }
            finally
            {
                store.Close();
            }
        }

        // Submits the signed agent request to the CA and retrieves the issued certificate.
        private string Enroll(IX509CertificateRequestCmc agentRequest)
        {
            ICertRequest2 requestService = new CCertRequestClass();
            string base64EncodedRequest = agentRequest.get_RawData(EncodingType.XCN_CRYPT_STRING_BASE64);
            RequestDisposition disposition = (RequestDisposition)requestService.Submit((int)Encoding.CR_IN_BASE64 | (int)Format.CR_IN_FORMATANY, base64EncodedRequest, null, GetCAConfiguration());
            if (disposition == RequestDisposition.CR_DISP_ISSUED)
            {
                string base64EncodedCertificate = requestService.GetCertificate((int)Encoding.CR_OUT_BASE64);
                return base64EncodedCertificate;
            }
            else
            {
                string message = string.Format("Failed to get a certificate for the request.  {0}", requestService.GetDispositionMessage());
                throw new Exception(message);
            }
        }

        private string GetCAConfiguration()
        {
            CCertConfigClass certificateConfiguration = new CCertConfigClass();
            return certificateConfiguration.GetConfig((int)CertificateConfiguration.CC_DEFAULTCONFIG);
        }
    }
}

Indiana Jones And The Temple Of Software Doom

(I originally posted this on my MSDN blog.)

I want to expand a bit more on something I mentioned in my last post about not piling new code on top of old code.

At various times in my career I’ve had the dubious pleasure of inheriting applications that have gone through several different owners over several years.  It’s fascinating how you can, with a lot of effort, reconstruct sequences of events and long-ago circumstances that have been altogether forgotten.  After a while you can tell who wrote a piece of code, and when, and sometimes why, at a glance.  It’s like peeling back layer upon layer of detritus to reveal mysterious clues and ancient traps.  Seriously, sometimes I feel like I should put on an Indiana Jones hat and start exclaiming, “Now, here’s an excellent specimen from the early ‘Mike’ dynasty!  Watch out, it’ll kill you if you touch it!”

I like to call this experience “software archeology” or “forensic development”.  (Aside: I came up with both of those terms on my own, thank you very much, but a quick web search shows that I’m not the first for either of them.  Sigh.)  On the whole, it’s an activity I’d rather not engage in if I can avoid it, though it does have a certain charm when you finally figure out the keys to the puzzles.

How’d those code bases get to be in that state, anyway?  Mostly because the previous developers didn’t take the time to understand the code they inherited, I suppose, and just bolted new code on top of the old code with an absolute minimum of changes.  That technique seems to make sense in the short term.  After all, no one wants to break working code, right?  And if we don’t understand it, then it’s best to avoid touching it as much as possible.  However, in the long term that idea usually ends up creating a byzantine edifice of code that’s very difficult to work with.  You haven’t avoided any work or risk; you’ve merely deferred it to the next guy.

I once inherited an application that uploaded crash dumps from game consoles to a server for processing.  The code that actually sent the files from the console and received them on the server was ridiculously complex and I had a really hard time understanding it.  I peeled back layer after layer of seemingly useless code over several days until I finally managed to reconstruct the entire sequence of events that led to the current state of things; then I just shook my head and laughed.  I’m recalling this from memory so I might get a few details wrong, but here’s what I discovered.

You see, originally the application just uploaded a single crash dump file to the server.  When the server noticed that a file had been uploaded, it would process it.  Very straightforward.

Later, a new requirement was added for uploading additional supporting files (like logs) along with the crash dump.  Hmm, this added quite a bit of complexity because now uploading isn’t an atomic action any more.  As we upload one file at a time, the server needs to know whether it has all the files and can begin processing or whether it needs to wait for more files to be uploaded.  Someone added a bunch of code: on the client, it generated a manifest file listing all of the files in the upload; on the server, it tracked newly-initiated uploads, partially-finished uploads, and completed uploads.  On the server side this involved spinning up a new thread for each upload, writing status information to the database for each state, and all kinds of things.  There was logic to detect uploads that were abandoned halfway through as well as to deal with unexpected server restarts.

Everything was fine for awhile until a new requirement came along that the application should be able to upload crash dumps from external sites across the internet.  Game companies are usually quite secretive and paranoid about their unreleased projects so the uploaded files needed to be encrypted in transit to avoid leaking confidential information.  Someone whipped up a homebrew encryption algorithm (which later proved to be horribly broken, as virtually all homebrew encryption algorithms are) and added code to encrypt each file before transit and decrypt it on the server.

Fast-forward again and we find that people got impatient waiting for all of these files to be uploaded.  Crash dumps can be quite large, of course, so wouldn’t it be great if we could compress them before uploading them?  Sounds good!  And so code was added to take all of the upload files on the client (including the manifest), generate a zip file containing them, and unpack the zip file (using custom-written code) on the server.

At each point in the story, as far as I can tell, the developer took pains to not change any existing code but rather just add new code as a layer around the old code with minimal, or sometimes even no, changes to the existing system.  That sounds good, right?  It’s a time-honored engineering technique.  Heck, it’s even got its own pattern in the GoF book: Decorator.  So what’s wrong?

Well, think about the total behavior of the system that I inherited.  It took a bunch of loose files, cataloged them and generated a manifest listing them all, then encrypted each individual file.  It then took all the encrypted files and put them in a zip file, which it uploaded to the server.  On the server side, it first unpacked the zip file into a directory, wrote some info to the database, spun up a thread, and pointed the thread to the folder.  The thread decrypted each file, opened the manifest, and started verifying the existence of each file listed in the manifest, writing information to the database as it did so.  Once it figured out that all the files were there, it handed the folder name off to the processing subsystem.

Once my team and I had reverse-engineered all the different layers and figured out what each one did, we realized that we could chop out most of the code.  The manifest system that was built to deal with multi-file uploads was completely unnecessary because we weren’t uploading multiple files any more; we were just uploading a single zip file as an atomic action, just like in the original design.  We ripped the whole thing out, which drastically simplified the code.  No more tracking state in the database, no more multithreading.

We then ripped out the homebrewed encryption system and used standard AES encryption as supported by standard zip libraries.  This further simplified the code plus allowed us to easily inspect and create encrypted packages with a zip utility for testing and debugging purposes.  Plus the encryption actually, you know, worked.

The final design was that the client took all the files to be uploaded and put them into a standard encrypted zip file which it uploaded to the server.  When the server received the zip file, it unpacked it using a third-party zip library and handed the files off to the processing sub-system.  End of story.
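
Just to make that shape concrete, here’s a rough sketch of the two halves.  I’m using the DotNetZip (Ionic.Zip) library for illustration because it supports WinZip-style AES encryption, but that library choice, the class and method names, the paths, and the password handling are all placeholder assumptions on my part, not what the actual project used:

using System;
using Ionic.Zip;  // DotNetZip; an assumed library choice for this sketch

public static class UploadPackaging
{
    // Client side: put every file for the upload into one AES-encrypted zip.
    public static void CreatePackage(string[] files, string zipPath, string password)
    {
        using (ZipFile zip = new ZipFile())
        {
            zip.Encryption = EncryptionAlgorithm.WinZipAes256;
            zip.Password = password;  // placeholder; real key management is up to you
            foreach (string file in files)
            {
                zip.AddFile(file);
            }
            zip.Save(zipPath);
        }
    }

    // Server side: unpack the zip and hand the folder to the processing subsystem.
    public static void UnpackPackage(string zipPath, string targetDirectory, string password)
    {
        using (ZipFile zip = ZipFile.Read(zipPath))
        {
            zip.Password = password;
            zip.ExtractAll(targetDirectory);
        }
    }
}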

Ok, so all of that is a very long-winded way to make my point: when modifying an existing application, don’t just bolt new code on top of the old code in order to avoid changes.  If you do, you’ll end up with a Rube Goldberg system that makes absolutely no sense.  Instead, think about the old requirements, the new requirements, and actually change the existing code to reflect the new requirements.  Sure, it’s more work up front, and more risk if you don’t have a unit test safety net, but your code base will wind up much smaller and much more maintainable in the long run.

That old code that you’re trying so hard to preserve untouched?  It’s not an asset, it’s a liability.  Get rid of it.

Source Code Is A Liability, Not An Asset

(I originally posted this on my MSDN blog.)

In my previous job, I once scribbled something on a window that caused a bit of head-scratching by people who read it:

“Source code is a liability, not an asset!”

Of course, the idea isn’t original with me.  I got it from (I think) Tim Ottinger, and Alan Cooper said it earlier, and for all I know someone else said it even earlier.

(As an aside, one of the painful realizations that have dawned on me as a new blogger is that I’m not very likely to invent and blog about a truly unique idea.  I’m very thankful that I’m standing on the shoulders of so many giants, but the bar for originality is, um . . . rather high.  That’s ok.  I can still restate other people’s ideas in my own terms, within my own context, and I think that adds value to the community.)

Anyway – source code is a liability.  That doesn’t make much sense at first.  Don’t we spend our entire professional careers designing, writing, and testing code?  That’s what we do, right?  Well, no.  What we actually do is deliver business value, or solutions to real-world problems, or however you want to state it.  That’s why companies employ software developers.  Source code is merely the necessary evil that’s required to create value.  With few exceptions, source code itself is not a valuable commodity.

Source code is akin to financial debt.  Suppose you want to buy a house to live in.  Like most people, you don’t have the financial wherewithal to pay cash for a house, so you take out a mortgage instead.  The mortgage is the instrument that allows you to have a house to live in.  But the mortgage itself is not a great thing to have.  It’s a liability.  You have to pay interest on your debt every month.  You always want your mortgage to be as small as possible while still allowing you to buy a house that meets your needs.

And as we’ve seen in the news recently, if you don’t do some careful thinking about the terms and implications of your mortgage before you buy a house, you can wind up in a load of trouble.  It’s possible to end up underwater in a situation where your debt is actually bigger than your house is worth.

So too in software development.  We start with a value proposal: something we want to accomplish.  Writing code is the means we use to realize that value.  However, the code itself is not intrinsically valuable except as a tool to accomplish some goal.  Meanwhile, code has ongoing costs.  You have to understand it, you have to maintain it, you have to adapt it to new goals over time.  The more code you have, the larger those ongoing costs will be.  It’s in our best interest to have as little source code as possible while still being able to accomplish our business goals.

This has implications for the way we do metrics and the way we design software.  For example, LOC (lines of code) numbers should never be taken as an asset measurement.  “This week our team wrote 10k LOC!  We’re very productive!”  Well, no, that number doesn’t say anything about whether you’re being productive, any more than saying, “This week we took on $10,000 in new debt!” indicates whether you’re being productive.  The question is entirely, “What did you do with that LOC/debt?”

Taking the long term view, probably one of the most productive statements a team can make is, “This week we removed 10K LOC while preserving all existing functionality!”  That doesn’t sound very sexy on the face of it but it contributes to your organization’s long-term health in the same way as reducing your financial debt.

Software designs that favor less code tend to be better designs that give you a higher return on your investment.  You can take that too far, certainly; well-written, maintainable code is often more verbose than dense, impenetrable code, but you should always favor maintainability.  The difference between the two is relatively minor compared to decisions about whether to use a third-party library or write your own implementation, or whether to refactor existing code to meet changing requirements vs. simply piling your new stuff on top of the old stuff.

(Another aside – ever notice how companies will happily spend multiple months of an employee’s time, at a true cost of at least $10,000 a month, counting benefits, office space, and the like, in order to avoid paying $1000 or even $500 to license a third-party software package that does the same thing?)

I’m looking right now at a project that’s been passed through the hands of several different owners over several years.  It has 134K LOC checked in to source control, and it could likely be implemented in about 10% of that, maybe 5%, using common, industry-standard libraries available today.  Granted, many of those libraries didn’t exist when this project was first started, but the code wasn’t refactored to keep up with the times.  In some respects, it’s underwater now in terms of its debt-to-value ratio.

When you’re faced with a project like that, one of the most beneficial things you can do is to start paying down the debt.  Bust out your delete key and start whacking.  Until you do, that debt is going to paralyze any other efforts to deliver new value to your customers.  Of course, it’s got to be done very carefully, especially if you don’t have the safety net of unit or automated acceptance tests (and in situations like this, you usually don’t).  Nonetheless, it’s absolutely critical that you get yourself above water as soon as possible.


Formalism vs. Hermeneutics

(I originally posted this on my MSDN blog.)

I’m about two-thirds of the way through Object Thinking by David West.  I picked it up on Mo Khan’s recommendation and it’s definitely worth a read.  It’s not a new book (published in 2004) but the principles it explains are pretty timeless.  I found the historical and philosophical explorations in the first couple of chapters to be particularly interesting.  David West describes the long-running struggle between two schools of thought in software development: hermeneutics and formalism – or, in the words of Michael McCormick, “the scruffy hackers versus the tweedy computer scientists”.

Formalism is, generally speaking, a worldview holding that everything functions according to a finite set of unambiguous rules:

“Central to this paradigm are notions of centralized control, hierarchy, predictability, and provability (as in math or logic).  If someone could discover the tokens and the manipulation rules that governed the universe, you could specify a syntax that would capture all possible semantics.”

Unsurprisingly, computer science has its deepest roots in formalism.  This colors the way practitioners have approached the software development process:

“As a formalist, the computer scientist expects order and logic.  The ‘goodness’ of a program is directly proportional to the degree to which it can be formally described and formally manipulated.  Proof – as in mathematical or logical proof – of correctness for a piece of software is an ultimate objective.  All that is bad in software arises from deviations from formal descriptions that use precisely defined tokens and syntactic rules.  Art has no place in a program.”

Conversely, hermeneutics is a worldview that espouses interpretation, heuristics, and emergence:

“The hermeneutic conception of the natural world claims a fundamental nondeterminism.  Hermeneuticists assert that the world is more usefully thought of as self-organizing, adaptive, and evolutionary with emergent properties.”

Hermeneutics has a long tradition in software development, though it’s been rather a minority point of view in that field.  It has weighty implications as well:

“The hermeneutic [developer] sees a world that is unpredictable, biological, and emergent rather than mechanical and deterministic.  Mathematics and logic do not capture some human-independent truth about the world.  Instead, they reflect the particularistic worldview of a specific group of human proponents.  Software development is neither a scientific nor an engineering task.  It is an act of reality construction that is political and artistic.”

Object-oriented (behaviouristic) software design, XP, Scrum, design patterns, and other practices under the Agile umbrella are clearly hermeneutic in heritage.  They disavow the idea that software development is merely a quest for a more perfect process and instead embrace the messiness and uncertainty of reality.  That’s not to say that those practices are inherently undisciplined.  On the contrary, they require great discipline.  But they also celebrate and take full advantage of the human mind’s ability to cogitate in ways that can’t be neatly summarized in a formula.  To put it bluntly, Agile practices insist that people matter.

It’s impossible to say that one approach is clearly superior in all aspects.  They each have their uses.  I’m not sure you could make meaningful progress in fundamental computer science without a formalist approach.  The formalists give us the raw building blocks upon which the software industry is built.  However, when you turn your attention to the task of building real-world software for real-world people in real-world environments, you discover something disconcerting . . . the real world is friggin’ messy!  Yeah, this job would be perfect if it weren’t for people.  Computer science formalism tends to break down as you scale up in the real world, and that’s where Agile hermeneutics steps in with its focus on flexibility, adaptation, heuristics, and ultimately, delivered value.

Does it necessarily have to be that way?  Is formalism intrinsically unsuited for the real world?  I don’t know.  I do know that formalism simply doesn’t yet have enough “precisely defined tokens and syntactic rules” to deal with the overwhelming complexity and ambiguity that we humans introduce to the equations.  To some extent that merely demonstrates the relative youth and immaturity of the field.  I have no doubt that formal computer science will continue to grow and bring deeper and more powerful understandings to bear.  However, it seems that the more formal tools we have at our disposal, the more we overreach them in trying to solve ever more complex problems.  I have no idea where our industry will ultimately end up but Agile practices are definitely here to stay for the foreseeable future.

It was interesting to observe the Alt.Net Seattle 2009 conference with these ideas in mind.  Take, for example, the Open Spaces process we used to run the conference.  It’s a very squishy sort of process, and almost (dare I say it?) mystical in its flavor.  But somehow it works, and works well.  In fact, I was struck by the group’s acceptance and willingness to go along with the opening and closing exercises, which aren’t typically what you’d expect engineering types to tolerate.  It helps that the Alt.Net participants are predisposed to that way of thinking, I guess.  All in all, the experience helped to confirm for me that yes, these people think in a different way.

I tend to be a formalist by nature and training.  However, there’s a part of me that recognizes and embraces (with great relief) the value of aformal thinking.  In my experience, it’s simply a better fit for many of the challenges I face in my professional and personal life.  As West writes:

“ . . . object thinking and agile thinking are not a means for solving software problems; they are a means for creating better people and better teams of people.  Object thinking and XP will produce a culture, not a technique; they will give rise to better people capable of attacking any kind of problem and able to develop systems on any level of complication and scale.”

Thank You

(I originally posted this on my MSDN blog.)

I attended the Alt.Net Seattle conference this weekend and thoroughly enjoyed it.  There were lots of interesting sessions, which was great, but for me the best part of the experience was meeting and putting faces to so many people who have expanded my horizons and sharpened my abilities via blogs, books, email lists, and even Twitter.

I said this at the closing session, but I thought I’d repeat it here online – I owe a huge debt of gratitude to the collective community of folks who have spent so much of their own time to educate people like me.  Thank you all.  My professional career would not be the same without you.  I can’t pay you back, so I hope to pay it forward.