Microsoft, Open Source and National Security
Apr 22, 2004 10:21 AM PT
Two weeks ago, I wondered out loud about the top 10 worst IT business decisions ever made and nominated HP's decision to follow DEC down the road to oblivion for the top spot. Today I'd like to suggest that the U.S. Defense Department's continued use of Microsoft's software is likely to top a future list of this kind.
The equation here is simple. First, recognize that Microsoft's software security depends crucially on keeping its source code secret. That's not a comment from an anti-Microsoft bigot -- it's the testimony given under oath by Microsoft vice president Jim Allchin. Even limited release of Microsoft's code, Allchin told Judge Colleen Kollar-Kotelly's federal court in May 2002, would threaten national security because the code is both seriously flawed and widely used in the Defense Department.
But consider that only nine months later, in February 2003, Microsoft announced an agreement giving communist China full access to the source code for Windows and related tools.
You don't negotiate any kind of agreement with communist China in a few days or weeks; it usually takes months or years to get even simple agreements approved. Remember, theirs is a command economy in which nothing happens without government approval. This particular agreement included a personal briefing given to the chairman of the Chinese Communist Party by Bill Gates himself.
Does Not Compute
Think about that for a moment. Here we have a senior Microsoft vice president telling a U.S. court that releasing the code to American companies would threaten national security at about the same time some of his colleagues were negotiating a hand-over of that same code to communist China -- a country that supports North Korea, maintains the largest standing army in the world, and continues to publicize its ideological commitment to the replacement of American democracy with a socialist dictatorship.
The question now is what China might do with its access to Microsoft's source code. Most people would agree, I think, that a few thousand really bright programmers with lots of time and full access to the code could accumulate enough information about its weaknesses to develop viruses and other exploits for use as economic weapons against the United States and key democratic allies like Taiwan.
The question, therefore, isn't whether this could happen but whether it will happen.
The Military Mandate
Business, like law enforcement, reacts in arrears -- i.e., after the event. As a result, no American businessman is going to face criminal charges for failing to react to a threat that may or may not materialize.
The military, however, has a proactive mandate and is required to react to potential threats as if they are real threats. Thus, any officer now in a decision-making role who fails to react effectively to the threat posed by the combination of Microsoft's reliance on obscurity for its operating-system security and communist China's access to the code could eventually be charged with dereliction of duty.
To make such a charge stick, two elements would have to be proved: first, that the officers responsible for the decision to continue using Microsoft's products were aware of the potential security problem; and second, that they had a better alternative open to them.
It's impossible to believe that anyone now working in military IT could reasonably claim competence while denying knowledge of either the general vulnerability of Microsoft's software or communist China's access to the source code. What any future congressional inquiry would focus on, therefore, is whether or not there was a reasonable basis, in the 2003-2004 time frame, for believing that open source offered a better alternative.
In other words, the question would be whether or not there was compelling reason to believe, in 2003 and 2004, that open-source software could be as secure as, or more secure than, proprietary software whose source code is too flawed to be revealed to the public but is available to a foreign power.
Security vs. Obscurity
Consider, on this, what Bruce Schneier says in the introduction to the second edition of his book Applied Cryptography about the difference between security and obscurity:
If I take a letter, lock it in a safe, hide the safe somewhere in New York, then tell you to read the letter, that's not security. That's obscurity. On the other hand, if I take a letter and lock it in a safe, and then give you the safe along with the design specifications of the safe and a hundred identical safes with their combinations so that you and the world's best safecrackers can study the locking mechanism -- and you still can't open the safe and read the letter -- that's security.
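Schneier's safe analogy is a restatement of Kerckhoffs's principle: a system should remain secure even when everything about it except the key is public knowledge. As a rough illustration (not anything from the column itself), here is a minimal Python sketch using only the standard library: the HMAC-SHA256 algorithm is completely public, yet without the secret key an attacker cannot produce a valid authentication tag.

```python
import hashlib
import hmac
import secrets

# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256)
# is published and open to inspection; only the key is secret.
key = secrets.token_bytes(32)
message = b"attack at dawn"

# The legitimate holder of the key computes an authentication tag.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# An attacker knows the algorithm perfectly but not the key, so any
# tag they compute for a tampered message will not verify.
forged = hmac.new(b"wrong key", b"attack at dusk", hashlib.sha256).hexdigest()

print(hmac.compare_digest(tag, forged))  # False: the forgery fails
```

The point of the sketch is that publishing the locking mechanism -- the algorithm -- costs nothing; security rests entirely on the key, which is exactly the property that security-through-obscurity designs lack.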
There's no possibility of obscurity in open source. That's one of its great values and part of what Eric Raymond meant with his comment that "given enough eyeballs, all bugs are shallow." In this sense, open source is a continuation of the academic process of peer review, in which the feedback loop between those who originate new ideas and colleagues who review the work generates a Darwinian competition of ideas in which the fittest survive.
That's the difference: Microsoft relies on obscurity but sells the safe to communist China, while open source subjects both the code and the design ideas behind it to intensive peer review and so evolves increasingly secure systems.
As choices go, this pretty much defines the no-brainer category, with open source winning every time -- and establishes the consequence that some future congressional inquiry may nominate the Pentagon's current failure to replace every Microsoft product with an open-source equivalent as the worst IT decision ever made.
Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.