OPINION

The Top 10 List of Worst Business IT Decisions

About a month ago we had some people over for dinner and the discussion, at least on my side of the table, drifted to top-10 lists of the Letterman variety. What had happened to make that topical was that Canada’s entry into the late-night talk-show format had just been cancelled and the local papers were snitting about an American talk show host whose “insult puppet” had taken a round out of some walking embarrassment from Quebec.

As part of that conversation, I got challenged to name the top 10 worst IT decisions ever — something I couldn’t do then and still can’t do now, which is why I’m asking for your help in defining the criteria needed and then identifying examples.

The “rules,” as I made them up, are that the technology component of the decision must have been subordinate to the business decision, the decision makers must have had a clear choice, the outcome must have been a business disaster, and the documentation supporting the story should be reasonably clear and comprehensive.

Thus the story about Kmart’s top executive deciding to stop an early and successful supply-chain pilot from going to full implementation — thereby handing the chain’s market to Wal-Mart simply because Kmart was unable to see supply chain as more than an expensive inventory management tool — would be a perfect example if the documentation for it were strong enough for us to be sure it really happened that way.

Unfortunately, the conditions surrounding business IT decisions big enough to make the list are usually underreported relative to their real complexity. In most cases, we don’t really understand what went on and so can’t fairly judge the quality of the decision. For example, a lot of people I talked to about this immediately nominated IBM’s decision to use Microsoft’s MS-DOS instead of CP/M as the worst such decision ever made, but the evidence on that is at best inconclusive.

In fact, I think there’s a better case for considering the decision to license a new — and very limited — OS from Microsoft a brilliant example of creative corporate maneuvering, rather than a mistake.

Technically Indefensible Decisions

The key factor driving many of the decisions made by the PC development team at IBM was fear that their project would meet the same fate as the “future systems architecture” had in 1972.

Now sold as the iSeries but introduced seven years late as the System 38, the database architecture had been developed as a replacement for the card-based transactions model enshrined in the System 360 and was clearly better, smarter, faster and less expensive. Unfortunately, IBM’s System 360 customer base rejected absolutely everything about it, and IBM’s board caved in, almost literally at the last minute, to order the product shelved.

Eight years later, in 1980, the System 360 architecture was even more deeply entrenched. Knowing this, PC developers believed that anything perceived as powerful enough to threaten the mainframe would be impossible to get past the board.

As a result, they picked the 8088 — an eight-bit compatible downgrade to Intel’s already uncompetitive 8086 — instead of the MC68000, turned down both BSD and AT&T Unix for their machine, omitted both real graphics and a hard disk from the design, killed an internal effort to port the original PC client software from the IBM 51XX series, and contracted for a very basic OS from Microsoft to gain the support of William Gates II, father to the version 3.0 we all know and at that time an influential advisor to the IBM board.

In other words, their decisions, while technically indefensible, almost certainly have no place on a top 10 list of the worst-ever business IT decisions. Weakening the product in these ways got IBM into the game and may well have been required to get the company’s PC to market at all.

Destruction of DEC

The destruction of Digital Equipment Corporation (DEC) at the hands of its own management might be a much better bet, despite the ambiguities in the record. By about 1988, the company had grown to be IBM’s biggest competitor by building high-quality hardware and leaving software development mainly to the research community and its commercial spin-offs. At about that time, however, some of the company’s most senior executives fell victim to their own success and started talking about becoming IBM by leveraging their proprietary software to grow services revenues.

What this really meant was that they just didn’t understand their own markets: Commercial VAX users simply didn’t behave the way IBM’s mainframe community did and were fundamentally uninterested in buying support services for packaged applications that generally already worked as advertised.

As a result, when DEC started the Open Software Foundation — an organization founded with IBM and seven others in an attempt to bring Unix development under proprietary control — and cut off new development funding for Unix users, it effectively cut its deepest roots and left itself with no place to go once the services experiment failed.

By the time DEC’s executives understood this, however, the situation had changed. On the positive side, the Alpha CPU was an emerging success, but most of the innovators who had driven the company’s growth had switched to Sun- and Intel-based Unix, while AT&T’s adventure with NCR — itself a clear candidate for this top-10 list — had taken the strategic value out of the OSF.

In theory, the board could have admitted failure at that point and set out to rebuild developer loyalty, but instead it dithered for nearly two years before betting what was left of the company on Microsoft’s promise to take over for the research and other software innovators who had been driven off.

Coming Up with a Sensible List

As a business strategy, trying to outsource innovation to Microsoft turned out to be both naive and suicidal for DEC. But you can see how the people involved were led to that decision one mistake at a time. HP’s current executives, however, have no such excuse: With DEC’s example in front of them, they still chose to direct their company down the same path to oblivion — blissfully assuming that the combination of a hot new server CPU and Microsoft’s historic commitment to software innovation would open the road to IBM-like services revenues.

Wrong on all counts, of course, but it does have the virtue of nailing down HP’s “Itanic” partnership with Intel and Microsoft as the leading candidate for top spot on this list of the worst-ever business IT decisions.

Personally, I’d put DEC’s failure to recognize that commercial VMS users weren’t remotely like mainframers (in their spending or thinking) in solid second place, though I can think of some other contenders too — including AT&T’s purchase of NCR, American Microprocessor’s decision to lay off the first microprocessor design team, the Defense Department’s choice of staff and criteria in the development of Ada, and Intel’s decision to continue 64-KB block addressing in the i80286.

On the other hand, I’ll bet you have some ideas for both examples and criteria, and that’s what this column is about: asking for your help in coming up with a sensible list.


Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.


Comments

  • I have always thought Apple’s decision to extract high margins on hardware rather than volume on software was a disaster. They have always been a leader in user-friendly interfaces, etc., but gave Microsoft/Intel the primary user market despite a huge lead in GUI, etc. As a result, they continue to be the also-ran in the PC marketplace. Despite the obvious advantages, companies refused to pay the Mac premium for the "same" capability they could get with an IBM clone – although they probably paid the difference in training and lost productivity!

    • I don’t think Y2K was a hoax. My company at the time (GE), like many others, had massive Y2K problems. The Y2K problem was definitely overhyped by companies trying to cash in on the hysteria, but if the hysteria wasn’t there, the management incompetents at companies like mine would have done NOTHING until it was too late. Even as it was, the GE management incompetents dragged their feet until the last possible minute, leaving the technical people like myself having to pull off a miracle to keep things running after Dec 31, 1999.
      We did pull off the miracle, but luckily I saw, like a few others, that as long as you have the pointy-haired in charge there would never be a future at GE. I left shortly after, and they have since shipped many of the jobs overseas, a testament to their own incompetence.

      • Oh, sure, it’s available, but you’ll never hear a Sun rep say, "We can give you your choice of Solaris, Linux, or BSD on Sun hardware."

        The problem isn’t availability, but Sun’s unwillingness to push the currently hot technology. When Schwartz says, "We don’t believe in Linux on the server," I hear, "We don’t have a clue."

        Alex

  • While my post is not about a single incident, it shows the incredible stupidity and arrogance of the "brains" in the management at Commodore.
    In 1982 or 1983 the Commodore 64 accounted for about 25% of worldwide computer sales. In 1985 the company introduced the highly advanced Amiga series of computers. Like Sony in the Beta video format wars, C= was mighty proud of the product without knowing quite what to do with it.
    A few years later Sun wanted to start a series of low-end workstations as sort of an entry-level thing. Sun wanted to license the Amiga OS for considerable $$$. Commodore turned them down flat.
    Hollywood special-effects people and other film-industry techs liked & used the Amiga. When Star Trek IV (the one with the whales) went into production, they needed a 20th-century computer for a prop in a scene with Scotty & McCoy. They wanted to use an Amiga system. Commodore turned them down flat.
    So they called Apple. Apple put an engineer and the latest, greatest Mac on the next plane to L.A. The engineer was instructed to make the Mac do whatever was asked of it.
    These are only two in a series of unthinking, boneheaded, bean-counter moves that put the company under. (Or, as some Amiga fans say, they repeatedly shot themselves in the foot with a fifty-caliber machine gun.)

  • No mention of OS/2 yet. I believe that what took it off the rails was IBM’s decision to make it backward-compatible with the 286 processor.
    In those days people did not foresee the continuous cycle of hardware upgrades that we take for granted now. A 286 was regarded as the basic desktop for the foreseeable future, and a 386 was only for a power user. A bit like 35mm compared with 6×6 cameras, if you know anything about cameras – or horses for courses, if you prefer. Therefore the 286, widespread in corporates at the time, had to be supported.
    So IBM spent gazillions and wasted maybe two years trying to get OS/2 to run on 286s (and with only 2MB, I believe). It was not easy because of the 286’s crippled architecture, and it is not clear whether they ever succeeded, because eventually the effort was overtaken by the start of the hardware-upgrade race and the huge drop in memory prices – by then everyone was buying 486s and even 386s were toast.
    Of course, the delay made Microsoft lose patience and led to their split with IBM, and, of course, to Windows.

  • How could you overlook the "Y2K" hoax? It was primarily a business decision (avoiding business interruption) that business owners and leaders had a clear choice to buy into or ignore. The outcome was a business disaster for every industry except the IT industry, and the proof is that productivity gains due to IT investments were not recognized until well into 2002 and 2003. I believe business productivity should have been evident in the 1990s except for the huge investment that businesses made in that period for IT upgrades that were never proven to be required.
    Of course, I admit that I enjoyed the IT spending boost as much as anybody (my best years were 1999 – 2001), but it was still a business spending disaster in my opinion. I’m looking for a great 2004 as business gets back to enjoying higher productivity and smart IT investing.

    • I wonder. A lot of people share your opinion on this one – I don’t; I’m in the camp of those who think the remediation effort avoided the disaster, but I’ve never seen anything that looked remotely like a definitive answer one way or the other.

      On the other hand, you’d still have a point even if I’m right about this, because you could argue that the creation of the Y2K disaster by shortcutting programmers represented a perfectly valid example – and I’d have to agree with you.

      I didn’t think of the Y2K issue, but I did think of a class of disaster into which this one fits: lemming behavior. Real lemmings don’t actually march off cliffs, but decision makers seem to have herd instincts that lead to this all the time (and not just in Apple ads), and I may devote a future column to some examples – like adopting client-server!

      • As bad technical choices go, almost everything about that one was a winner. The original bad choices go back to the process of ripping out the kernel/shell separation in CP/M to make it run faster on an Intel 8088 and so produce QDOS (not that MS has ever really admitted this happened, but they did settle a lawsuit on this very, very quickly), but the expanded/extended memory choices have an even more chuckle-worthy history.

        IBM’s System 360 used 24-bit addressing, but its 32-bit "words" were made up of four bytes. That "wasted" byte in addresses then became a target for programmers seeking ways to save on core (memory) use and eventually trapped IBM into continuing the 24-bit structure pretty much right up to the System 390s (the trap is sketched in code below). One of the attempts to get around this was later copied by Intel – the use of an eight-bit latch to select which 24-bit memory block an address went into.

        What MS copied (intentionally or otherwise, I’ve no idea) was a consequence of the System 370 memory architecture, in which the first 24-bit address space had special properties and you couldn’t buy enough physical memory to get near the 128MB accessible via the latched addresses – allowing some in-memory addresses to actually point to drum or disk pages.

        Some of that was treated as extended – meaning program-accessible – and some as expanded – meaning hypervisor-accessible (I may have that backwards). MS copied one of these (extended?) and Oracle independently copied the other – resulting in an MS-DOS release that supported special drivers for both and confused millions of users.
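
        A minimal sketch of that packed-byte trap – illustrative only, not actual IBM or Microsoft code – assuming programmers stashed flags in the otherwise unused top byte of a 32-bit word holding a 24-bit address:

            #include <stdio.h>
            #include <stdint.h>

            #define ADDR_MASK 0x00FFFFFFu   /* low 24 bits: the address proper */

            /* Pack a flag byte into the "spare" top byte of a 32-bit address word. */
            static uint32_t pack(uint32_t addr, uint8_t flags) {
                return (addr & ADDR_MASK) | ((uint32_t)flags << 24);
            }

            int main(void) {
                uint32_t ok  = pack(0x00ABCDEFu, 0x80);  /* fits in 24 bits: round-trips fine  */
                uint32_t bad = pack(0x01ABCDEFu, 0x80);  /* needs a 25th bit: silently clipped */

                printf("addr=%06X flags=%02X\n", (unsigned)(ok & ADDR_MASK), (unsigned)(ok >> 24));
                printf("addr=%06X flags=%02X\n", (unsigned)(bad & ADDR_MASK), (unsigned)(bad >> 24));

                /* Code written this way works only while every address fits in 24 bits,
                   which is the sense in which the habit locked the platform in place.  */
                return 0;
            }

        Once software everywhere depends on those top-byte flags, widening the address space breaks it, which is why shortcuts like this outlive the hardware generations they were written for.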

  • One that I’ve had to deal with is on a low level, maybe not what you are looking for. Microsoft’s decision, while probably making sense at the time, to put ROM and system-use address space in MS-DOS from location 640K to 1024K was a problem. Later in its life, memory managers had to be made to leap over this part of system memory for programs to get access to RAM at 1MB and up. Perhaps putting those at 0K to 384K would have made managing RAM in later versions of MS-DOS easier. It was still a barrier until Windows 2000, I believe. (The arithmetic behind that barrier is sketched below.)
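
    A rough sketch of that real-mode arithmetic – the segment values below are illustrative only, assuming the standard segment-times-16-plus-offset address calculation of the 8086/8088:

        #include <stdio.h>
        #include <stdint.h>

        /* 8086/8088 real mode: physical address = segment * 16 + offset,
           giving a 20-bit (1MB) address space.                           */
        static uint32_t phys(uint16_t seg, uint16_t off) {
            return ((uint32_t)seg << 4) + off;
        }

        int main(void) {
            uint32_t a = phys(0x9000, 0xFFFF);  /* last byte below the 640K line     */
            uint32_t b = phys(0xA000, 0x0000);  /* start of the ROM/adapter hole     */
            uint32_t c = phys(0xFFFF, 0x000F);  /* very top of the 1MB address space */

            printf("%05X -> %s\n", (unsigned)a, a < 0xA0000 ? "conventional RAM (below 640K)" : "upper memory area (640K-1MB)");
            printf("%05X -> %s\n", (unsigned)b, b < 0xA0000 ? "conventional RAM (below 640K)" : "upper memory area (640K-1MB)");
            printf("%05X -> top of the 1MB real-mode space\n", (unsigned)c);
            return 0;
        }

    Because the 640K-1024K range sits in the middle of the only megabyte real mode can address, anything above 1MB had to be reached through tricks such as the expanded/extended memory drivers and the memory managers this comment describes.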
