Steve Jobs

1.

The “computer revolution” of the last twenty years or so is often discussed as if it were a single huge phenomenon. But it has involved many separate technical and business trends moving in different directions at different speeds. The technical change that has had the biggest impact on daily life has been the phenomenal advancement in semiconductor chips over the past fifteen years. The chips, tiny collections of electronic circuits, fall into two large categories: memory chips (which store information) and processors (which carry out instructions about what to do with the information). The central-processing chip that controls one of today’s typical personal computers operates about one hundred times as fast as the chip supplied with the original IBM personal computer in 1981. The memory chips on today’s typical personal computers can store five hundred times as much information as the original IBM PC did, for about the same price.1

These huge advances in a computer’s speed and capacity have in turn made possible far more sophisticated programs than computers could previously run. Twenty years ago, all interaction between a computer and its human user had to be carried out in computer language, which ranges from “assembly language code,” consisting entirely of arcane abbreviations, to programming languages like BASIC or Pascal whose command structures use recognizable English words (“IF/THEN/ELSE,” “GOTO,” and so on). It took a great leap to move from such languages to the “graphical interface,” which makes it possible to control the computer by manipulating simple iconic drawings on the screen with a hand-held “mouse.” Such “graphical” software required much more memory and much faster processing speed than early machines possessed.
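
To make the contrast concrete, here is a minimal sketch—written in Python for compactness rather than in the BASIC or assembly language of the period—of the kind of typed, branching dialogue early users had to conduct. The commands and file names are invented for illustration; the point is simply that every action had to be spelled out in text and handled by explicit IF/ELSE logic, which the graphical interface replaced with pictures and a pointing device.

    # A hypothetical command loop, illustrating text-driven interaction.
    files = {"LETTER.TXT": "Dear Sir ...", "BUDGET.TXT": "Rent 400"}

    while True:
        command = input("> ").strip().upper()
        if command == "DIR":                  # list the stored files
            for name in files:
                print(name)
        elif command.startswith("TYPE "):     # display one file's contents
            name = command.split(maxsplit=1)[1]
            print(files.get(name, "File not found"))
        elif command == "EXIT":               # leave the loop
            break
        else:
            print("Bad command or file name")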

Business empires have sprung up (and in many cases crashed down) in the last decade, as technology and market tastes have been constantly changing. For example, Motorola and, especially, Intel dominate the processing-chip business. Motorola makes the main processing chips used in Macintosh computers; Intel makes the chips for most other personal computers. These two companies have prospered through a combination of technical excellence and business aggressiveness, fighting many legal wars to defend their chip designs against imitators.

Almost every niche of the computer industry contains its own idiosyncratic business dramas. Fifteen years ago, most of the information processed on a personal computer had to be stored on a “floppy disk,” which worked very slowly and had limited space. The development of small hard-disk drives, which store far more data inside the computer and retrieve it far more quickly, created a whole new industry in disk production. Since then, companies specializing in hard-disk drives, such as Shugart, Core, Seagate, Maxtor, and Conner, have surged to profitability and momentary prominence within the industry only to fall back when new competitors and new technologies emerged. The market for display screens, by contrast, has been dominated through the years by big, familiar companies like Sony, Zenith, and NEC (along with a few newcomers, like the Japanese firm Nanao). A dozen years ago Compaq started producing the first “IBM-compatible” personal computers—machines that competed successfully with IBM by being engineered to accommodate IBM software and accessories but at a lower price and with greater speed. Compaq’s original innovation was to produce a portable computer, much smaller and lighter than the IBM PC, but it has made several dramatic changes in business strategy in order to survive. Other companies that seemed like promising clone-makers a decade ago have vanished altogether.

During this same period software firms have also gone through radical ups and downs. In the early 1980s, the most popular word-processing program was WordStar. Two computer scientists at Brigham Young University formed a company called Satellite Software and introduced their word-processing program, WordPerfect. By the late 1980s WordPerfect was the dominant word processor in the world, and WordStar had virtually disappeared (along with the Wang corporation, which previously had led the market for word processors in large corporations but which went into bankruptcy in 1992). But late last year WordPerfect replaced its president, announced it would lay off one sixth of its work force, and displayed other signs of corporate distress. Nearly fifteen years ago, a program called VisiCalc dramatically shaped the growth of the computer industry. VisiCalc introduced the concept of a “spreadsheet”—a grid of dynamically linked numbers and formulas, which allowed users to see how changes in each variable, from mortgage-interest rates to monthly sales estimates, would affect the final result. The VisiCalc spreadsheet gave many business officials and bankers their first clear idea of how small computers might be useful to them.
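
The essence of the spreadsheet—formulas defined over other cells, recomputed whenever an input changes—can be suggested with a short sketch in Python. The mortgage figures below are invented for illustration; a real spreadsheet lets the user change one number in one cell and watch every dependent cell update.

    # A standard fixed-rate amortization formula, used here to show the
    # "what if" recalculation that made spreadsheets valuable to bankers.
    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12            # monthly interest rate
        n = years * 12                  # total number of payments
        return principal * r / (1 - (1 + r) ** -n)

    for rate in (0.07, 0.08, 0.09):     # vary one input...
        payment = monthly_payment(150_000, rate, 30)
        print(f"at {rate:.0%}, the monthly payment is ${payment:,.2f}")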

This in turn helped to create a market for the first widely available personal computer: the Apple II, which was introduced in 1977. In retrospect, the spreadsheet helped to bring about the merger and takeover binge of the 1980s. With spreadsheets, analysts could quickly crank out calculations of how share price, interest rate, asset valuation, and other factors would affect takeover bids. People interested in takeovers have always made such calculations, of course, but without spreadsheets the process would have been too slow and cumbersome to permit bidding wars like those described in Barbarians at the Gate by Bryan Burrough and John Helyar. Yet VisiCalc, for all its historic effect, has now practically disappeared, having been displaced in the mid-1980s by Lotus’s spreadsheet “1-2-3”—which in its turn has largely been surpassed by newer programs.


Of these many dramas, two are most frequently discussed in the computer world today. One involves the long decline of IBM. Twenty-five years ago it dominated its business more thoroughly than any other firm in any other field. So complete was its mastery of the technology, marketing, and standards for the computer business that it spent much of the 1970s fighting an anti-trust suit the US Justice Department filed to give IBM’s competitors a chance to survive. Since the mid-1980s IBM has lost more than $70 billion of stock valuation and has eliminated 200,000 jobs. Efforts to explain this decline have become the business press’s counterpart to analyzing the fall of the Roman Empire, with no Gibbon yet at hand.

The other major drama is the rise of Microsoft, the giant of the software business in the United States and worldwide. Microsoft’s story is usually paired with IBM’s because they offer such striking contrasts. Microsoft was a tiny firm at the beginning of the 1980s; by the end of the decade, its stock valuation exceeded that of IBM. Microsoft, based in Redmond, Washington, outside Seattle, has a cocky, swaggering corporate reputation and bears the stamp of its thirty-eight-year-old founder, Bill Gates. (IBM, based in Armonk, New York, has a longstanding reputation for conformity and stodginess and has difficulty overcoming it.) Microsoft got its crucial break with the crumbs from IBM’s table, a contract in 1980 to provide software for the first IBM personal computer. The result was DOS (Disk Operating System), the set of instructions used to operate nearly all IBM-compatible personal computers. Microsoft’s copyright control of this crucial system is the foundation of its spectacular successes. In the last decade Microsoft has sold at least 60 million copies of its operating system software, which accounts for about 80 percent of all such software sold in the world. Through the mid-1980s Microsoft was IBM’s partner in software development; late in the decade it split with IBM, and the two companies now battle for the power to set standards for the personal computer industry.

Writing about business has begun to catch up with these developments. The books under review cover many different aspects of high-tech business competition. The common message that emerges is that for all the skill and determination that have gone into creating the new industrial empires, blind luck has often been decisive.

2.

Randall Stross is a California writer whose previous book, Bulls in the China Shop, described the misadventures of foreign firms in China. The story he tells in Steve Jobs and the NeXT Big Thing is the second chapter of one of the computer industry’s most familiar and important tales.

The first chapter of this story involves the efforts of Steve Jobs and Steve Wozniak, two young Californians, to create the first commercially successful personal computer, which they called the Apple. Computers are generally classified into three categories. “Mainframe” computers are the huge machines used by airlines, banks, the federal government, and other large organizations. IBM is the traditional power in this field. “Minicomputers” are smaller machines still designed for institutional use, for example in universities. Digital Equipment, or DEC, has been a leading minicomputer maker. Apple was the first popular “personal” computer—that is, a computer designed to be used by one person, rather than shared by many users in a network or a central data-processing site. These machines were also known for a while as “microcomputers,” referring to the “microprocessor” chips that controlled them.

The Apple was not in a strict sense the first personal computer. The Altair 8800, released in February of 1975, was the true pioneer in this field, but it had no keyboard and accepted coded instructions through a bank of on-off levers. Early in 1976, Jobs and Wozniak, both in their early twenties, founded Apple Computer; the following year they released the Apple II, which had a keyboard and could be connected to an external monitor and disk drive, like a modern machine. The appearance of the VisiCalc program heightened demand for the Apple, and by 1980, just before the appearance of the IBM personal computer, Apple was perhaps the most admired small computer company in Silicon Valley, and Jobs was its leader and symbol.


Things were never the same for Apple after the IBM PC appeared. The two companies—huge, cautious IBM and young, fast-growing Apple—started out with similar strategies but ended up in far different circumstances by the middle of the 1980s. Apple hoped that independent software companies would write programs and games that would run on the Apple II computer. The more software the computer could run, the more attractive it would be to purchasers. When IBM entered the market it naturally hoped that its system, which was completely incompatible with Apple’s, would instead become the standard, attracting more support from software companies and more customers. While both companies welcomed software that would run on their respective computers, only Apple was vigorous in using lawsuits and other means to fight off companies that tried to produce imitations of the computer itself. IBM watched passively as an industry of “IBM-compatible” clones grew up.

IBM’s approach succeeded in establishing an industry standard: more than 85 percent of personal computers sold worldwide are based on the IBM design. (Such machines are now referred to simply as “PC-compatible.”) Unfortunately from IBM’s point of view, it makes only about a quarter of these machines. While computers based on the Apple design, mainly the Macintosh, are much less popular overall, Apple Computer makes them all.

Apple has never suffered (and could not have survived) huge, sustained losses like IBM’s; its business history has been one of dramatic ups and downs. The Macintosh has slowly gained market share and respectability, and it clearly will survive as a long-term alternative to PC-compatible machines. But Apple gambled last year with a huge marketing and publicity campaign on behalf of its “Personal Digital Assistant,” called “Newton.” The Newton was supposed to be able to read each user’s handwriting, use artificial intelligence to figure out schedules, and in other ways to make itself indispensable to business users. In practice it did none of these things satisfactorily, and John Sculley, the Apple chairman who had most effusively touted the Newton, resigned soon after its release. (A new model of the Newton, which is supposed to be far more practical, will be released in March.)

Steven Jobs had a central part in Apple’s initial business decisions, and in the early 1980s, when he was not yet thirty years old (he was born in 1955), he prepared the company for a major leap whose effects are still felt today. Most computer companies of the day were trying to make slightly faster, or slightly cheaper, or (in the case of the original Compaq) slightly smaller versions of the IBM PC. Apple, with significant urging by Jobs, tried instead to create an entirely different kind of personal computer.

The first, flawed version of what Jobs had in mind was a machine called the “Lisa,” which Jobs began working on even before the IBM PC was unveiled. Its purpose was to be vastly easier to use than normal computers—less austere and arcane-seeming, “user friendly” before that term became a cliché. Instead of typing in commands from a keyboard—“DIR C:” or “CD D:\DOS”—the person using the Lisa moved a small hand-held mouse to control a pointer on the screen, and clicked the mouse to indicate his intentions. The Lisa, which cost about three times as much as a comparably powered IBM PC, was a business flop, but by 1984 Apple was ready to unveil a much better version of the same idea.

This machine was the Macintosh—“the computer for the rest of us,” according to Apple’s ads. It was small and rounded, while IBM-type machines were big and boxy. The members of its design team were featured in advertisements, whereas the IBM machines seemed themselves to have sprung from machines. Learning to use it was supposed to be much faster than learning the arcane commands for operating an IBM-style PC.

The introduction of the Macintosh ten years ago marked the high point of Jobs’s career. He was the chairman of Apple at the time, holding stock valued at well above $100 million, and was one of only three people under age thirty on Forbes’s first list of the four hundred richest Americans. (The other two were oil heiresses.) During the broadcast of the Super Bowl game in 1984, Apple announced the Macintosh’s arrival with the most famous advertisement in the history of the high-tech industry. Into a dismal workhouse full of inmates with shaven heads, being harangued by the magnified face of Big Brother on an oversized screen, runs a brave young woman athlete who hurls a sledge-hammer at the screen, shattering the image of Big Brother. The symbolism may seem overblown now, but at the time it represented the way Jobs felt about himself, Apple, and the Macintosh. IBM then seemed to be an unstoppable force, and Jobs saw himself and his computer as instruments of liberation and democracy.

The Macintosh endures, but within a year of its introduction Jobs was on his way out of Apple. As the company expanded, Jobs’s disorganized, back-pocket approach to management became a serious problem. In mid-1985 Jobs lost a crucial power struggle against John Sculley, who had come into the company from PepsiCo to apply standard corporate practices to Apple. By the end of the year Jobs had left Apple.

What he has done since then is the subject of Randall Stross’s wonderful book, which combines clear mastery of the relevant technology with great story-telling skill. Stross says that despite being pushed aside at Apple, Jobs had lost little of his self-confidence or his belief that he could again reshape the computer business, as he had helped do with the original Apple and the Macintosh. At the same time, Jobs burned with a sense of injustice that made his desire for another big success all the more intense. The years after he left Apple were exactly the years in which Bill Gates, of Microsoft, replaced Jobs as the young symbol of the computer business and grew far richer than Jobs had ever been. (Depending on the price of Microsoft stock, Gates’s net worth has ranged between $6 and $8 billion in the last two years.) Stross is especially effective in contrasting Jobs and Gates, and in explaining why Jobs considered himself the more ambitious, serious, daring figure.

Gates’s business success has been similar to that of railroad or utility titans early in the century. His company, Microsoft, got its crucial break in 1980, with the contract to provide the “operating system” for the IBM personal computer. At the time, Microsoft’s main business was selling programming languages, like BASIC and Pascal, for computers. According to the best overall history of the personal computer industry, Gates, by Stephen Manes and Paul Andrews (recently released in an updated paperback edition), not even Bill Gates realized how lucrative and important the contract to provide the operating system for the IBM PC would turn out to be.

The operating system allows the machine to accept information from the keyboard, display it on the screen, store it on disk, send instructions to the central processing chip for execution, and so on. By the mid-1980s Microsoft’s operating systems were becoming the dominant ones for PC-compatible machines. Since then the company has in effect collected royalty payments every time an IBM-compatible personal computer is sold. Nearly all of these machines are equipped with Microsoft’s Disk Operating System, “MS-DOS,” and most now come with a copy of its “Windows” graphical environment. The resulting steady flow of income has provided the money with which Microsoft can steadily expand the other programs it offers. Gates, Stross says,

stood by what he called an evolutionary approach, improving existing software incrementally…. When Microsoft introduced a new kind of software program, more often than not, it would be deeply flawed. But successive versions would eliminate the problems, and by dint of steady investment and persistence the program would mature into a well-received product that computer owners could use on the personal computers they already owned. Jobs’s style was antithetical:… Jobs’s customers had to buy new computers and new software and invest considerable time learning how to use both…. Jobs blazes the trail, and Gates comes behind, incorporating Jobs’s revolutionary leap in a more modest fashion, but one which appeals to the millions of computer users who are reluctant to jettison past investments.

The “revolutionary” step Jobs envisioned when he left Apple was a machine he called the NeXT computer, with software known as NeXTSTEP. (The computer business is full of strange orthography; programs have names like dBaseII or AmiPro 3.0 or InfoSelect. By these standards, “NeXT” looks like a normal name. Stross says that the word “next” was a self-indulgent reference, by Jobs, to the next phase of his career.) The machine and its programs would be completely new, designed from scratch to be simpler to use than existing computers.

In one sense Jobs succeeded. In the fall of 1988 he introduced the “NeXT Cube,” a small, elegant box with black-matte finish that contained the working hardware of the computer. It was connected to a monitor almost twice as large as normal computer screens, also with a black-matte finish and with an extremely high-resolution display. The new NeXTSTEP software appeared soon thereafter. Since no existing word processor, spreadsheet, or other program would run on Jobs’s new machine, he had developed a system that would greatly simplify the task of writing programs for the NeXT. This NeXTSTEP software was “object oriented,” which meant that programmers could combine predefined modules to build a new word-processing or data-base program, rather than having to write code for the entire program. (The object-oriented approach is roughly comparable to assembling a stereo system from components, selecting an amplifier, a CD player, speakers, and so on, and plugging them together in the way you want, rather than having to find a complete system that exactly suits your tastes.)
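
Stross’s stereo analogy can be restated as a short sketch in Python. The component classes below are hypothetical stand-ins, not NeXTSTEP’s actual objects; the point is only that a “new” program is assembled by plugging together modules that already exist, rather than being coded from scratch.

    class TextBuffer:
        """A prefabricated component that stores the document's text."""
        def __init__(self):
            self.lines = []
        def append(self, line):
            self.lines.append(line)

    class SpellChecker:
        """A prefabricated component that flags unfamiliar words."""
        def __init__(self, known=("the", "quick", "brown", "fox")):
            self.known = set(known)
        def unknown(self, text):
            return [w for w in text.lower().split() if w not in self.known]

    class WordProcessor:
        """The 'new' program: existing components plugged together."""
        def __init__(self, buffer, checker):
            self.buffer = buffer
            self.checker = checker
        def type_line(self, line):
            self.buffer.append(line)
            return self.checker.unknown(line)

    app = WordProcessor(TextBuffer(), SpellChecker())
    print(app.type_line("the quick brown fax"))   # -> ['fax']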

In a technical sense, Jobs met his objective, but in the real marketplace the NeXT completely failed. Through the end of 1992 NeXT sold a total of 50,000 computers—or, as Stross points out, as many as Apple sells every six days. Early in 1993 the company left the computer-making business altogether, with cumulative losses of some $40 million. It exists now solely to sell its NeXTSTEP software to companies producing programs for IBM-style machines.

What went wrong? Stross spends most of the book answering that question. One principal explanation is that the fundamentals of the market had changed since Jobs introduced the Macintosh. When the Macintosh appeared, fewer than one million Americans owned personal computers (plus a handful outside the United States). By the time of the NeXT, more than 50 million PCs had been sold worldwide, representing a vast investment in the existing standards that Jobs hoped to overturn.

Stross writes that Jobs also misremembered the business history of the Macintosh and therefore made a major mistake with the NeXT. The real business history of the Macintosh, as Stross tells it, involved a struggle to overcome IBM’s three-year lead in selling machines and establishing a standard for personal computers. The Macintosh was completely incompatible with existing computers; no program written for IBM-style computers or for earlier Apples would run on it. To sell the Macintosh, Apple had to convince consumers to abandon their previous investment in machinery and software.

Apple’s management decided that its best hope lay in selling the Macintosh to college students. Few of them already owned computers, so they would not worry about their lost investment in IBM-style machines. The hip image Apple cultivated for the Macintosh would presumably attract young users. And if Apple could convince colleges to adopt the Macintosh as a standard, encouraging or even requiring students to purchase it, the company might quickly attain high-volume sales.

This is exactly what happened. Through painstaking effort and shrewd salesmanship, Apple managed to get a consortium of major universities to embrace the Macintosh, offering it at a discount to students as the “official” university computer. Stross says that the Macintosh might not have succeeded without this coup—and that the university sales strategy could never have succeeded if Apple had not ruthlessly held the price of the Macintosh down. (Typically it sold to students for between $1,000 and $1,500.)

By the time he got to NeXT, Steven Jobs remembered only part of this story, Stross says. He forgot the importance of cost-cutting and remembered only the technical innovations that had gone into the Macintosh. Therefore he indulged every whim for innovating with the NeXT, while costs went through the roof. For instance: Jobs was determined, for aesthetic reasons, that the NeXT Cube should have a perfectly cubical shape, with sharp ninety-degree angles at every corner. Normal computers have a slight taper, to make it easier to remove the computer housing from its mold when it is cast. (For the same reason, muffin tins and cake pans are tapered.) Jobs refused to accept this imperfection; the resulting “zero draft” mold cost an additional $650,000.

Market pressures should presumably have stopped Jobs before his plans for the NeXT became too Neronic, but Stross says that the climate of high-tech finance in the 1980s only made things worse. In a previous business age, entrepreneurs took their companies public largely because they needed to raise capital for expansion. In the high-tech business the Initial Public Offering, or IPO, has become an end in itself, rather than a means toward future business growth. The entrepreneurs who founded the company issue themselves large amounts of stock, which suddenly become valuable when the shares are publicly sold. Although his company had virtually no assets and absolutely no income, Jobs insisted that its valuation at the IPO be set at $30 million.

Most of the usual investors were scared off by this figure. Jobs found only one taker—H. Ross Perot—who becomes a major character in Stross’s book; its portrayal of him will not improve his reputation for level-headedness or consistency. Perot apparently listened to Jobs’s pitch and had an “It’s that simple!” reaction: he wanted to be in on this dream. In announcing that he had become NeXT’s backer, Perot gave a rhapsodic speech at the National Press Club in Washington that was inaccurate on numerous points about Jobs and NeXT. (For example, Perot lauded Jobs as “a young man…so poor he couldn’t afford to go to college.” In fact, Jobs was enrolled at Reed College, a costly private school in Oregon, before he dropped out for lack of interest. “His dad came in [to the garage] one day and said, ‘Steve, either make something you can sell or go get a job.’ Sixty days later, in a wooden box his dad made for him, the first Apple computer was created.” Steve Wozniak is generally thought to have had a larger part than Jobs in building the first Apple.)

Perot eventually lost the $20 million he put into NeXT, and Jobs was tarred with a huge failure. Stross says that during the early years at Apple, Jobs assumed that business success was a natural consequence of his talent and vision. When NeXT failed, Jobs leapt to the conclusion that the whole process of innovation in the computer industry must be at risk. In his eloquent last chapter, Stross says that nothing of the kind is true. The struggle for survival in business has always involved elements of both merit and chance, and (as he demonstrates) many theoretically deserving contenders have been killed off along the way. Jobs will probably try again, and could be luckier next time.

3.

David Sheff’s Game Over, another skillful work of business journalism, concerns a roaring business success rather than a failure. Its subject is Nintendo, a name that is of huge significance to many Americans and is unknown to many others.

Nintendo is the strongest player in the home video-game industry, which like the personal computer industry did not exist fifteen years ago. The first widely noticed video game was a primitive pastime called “Pong,” which was made by the Atari company of California. “Pong” was played on big table-sized machines that started showing up in bars and coffee shops in the mid-1970s, and players tried, by turning knobs, to “bat” an electronic blip back and forth across a “net,” as if they were playing ping-pong. In the early 1980s video-game arcades opened up, with games like “Pac-Man” (in which players tried to gobble up little yellow dots, before themselves being gobbled by blue ghosts) or “Donkey Kong” (in which the player tries to rescue a damsel from a big ape). Some now-vanished home computers, like the Coleco Adam, ran arcade-type games, including “Donkey Kong.” By the mid-1980s home systems designed purely for playing video games were selling briskly, especially those made by the Nintendo company of Japan.

The typical Nintendo setup has two components. One is the game system itself. This is essentially a one-function computer, which the user controls with buttons or “joysticks” and which is connected to a home television screen to display the game. The other component is the game “cartridge,” which houses chips containing the game program and is snapped into the machine. The most basic Nintendo system costs about $80, with individual game cartridges costing between $35 and $70. More advanced models from competing companies called Sega and 3DO cost from two to four times as much as the basic Nintendo.

Hundreds of games are available for home video systems, but three types account for nearly all of them. In “action” or combat games, the kind most hated by anti-violence advocates, the player tries to kick, punch, stab, or shoot an enemy figure. In dexterity games, the player tries to keep a car or motorcycle on a high-speed course, or shoot down incoming missiles before they hit the ground. This category also includes the famous game “Tetris,” designed by Russian mathematicians, in which the player tries to fit geometric forms into a grid as they descend. In fantasy or adventure games, the player seeks some prize or avoids some danger, meanwhile constantly passing through magic doors and entering hidden realms. Nintendo’s most popular game, “Super Mario Brothers,” is of this sort. It features two characters who look like moustachioed janitors. They fly into the heavens (sprouting raccoon tails with which to propel themselves) and dive into the sea, all the while moving from left to right along a constantly scrolling screen.

Few adults can stand to play “action” games, which are repetitive and mindless in addition to whatever coarsening effect they have on players who behead and disembowel their on-screen foes. Although adults are worse than children at dexterity games, many become engrossed by “Tetris” in particular. (I had to remove it from my computer, because it was so hard to avoid playing.) Some of the fantasy games are charming and to a degree valuable for children. In “Super Mario Brothers,” for instance, there is always the possibility that one of the janitors will find a new trap door, which will lead to new scenery with new perils and new rewards. In addition, many of the fantasy games are designed with the same pop-culture genius that went into Disney or Warner Brothers cartoon characters. The music that accompanies the “Super Mario Brothers” game has been performed by a symphony in Tokyo. A Sega game character called Sonic the Hedgehog is probably as well-known to today’s American grade-school children as is Mickey Mouse, and much better known than Popeye.

This is the industry that Nintendo has made, and that has made Nintendo. Early in this century Nintendo was a small, family-owned company, based in Kyoto, that sold packs of playing cards. Twenty-five years ago, Nintendo moved into the toy business, and about fifteen years ago it began producing simple electronic games, for example a shooting-gallery game in which a light beam mounted on a toy rifle would strike a target and set off a buzzer.

By the end of the 1980s, Nintendo was by many measures the most successful Japanese company of all. With its 850 employees, Nintendo earned more than $1 billion in profit per year—as much as the electronics giant Fujitsu, with 50,000 employees. The worldwide market for video games was by the early 1990s slightly larger than the worldwide market for movies, and Nintendo’s share of the worldwide video-game market was 85 to 90 percent. Nintendo earned more profit per year than America’s five largest movie studios combined. By the early 1990s, Nintendo had sold 50 million to 60 million of its machines worldwide. While only half as many American households had Nintendo machines as had VCRs, Sheff points out that the VCRs came from several competing manufacturers, whereas all the Nintendo machines came from one firm.

Sheff combines two main themes in telling the story of Nintendo’s rise. One involves the varied cast of Japanese characters who guided the company to success. Westerners often assume (and sometimes accurately) that faceless committees make Japan go. Sheff, by contrast, devotes much attention to the high executives at Nintendo, their feuds and capacity for teamwork. Three people dominate his story.

Hiroshi Yamauchi, now in his late sixties, is the elder statesman of Nintendo, having inherited control of the company as a young man soon after World War II. He has no interest in technology himself: he has never played a video game, but he became convinced by the 1970s that electronics would create a vast new market for games. Minoru Arakawa, in his late forties, is the business leader of today. The youngest son of an aristocratic Kyoto family, he came to MIT in the early 1970s and became, by Japanese standards, Americanized. He then married Hiroshi Yamauchi’s daughter and led Nintendo’s expansion into the United States. Sigeru Miyamoto,2 now just over forty, is the artistic genius behind Nintendo’s success—a term that does not seem inappropriate considering Nintendo’s worldwide appeal. Sheff says that Miyamoto “had the same talent for video games as the Beatles had for popular music. It is impossible to calculate Miyamoto’s value to Nintendo, and it is not unreasonable to question whether Nintendo would have succeeded without him.”

The other theme Sheff stresses is the interplay among business systems, in particular the steps that were possible for Nintendo to take within the Japanese business system that would have been difficult anywhere else. Many traits are familiar and admirable-sounding—for instance, the Japanese financial system that does not impose pressure for short-term profitability—but the one Sheff emphasizes most is the Japanese system’s tolerance for monopoly.

Nintendo has been so profitable for so long because it has avoided price competition. The programmed cartridges containing games like “Super Mario Brothers” or “Donkey Kong” cost a dollar or two to manufacture but have been sold for years, without discounting, for their $40-and-up list prices. Nintendo was able to maintain these margins, Sheff says, mainly by stifling competition of any sort. The first third of the book explains how Nintendo’s executives created their games and built their business; the rest of the book describes the means, most of which would be illegal in the United States, by which they kept competitors from rising up.

Once Nintendo game machines became popular in the US and Japan, the company did everything possible to control the games that could be played on its machines. Many other companies designed video games, but unless Nintendo licensed them (and received a fee for each cartridge sold) they could not adapt the games for play on Nintendo machines. It was as if Matsushita supplied 90 percent of the VCRs in Japan and America—and only tapes licensed by Matsushita could be viewed. Nintendo maintained artificial shortages, licensing only a small number of new games per year, as a way of maintaining high prices (and license fees). It intimidated merchants in Japan and America who sold cartridges at a discount, cutting off their future supply. One dealer advertised cartridges at a few cents off the standard price. Nintendo quickly brought him back into line by suspending cartridge shipments.

Nintendo was terrified that a game-rental industry would evolve, comparable to today’s huge video-rental industry, and undercut sales. (The fear was that children would rent a game for a few days, for five dollars, and tire of it rather than buying it outright.) It told retailers that if they wanted to keep selling Nintendo cartridges, they could not sell more than one or two copies of a game to a customer. Bulk purchasers might be planning to set up rental outlets. Sheff describes Nintendo’s successful campaign to squash a challenge mounted by Masaya Nakamura, the most influential figure in the Japanese video-game industry, who was not part of Nintendo. Nakamura took the unusual step of filing an anti-monopoly suit in Japanese court, complaining about Nintendo’s practices. If anyone could challenge Nintendo, it was thought to be Nakamura. His company, Namco, had come up with the famous “Pac-Man” game. But Nintendo stonily refused to compromise, and rather than face the threat of losing his Nintendo license, Nakamura capitulated and accepted Nintendo’s terms.

Nintendo’s dominance, Sheff shows, is neither inevitable nor permanent. It was almost aborted before it started. In 1982, the American entertainment firm MCA Universal demanded that Nintendo turn over all revenues from what was at that point its only successful video game, “Donkey Kong.” MCA claimed that the game, in which janitors resembling the Mario Brothers rescue a woman from an ape, infringed the copyright to King Kong. (The odd name of this game, which was designed by Sigeru Miyamoto, resulted from Miyamoto’s search through an English dictionary. “Kong” implied ape; “donkey” was meant to suggest the stubborn wiliness of the beast.) For complicated reasons, MCA decided at the last moment not to sue, and Nintendo survived.

Now the company faces other threats. Starting in 1990, the Congress and federal regulatory agencies have scrutinized Nintendo for possible anti-trust violations. Nintendo responded by making it somewhat easier for other companies to produce games for its machines. The worldwide market for video games has stopped growing, at least temporarily, and Nintendo’s total sales fell by about 10 percent last year. Another Japanese game company, Sega, has been gaining market share against Nintendo with higher-priced but more sophisticated game machines. Sega’s games, moreover, tend to be more violent. Nintendo is holding on—last year, its pretax annual profit exceeded $1 billion—but the competition with Sega is bound to become more and more intense.

If Nintendo had been an American company, it could never have grown so large or rich. The reason does not involve differences in savings rates or work habits between the two countries. Rather, within the US legal structure the threat of anti-trust action would have hung over everything the company did. Indeed, the underlying fear of anti-trust enforcement, rather than an actual suit, is part of the background of the most famous single episode in the history of the American personal computer industry.

4.

This moment occurred in 1980, when IBM was preparing for the launch of its first personal computer. At the time IBM enjoyed clear leadership in the world market for large computers, a position it had maintained since the dawn of the computer age after World War II. Its most important business decision during this period had been the introduction of its System 360 computer in 1964. The System 360, which had required enormous capital investment by IBM, offered much more computing power for the price than competing machines from companies like Burroughs and UNIVAC. “Instead of raising prices, in traditional monopolist style,” Charles Ferguson and Charles Morris wrote in their book Computer Wars:

IBM typically forced widespread price-cutting through the industry, always following up its initial offerings with a steady stream of new technology breakthroughs…. IBM’s leadership was based not on controlling a technology but on exploiting it better than anyone else.

By 1980, IBM decided it could no longer ignore the personal computer market whose potential Apple and other companies were beginning to demonstrate. IBM’s personal-systems division, based in Boca Raton, Florida, rushed to put together a system that could be based on existing components and technology, instead of requiring complete re-engineering as the mammoth System 360 had. Among the other components it wanted to acquire was an operating system for the new machine.

The most likely candidate to produce IBM’s operating system was a small company called Digital Research, whose operating system, “CP/M,” was the market leader for personal computers. But IBM could not come to an agreement with Gary Kildall, the engineer who was head of Digital. The usual explanation for this failure is that Kildall missed his golden opportunity by choosing that day to go flying in his private plane. But as Manes and Andrews point out in Gates, Kildall’s wife, Dorothy McEwen, normally handled business negotiations for Digital Research. She was there to meet with the IBM representatives—and to reject the terms they offered as being too one-sided in IBM’s favor. (Her main objection was to the “nondisclosure” agreement required by IBM, which would, as she saw it, have allowed IBM to hear all about Digital Research’s products and plans and then go out and duplicate them on its own.)

Microsoft saw more advantages to accepting IBM’s terms. Kildall’s company had only one main product, its operating system. According to Manes and Andrews, Digital Research feared that if it sold the rights to the program for the flat fee IBM was offering, it might end up with no future business base. Microsoft, by contrast, looked on the operating-system contract as a vehicle that would allow it to sell its real products—the programming languages. So late in 1980 IBM signed the agreement that would eventually help Gates and his partners Paul Allen and Steve Ballmer become billionaires.

Every history of either IBM or Microsoft describes the moment when IBM went looking for Kildall and ended up with Gates. Manes and Andrews describe IBM’s attitude toward Microsoft as an indication of how small a role the new personal-computer division seemed to play in IBM’s plans. IBM’s main goal was to get a machine on the market in a hurry. In order to do so, IBM decided to buy existing components from a number of outside suppliers, rather than go through the painstaking process of developing a new system in-house. This approach extended even to the operating system. IBM code writers could of course have turned it out, but going through their official channels might have taken several years. Gates’s Microsoft, however, could supply one almost immediately. (Microsoft, in turn, bought the rights to what was called the “Quick and Dirty Operating System” or QDOS, from the small firm Seattle Computer. Microsoft first paid $25,000 for non-exclusive rights—a sign, as Manes and Andrews stress, that Gates did not realize how valuable DOS would become—and then paid another $50,000 for exclusive rights. In 1986 Microsoft paid Seattle Computer nearly $1 million to settle a dispute over rights to DOS.)

After this start, Microsoft shrewdly eliminated rivals in the operating-system business, especially the later versions of Gary Kildall’s CP/M, and established DOS and Windows as near-requirements for PC-compatible computers. While Microsoft’s market control was solidifying, IBM’s was eroding—especially because it waited for years to challenge the “clone makers” who were trying to push it out of the personal-computer business. IBM could theoretically have discouraged the clone-makers through legal challenges, like those Apple used to keep companies from building imitation Apples. (Apple won a famous copyright-infringement case against Franklin Computer in 1983 that effectively ended the “Apple-compatible” industry.) IBM could presumably have won a price war against any smaller company, using the earnings from its big-computer divisions to subsidize cut-rate PCs. It did neither until it was far too late to recover its past dominance of the industry.

Why was IBM so passive? Manes and Andrews emphasize the position of the personal-computer division within IBM as a whole. For years the company underestimated the potential of the PC business; moreover, it was afraid of making the small machines too powerful or effective, since they might then undercut its profitable large computer business. In their book Computer Wars, Charles Ferguson and Charles Morris say another factor made IBM hesitant to defend its interests in the PC business. This was the after-effect of a long anti-trust battle with the federal government.

In 1969, the US Justice Department launched a wide-ranging anti-trust action against the company, because of its dominance of the computer industry. Litigation dragged out through much of the following decade. (Ferguson and Morris say that IBM’s chief expert witness in the case, Professor Frank Fisher of MIT, named the yacht he bought with his fees The Section 3, after the relevant part of the anti-trust statute.) “Many of IBM’s actions in the 1970s and 1980s, particularly its supine attitude toward small suppliers of PC components and software, can be explained as reflexes ingrained by a decade in the courtroom’s harsh glare,” Ferguson and Morris write in Computer Wars.

The suit may not have been the reason IBM went to Bill Gates for an operating system. (Manes and Andrews say there is no evidence that anti-trust fears had any effect whatsoever on this decision. The paramount reason, they say, was IBM’s knowledge that it could get an operating system sooner if it bought it from some other firm.) Nonetheless Ferguson and Morris argue, as do several other accounts of IBM, that the company’s decisions from the 1960s through the 1980s were colored by the fear of anti-trust. Ferguson and Morris say:

There is no exonerating IBM executives for their company’s sudden decline; but it is only fair to point out that for some thirty years IBM’s business was carried on in the face of official hostility on the part of the governments of almost all industrial countries, most particularly those of Japan and the United States itself.

IBM did indeed have another headache throughout the period of its dealings with Microsoft, which was the Japanese government’s determination to build a Japanese mainframe computer industry on a par with IBM’s. Marie Anchordoguy of the University of Washington, in her analysis of this project in Computers Inc., also emphasized the aftereffects of anti-trust.3 At just the moment when its Japanese competitors were becoming more collusive and concentrated, IBM was under legal pressure to allow all its competitors, foreign and domestic, more room.

The point of recalling these background factors is that they go a long way toward explaining Microsoft’s current mastery and IBM’s current predicament—and they are virtually absent from Paul Carroll’s supposed history of the company’s troubles, Big Blues.

Carroll is a reporter for the Wall Street Journal, and his approach to the IBM story is similar to the approach his former Wall Street Journal colleague Bryan Burrough took to the RJR-Nabisco buyout in Barbarians at the Gate. Burrough and his co-author, John Helyar, portrayed the excesses of the leveraged-buyout era by concentrating on Henry Kravis, Ross Johnson, and the handful of other financiers making the deals. In their case, the approach was wise, since the people they portrayed really made the crucial decisions. Carroll tries the same thing with backstage anecdotes about IBM’s executives. Nearly all of them he shows to be smug, timid, short-sighted, and out of their depth. The book presents a simple morality play, in which the out-of-touch old men of the East, at IBM’s headquarters in Armonk, get their comeuppance from younger, smarter, faster, funnier competitors in Seattle and the Silicon Valley.

Even if we accept that every damaging anecdote Carroll has collected is true, the anecdotes are not enough to explain what happened to IBM. Most other accounts emphasize a tangle of problems. These include bureaucratic stodginess, to be sure. But at least as important a factor was the long-term shift in technology that allowed small, cheap computers to perform tasks that mainframes used to do. The mainframe market, in which IBM was strongest, shrank most dramatically, while the small-computer market, in which IBM was still learning its way, boomed. IBM was also affected by foreign strategies aimed specifically at copying its technology, especially by Japan; and by the lasting effects of the anti-trust suit.

In Carroll’s version, these other factors don’t matter; bureaucratic stodginess is enough. The index to his book contains thirty separate entries under the heading, “IBM: bureaucratic failings of,” but not a single reference to “anti-trust problems of” or “Justice Department actions against.” (He makes passing reference to the suit in his text.) The book is riddled with embarrassing small errors that cumulatively undermine its authority.4

More than merely imprecise, the book is actively biased: nearly everything IBM does is stupid, nearly everything Microsoft does is cool. On the basis of this book it would be very hard to understand why IBM had ever succeeded at anything it had done. Carroll, to cite one of many examples, says that in 1990 Gates visited IBM’s headquarters in Armonk and saw that on many secretaries’ desks were machines comparable to the original IBM PC XT, by then seven years old. Carroll quotes with obvious approval Gates’s contemptuous reaction: “‘Jesus,’ Gates said, ‘this tells me more about IBM than anything I’ve ever seen.’”

What this episode tells Gates, of course, is that IBM is hopelessly behind the times. Yet someone with a different axe to grind could use the same anecdote to make exactly the opposite point. For purely secretarial purposes the “antique” XTs would have been adequate if not sexy. They could run word-processing programs and send electronic mail. By holding onto them, a company might show its determination to spend money only where it mattered. There is a lot to be told about IBM, but not from the computers on these desks.

Bill Gates and Microsoft may soon have more sympathy for the legal and institutional pressures that weighed for so long on IBM. Two years ago, the Federal Trade Commission began investigating Microsoft for excessive market power. When that investigation concluded last year, with no finding against Microsoft, the anti-trust division of the Justice Department began an investigation of its own.

There are two main complaints about the way Microsoft uses its power. One is that its dominance of the market for operating systems gives it an unfair advantage in selling other products. For instance: WordPerfect, Lotus, and other companies sell word-processing programs that must be compatible with Microsoft’s operating system, DOS, and its popular “Windows” software. If other companies’ programs are not compatible with DOS or Windows, the programs won’t run at all on most computers, and if the programs don’t mesh well with the commands used by DOS and Windows, they will not run as fast as they could. In addition to producing operating systems, Microsoft also makes its own “application” programs—word processors, spreadsheets, and so on—that compete head to head against those from WordPerfect, Lotus, and other companies. The other companies allege that Microsoft enjoys an inherently unfair advantage, comparable to “insider trading,” since the experts who produce its “application” programs may have prior knowledge of changes in the operating systems. If that were true, Microsoft programs would always run better than programs any other company could write. (Microsoft initially maintained that it carefully separated its employees writing operating systems from those developing application programs. It no longer makes that claim; instead, it says that any contact between the groups has no significant competitive effect.)

The second main complaint involves Microsoft’s pricing policy. It gives discounts to computer makers if they pay for one copy of DOS for every computer they sell. Its operating-system rivals (mainly Novell) claim that this policy gives computer-makers no incentive even to consider alternatives to Microsoft products.

These days the computer press is full of the same sorts of grievances about Microsoft that IBM’s competitors lodged against Big Blue two decades ago: that it is arrogant and out of touch with its customers, that its leaders are becoming smug, that it rolls over its competitors with advertisements and PR.

Microsoft’s most vociferous defenders contend that the company has earned its position through technical excellence alone. Moreover, they argue that there are strong natural tendencies toward dominance, even quasi-monopoly, in high tech industries. According to this reasoning, some company, somewhere, is likely to become strong enough to set standards for the others and enjoy extra profits. Therefore it is better, from an American economic perspective, if such a company is based in Redmond or Armonk rather than in Kyoto or Tokyo. In Computer Wars Charles Ferguson and Charles Morris say the Justice Department’s scrutiny of Microsoft makes sense in theory. They add, clearly thinking of the IBM case:

On the basis of history, however, the Justice Department will harass Microsoft for the next decade without ever reaching a conclusion in the case, to the detriment of everyone except lawyers.

Ferguson and Morris suggest replacing court cases with faster, less legalistic remedies—for instance, arbitration panels with expert fact-finders. Another alternative is to relax anti-trust laws in general so that Microsoft rivals such as Lotus, Borland, WordPerfect, Novell, and even IBM can collaborate to compete with it. The Clinton administration, although dense with lawyers, is committed to “grow” the high-tech economy. It can help to do so with computers by keeping the lawyers away.

This Issue

March 24, 1994