
In the final paragraph, the author writes, “This was an only-in-America story of glorious accomplishments and unfinished business, made possible by the broader political and economic currents that shaped more than half a century of history” (p. 411).
First, obviously, there is some hyperbole in this statement, though few would doubt that our everyday lives would look much different without personal computers, digitized word processing, the Internet, social media, mobile phones, or online shopping. This goes just as much beyond the “remaking of America” as did the invention of the wheel, printing, electricity, or personalized mass transportation. One does not necessarily need to connect this to certain well-known company names, such as Apple, Microsoft, Meta (Facebook), Alphabet (Google), or Amazon, because these companies did not normally invent the things at which they have become so immensely successful (and super rich). Rather, they built on the optimization of preexisting inventions (does anyone remember Netscape or WordPerfect?), backed by venture capitalists and their managerial expertise (after all, the founders of the above companies were quite young). And the next “big thing” is already making strong waves, both in terms of enthusiastic anticipation and regarding worries about what will happen to humankind: Artificial Intelligence (AI).
Second, O’Mara could have approached her theme from an academic-analytical perspective by systematically outlining the “broader political and economic currents” and then locating the actors in this context, showing the interaction of context and action during the various periods that she covers. However, though there certainly are references to those context variables, her basic approach is rather a historical narrative that puts the actors first. Thus, there is a lot of name-dropping in the book, and she seems to aim for a writing style that she might have thought of as “cool,” but which sometimes got on my nerves. The book can therefore be read as a kind of series of newspaper feature stories covering roughly the past fifty years of tech development in Silicon Valley and at Stanford University, but also in Seattle.
To put her narrative in a proper frame, O’Mara notes on p. 20:
It began with the bomb. To scientists and politicians alike, the technological mobilization of World War II—and its awesome and ominous centerpiece, the Manhattan Project [located at Los Alamos and led by J. Robert Oppenheimer; it produced the atomic bombs dropped on Hiroshima and Nagasaki, MN]—showed how much the United States could accomplish with massive government investment in high technology and in the men who made it. … [the project] also catalyzed development of sophisticated electronic communications networks and the first all-digital computer—technologies that undergirded the information age to come.
Thus, the context of what was to happen later was a trinity: science—government money [be it from the military or NASA, MN]—scientists. Yet, this trinity occurred in a particular geopolitical situation, the Cold War, and the competition with the Soviet Union. The electronics industry became ever more important, especially after President Eisenhower and Secretary of State John Foster Dulles moved in 1953 from a conception of warfare based on ground troops and conventional arms to one based on advanced electronics.
In 1939, two students at Stanford University [which was to become a center for electrical engineering and computer science], Bill Hewlett and Dave Packard [the latter later became an influential Republican], supported by their mentor, Fred Terman, had founded a company in what was then merely called the Santa Clara Valley. By the mid-1950s, semiconductors were still in an early stage of development. William Shockley, one of the scientists who had invented the transistor and who established Shockley Semiconductor Laboratory, was instrumental in this, but some members of his staff thought that he was going about it in the wrong way. Those members left his company and established Fairchild Semiconductor. They included Robert Noyce, Jay Last, and Gordon Moore (“Moore’s Law”), among others. “Modern Silicon Valley” (p. 41) started with this company. Noyce and Moore moved on to found Intel, together with Andy Grove, who had come to the US as a young refugee from Hungary.
In “early 1959, Jean Hoerni discovered a way to place multiple transistors on a single silicon wafer,” allowing “Bob Noyce to experiment with linking the transistors together, creating an integrated circuit … more powerful than any device before it” (p. 50). In fact, at Texas Instruments, Jack Kilby had hit upon the same idea almost simultaneously; however, he used germanium instead of silicon. Noyce’s patent was granted in 1961. Eventually, silicon gained the upper hand because it was easier to use in the production of semiconductors (or “chips”).
In the early 1970s, chips were devised that went beyond their initial memory function. They became “programmable” (p. 102, italics in the original). Chips of this kind were called “microprocessors.” First on the market was Intel with its 4004 in 1971, followed by the 8008 and, in 1974, the 8080, each of them much more powerful than its predecessor. Jerry Sanders of Advanced Micro Devices (AMD) gets several mentions. The appearance of microprocessors made computers ever smaller, a far cry from the earlier huge IBM-manufactured mainframe computers that dominated the market [at that time, input was still by punch cards, pointing to considerations regarding the human-computer interface, MN]. They also made computers more affordable, leading to “microcomputers” and to a widely used precursor of the “cloud,” namely time-sharing among users: developers did not have to own a computer to do their work; they could simply rent computation time on such a machine.
By 1975, Intel already had 3,200 employees, while Steve Wozniak and Steve Jobs were still garage-based computer nerds. In 1977, the Apple II appeared on the scene. A little earlier, William Gates and Paul Allen had met as schoolboys using the computer facilities at the University of Washington. In 1973, Gates moved to Harvard University for his freshman year; Allen also moved to Boston, to work at Honeywell. Perhaps one should keep in mind that, at this point in the computer’s history, these devices were still new, and their use was limited to very small circles of people. The idea that such exotic machines could at some time in the future become mass consumer products—be it as a personal computer, laptop, tablet, or mobile phone—that most people could not live without in managing their daily lives did not yet exist as a realistic expectation.
That this nevertheless happened was not due to hardware alone. Rather, the “ultimate disruptor” (p. 239) was the software that enabled computers to fulfil a multitude of tasks. Most importantly, Microsoft’s MS-DOS operating system was not tied to a specific brand of hardware (unlike the approach chosen by Apple). Rather, it could be used on all brands. Consequently, it “rapidly became the industry standard” (ibid.; readers should keep in mind that all these developments required capital, patents, competition, and business practices that some would call “hard-nosed,” if not something worse than that). Thereby, the IBM PC (personal computer) became ubiquitous in the form of “clones.” Yet, these clones were still mostly isolated machines; they were not connected to each other. The most important phenomenon of interconnected computer networks became known as the Internet, on top of which the World Wide Web was later built. However, it did not just drop from the sky as a finished product. As in most other cases mentioned here, it was based on a process of improvement and changing context conditions that led to a new tool. O’Mara writes, “It was more than thirty years old by the start of the 1990s, and it still had the academic and proudly noncommercial spirit it started with in 1969” (p. 287). For many years, access to the Internet depended on dial-up connections via modems and telephone lines. Bandwidth was very limited. In the early 1990s, “97 percent of Americans had no connection to the Internet” (p. 301).
Yet, the content stored on computers connected via the Internet increased considerably over the years, and there needed to be a way of knowing what content existed and where users could find it. In 1993, the web browser Mosaic was introduced. It “turned the Internet into an immersive, colorful, point-and-click experience” (p. 305). Its chief developer was Marc Andreessen, who, helped by venture capitalist John Doerr, co-founded the company that was soon renamed Netscape. Later, Yahoo! appeared on the scene. These companies needed advertising to make money. Still, the Internet was largely about searching for information, not about using it for business transactions, which, obviously, would also require the development of secure online payment systems. Jeff Bezos’ Amazon came into being as an online bookseller in 1995, again with John Doerr’s help. This connection of ideas to capital and management expertise was also vital for Sergey Brin and Larry Page, the two co-founders of Google, who, while still PhD students at Stanford University, improved the method of searching for information to such an extent that the competition soon disappeared. Interestingly, they were among the first to move into the William H. Gates Computer Science Building that Bill Gates had donated to the university. In the year 2000, Google’s number of employees was just 60. In building its advertising business, Google followed an approach used by Overture, meaning that one would not see advertisement banners on one’s screen. Rather, each search generated information for Google and its advertising customers, enabling them to better target their advertisements. Google grew by leaps and bounds, and in 2004, the company went public, making all involved (not least the early investors who had received shares in return for their money) immensely rich. O’Mara quotes one coder, who had remarked, “You’re not the customer. You’re the commodity” (p. 361f.).
The “next big thing” occurred in the same year that Google had its IPO: Facebook, founded by Mark Zuckerberg while he was still studying at Harvard University. The author speculates:
Facebook and other social networks also filled a cultural void created by a half century of political liberation and economic dislocation, the vanishing of the bowling leagues and church picnics and union meetings that had glued together midcentury America in conformity and community. Social media became a more cosmopolitan town square, one that crossed national borders, launched new voices, and created moments of joyous connection that could morph into real-life friendships. (p. 372)
[To me, Facebook was useful for several years after Thailand’s military coup in 2014. After a while, however, the share of advertisements had increased unacceptably, the feed was hardly relevant, and there was very little social communication among active members, who were but a tiny fraction of those listed as my “friends.” No Facebook “friendship” ever evolved into a “real-life friendship,” nor was the experience “cosmopolitan.” Thus, in the end, I deleted my account entirely. MN]
Of course, both Facebook and Twitter are mentioned as potential tools for political mobilization, from the Arab Spring and Occupy Wall Street to Black Lives Matter and Barack Obama’s presidential election campaign.
O’Mara’s book breaks off in 2018. She tries to end it with a perspective that starkly contrasts with the big money-making machine that high tech has become. People in the field of computer science, she suggests, are nowadays not merely in it for career and money, but rather look for work-life balance and a personal vision of what they would like to do with their lives. Yet, this is merely a feel-good paragraph without analytical substantiation. Readers should keep in mind that this book is a historical narrative about a certain valley in California and the question of what went on there that irreversibly changed the way people go about their economic, social, and political lives in this world, and not merely in that valley. It is not a social science analysis of the origins, effects, problems, solutions, and future of high tech, be it self-driving cars or AI.