The book, while surprisingly brief, can be a bit dense for readers with a limited tolerance for acronyms. Even so, the authors do a good job of describing the many facets of this complicated problem. They walk us through the economic downturns of the past 50 years, pointing out technology’s increasing involvement in each. The authors also explain the conspicuous absence of dialog about the role of technology’s advance in the sluggish job growth of today’s economy.
Things get interesting when they begin to drive home the point that we have less time to solve this problem than we think, due to the exponential rate at which computing power is advancing. Everyone who has seen a science fiction movie like “The Matrix” or “The Terminator” has considered the possibility (however briefly) that people and computers may one day be in direct competition. Most people dismiss the thought, assuming such problems belong to a distant future – preferably one long after we’re gone. In Chapter 2 the authors, using Moore’s Law as their foundation, dispel this belief. For those unfamiliar, Moore’s Law states that computing power doubles every 12 to 18 months, and since it was first articulated in 1965 it has held remarkably true. This means that the capability of computers to perform complex tasks improves exponentially. Put differently, the best processor of 1966 would be roughly twice as capable as the best processor of 1965. Since the computing power available in 1965 was small, doubling it in 1966 didn’t mean much – the 1960s were still the era of punch cards and room-sized computers. As time progresses, however, each incremental doubling becomes staggering. Doubling the performance of a complex and powerful 2011 processor, for example, yields a 2012 processor that is profoundly more capable. This has a major impact on what a computer can do.
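The arithmetic behind this point is easy to check for yourself. As a rough sketch (assuming, for illustration, one doubling every 18 months – the slower end of the range the book cites), a few lines of Python show how repeated doubling compounds:

```python
# Illustrative sketch only: relative computing power under Moore's Law,
# assuming a doubling period of 18 months (1.5 years). The doubling
# period is an assumption chosen for this example, not a figure from
# the book beyond its stated 12-to-18-month range.
def relative_power(years, doubling_period_years=1.5):
    """Computing power relative to a year-0 baseline after `years` years."""
    return 2 ** (years / doubling_period_years)

# After one doubling period, capability has merely doubled...
print(relative_power(1.5))          # 2.0
print(relative_power(3.0))          # 4.0
# ...but over the 46 years from 1965 to 2011, the cumulative factor
# runs into the billions, which is why late doublings feel staggering
# while early ones barely registered.
print(round(relative_power(46)))
```

The takeaway matches the authors' point: the same "one doubling" rule that added little in 1966 adds an enormous absolute jump in capability by 2011.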
What does all this mean? It means that human tasks we thought a computer wouldn’t be able to do for decades, like driving a car, are quickly coming within its grasp. Google was able to develop a fully autonomous vehicle within six years. The authors point to IBM’s Watson supercomputer, which soundly beat human grand champions on the game show Jeopardy! in 2011, as another example of computers being able to comprehend and reason in ways previously thought impossible.
Computers, as Brynjolfsson and McAfee point out, are a General Purpose Technology (GPT). GPTs are important because not only do they themselves improve over time, but they also cause other (previously unrelated) industries to improve as well. Think of it this way – a farmer in 1770 using a horse and buggy could bring his crop to market only so fast and so far. That same farmer in 1870, using a steam-engine powered locomotive, could both bring his crops to market faster and to markets further away. The steam engine improved not only the transportation industry, but the farming and food industry as well.
Just like the steam engine, computing technology is disrupting many industries, but there’s an important difference. Historically, when a new GPT enters the scene, those whose employment depends on the previous way of doing things lose their jobs. This downside is usually remedied by the economic growth resulting from the new technology: those put out of work are retrained and eventually absorbed into the new economy the GPT creates.
What makes the computer different from other GPTs, according to the authors, is that it is disrupting many different industries at an unprecedented rate. It is, in turn, displacing workers faster than the new digital economy can create opportunities for those who lose their jobs. And as the capability of new technologies increases exponentially (Moore’s Law – remember?), more workers will be displaced. This leads to the position the US economy increasingly finds itself in: economic recovery with little to no job growth.
So how do we create an economy with institutions that can allow for the rapid growth of technology and innovation without leaving large segments of the workforce behind? This is the core question posed by Race Against the Machine.
The answers offered by Brynjolfsson and McAfee, after such a solid definition of the problem, are a bit unsatisfying. By and large their recommendations are things we’ve heard before. The authors suggest more entrepreneurship – with a particular focus on small businesses that address niche markets made viable by new technology. They also recommend increased training for people put out of work, and increased investment in infrastructure and education. The fact that Brynjolfsson and McAfee don’t describe in detail how these recommendations will address the problem of worker displacement points to the complexity of the issue.
Erik Brynjolfsson and Andrew McAfee, both graduates of MIT and members of the faculty at the Sloan School of Management, are far from being members of the Tinfoil Hat Society. In fact, they make a point of being very pro-innovation and pro-technology throughout the book. And it is because of this that their message should be listened to. Their book makes the point that the digital revolution is profoundly changing both the economy and society at a rate never seen before. Race Against the Machine underscores the need to begin a discussion of the potential fallout of these changes. If we don’t, our world may wind up in a predicament that technology won’t be able to help us out of.
Have you read this book?
Please share your thoughts in the comments…