Book Review: “Race Against the Machine” by Erik Brynjolfsson and Andrew McAfee


The book, while surprisingly brief, can be a bit dense for those who have a limited tolerance for acronyms. Despite this, the authors do a good job of describing the many facets of this complicated problem. They walk us through the economic downturns of the past 50 years, pointing out technology’s increasing involvement in each. The authors also explain the conspicuous absence of dialog about the role technology’s advance plays in the sluggish job growth of today’s economy.

Things get interesting when they begin to drive home the point that we have less time to solve this problem than we think, due to the exponential rate at which computing power is advancing. Everyone who has seen a science fiction movie like “The Matrix” or “The Terminator” has considered the possibility (however briefly) that people and computers may one day be in direct competition. Most people tend to dismiss those thoughts with the idea that such problems belong to a distant future – preferably long after we’re gone. In Chapter 2 the authors, using Moore’s Law as their foundation, dispel this belief. For those unfamiliar, Moore’s Law states that computing power doubles every 12 to 18 months. Since it was first proposed in 1965, Moore’s Law has held remarkably well. This means that the capability of computers to perform complex tasks improves exponentially. Put differently, the best processor of 1966 (one doubling after the start of Moore’s Law) would be roughly twice as capable as the best processor of 1965. Since the amount of computing power available in 1965 was small, doubling it in 1966 didn’t mean much. Remember – the 1960s were still the era of punch cards and room-sized computers. As time progresses, however, the incremental increase in computing power becomes staggering. The doubling in performance of an already complex and powerful 2011 processor, for example, means that the best processor of 2012 is profoundly more capable. This has a major impact on what a computer can do.
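To make the arithmetic of compounding concrete, here is a minimal Python sketch. It is my illustration, not from the book, and it assumes a steady 18-month doubling period with 1965 set to a baseline of 1; the specific numbers are illustrative only, but they show why later doublings add vastly more absolute capability than earlier ones.

```python
# Illustrative sketch (not from the book): relative computing capability under
# an assumed 18-month doubling period, with 1965 set to a baseline of 1.

def capability(years_elapsed, doubling_period_years=1.5):
    """Relative capability after years_elapsed years of steady doubling."""
    return 2 ** (years_elapsed / doubling_period_years)

previous = capability(0)
for year in (1966, 1975, 1985, 1995, 2005, 2011, 2012):
    current = capability(year - 1965)
    gain = current - previous
    print(f"{year}: ~{current:,.0f}x the 1965 baseline (+{gain:,.0f} since the previous row)")
    previous = current
```

Under these assumed numbers, the single-year jump from 2011 to 2012 adds more capability than everything accumulated between 1965 and 2005 – which is exactly the authors’ point about why late-stage doublings feel so different from early ones.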

What does all this mean? It means that human tasks we thought a computer wouldn’t be able to do for decades, like driving a car, are quickly coming within its grasp. Google was able to develop a fully autonomous vehicle within six years. The authors point to IBM’s Watson supercomputer, which soundly beat human grand champions on the game show Jeopardy! in 2011, as another example of computers being able to comprehend and reason in ways previously thought impossible.

Computers, as Brynjolfsson and McAfee point out, are a General Purpose Technology (GPT). GPTs are important because they not only improve over time themselves, but also cause other (previously unrelated) industries to improve as well. Think of it this way: a farmer in 1770 using a horse and buggy could bring his crop to market only so fast and so far. That same farmer in 1870, using a steam-powered locomotive, could bring his crops to market faster and to markets farther away. The steam engine improved not only the transportation industry, but the farming and food industries as well.

Just like the steam engine, computing technology is disrupting many industries, but there’s an important difference. Historically, when a new GPT enters the scene, those whose employment depends on the previous way of doing things lose their jobs. This downside is usually remedied by the economic growth resulting from the new technology: those put out of work are retrained and eventually put to work in the new economy the GPT creates.

What makes the computer different from other GPTs, according to the authors, is that it is disrupting many different industries at an unprecedented rate. It is, in turn, displacing workers faster than the new digital economy can create opportunities for those who lose their jobs. And as the capability of new technologies increases exponentially (Moore’s Law – remember?), more workers will be displaced. This leads to the position the US economy increasingly finds itself in: economic recovery with little to no job growth.

So how do we create an economy with institutions that can allow for the rapid growth of technology and innovation without leaving large segments of the workforce behind? This is the core question posed by Race Against the Machine.

The answers offered by Brynjolfsson and McAfee, after such a solid definition of the problem, are a bit unsatisfying. By and large, their recommendations are things we’ve heard before. The authors suggest more entrepreneurship, with a particular focus on small businesses that address niche markets made viable by new technology. They also recommend increased training for people put out of work, and increased investment in infrastructure and education. The fact that Brynjolfsson and McAfee don’t describe in detail how these recommendations would address the problem of worker displacement points to the complexity of the issue.

Erik Brynjolfsson and Andrew McAfee, both graduates of MIT and members of the faculty at the Sloan School of Management, are far from being members of the Tinfoil Hat Society. In fact, they make a point of being emphatically pro-innovation and pro-technology throughout the book, and that is precisely why their message deserves to be heard. Their book makes the point that the digital revolution is profoundly changing both the economy and society at a rate never seen before. Race Against the Machine underscores the need to begin a discussion of the potential fallout of these changes. If we don’t, our world may wind up in a predicament that technology won’t be able to help us out of.

Have you read this book?
Please share your thoughts in the comments…

Step Into a World…


Imagine we lived in a world that had never seen, heard of, or experienced alcohol. Ever. Then imagine that something or someone enters the picture and introduces this world to alcohol.

People slowly begin to try it, and they love it. It tastes great. It makes people feel good. It makes parties more fun.

As humanity learns how to make it, a huge industry is created. People find jobs. Economies grow. Everyone’s happy. 

With all this success, alcohol is viewed as the new wonder substance. It seems to make everything better. The success stories are all over the news and in every business magazine. Alcohol is the future. 

Since everything is going so well, humanity starts to use alcohol for EVERYTHING. To power cars, as a cleaning agent, to help school children with ADD, as a laxative, to help babies sleep – everything. The success stories continue. 

As the world economy becomes more dependent on the growing alcohol industry, stockholders push for more growth. This means finding new uses for alcohol. Pregnant women are told to drink it as a prenatal supplement. Doctors use it to treat heart attacks and cancer. 

At this point, serious problems start coming to the surface. Babies start being born with deformities. Alcoholism amongst 3rd graders becomes a growing problem. Forty percent of the drivers on the road are drunk, so traffic fatalities go through the roof. 

People start to protest. Some people conclude that alcohol is evil and we should eliminate it altogether. Other people say it’s a fundamental part of life and a pillar of the economy; to eliminate it would be insane. In fact, we haven’t even scratched the surface of what alcohol can do.

Who’s right? 

In the story above, there’s nothing wrong with alcohol. It IS a great social lubricant. It is part of the economy. And it does have many other important uses. 

But alcohol SHOULD NOT be given to children or pregnant mothers. Or to people who are, or will be, driving. That is the WRONG way to use alcohol.

With the introduction of alcohol to this imaginary world, the people learned that, for some things, alcohol used correctly is great.

The people also needed to learn that there are some circumstances where alcohol should NEVER be used. It hurts not only the user, but the people around him or her.

At this point, I would argue that our society is still in the first part of the previous story. The prevailing attitude is that more technology is always better, and that it should be used everywhere and for everything. We know all the upsides associated with new technologies, while the potential downsides are either not investigated or dismissed outright. The primary barometer of whether something is a good or bad use of technology is its financial success. If a new device or service is profitable or has a lot of users, then it is deemed an innovation, irrespective of its larger impact on society.

So – at what point will we reach the second part of the story?
In what areas should technology NOT be used?
How should we judge whether a new technology is truly an innovation or not? 

Please share your thoughts in the comments…