Can Anything Save Us from Unintended Consequences?

Stuart Vyse

Quick quiz: What caused the Great Recession of 2008?

OK, I admit it. That’s an unfair question. It was complicated. If you’ve done some homework, you might be able to mumble something about a housing bubble, subprime lending, mortgage-backed securities, credit-default swaps, big banks, irresponsible borrowers, George W. Bush, Barack Obama….

But there is one thing I can be fairly certain you will not mention: the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (BAPCPA). You may not have heard of BAPCPA or, if you have, it may only be a foggy memory. So, here is a little history:

In the 1990s, personal bankruptcies were rising sharply, and the banking industry began a lobbying campaign to stiffen the bankruptcy requirements—a move that was expected to increase the profits of credit card companies. Attempts to pass a bill failed until President George W. Bush was reelected in 2004. Finally, after big banks spent $40 million in campaign contributions and millions more in lobbying efforts, the bill went into effect in late 2005 (Labaton 2005). It had a number of provisions, but most importantly it increased the up-front costs of filing for bankruptcy and made the process more onerous.

Of course, people should pay their bills. We all should live up to our responsibilities. But, as President Obama was fond of saying, America is a nation of second chances (Obama 2016). Bankruptcy is designed to give people a “fresh start,” a chance to right their financial ships. Furthermore, it often works as intended. While writing a book about the psychology of spending and debt, I met a number of people who went through bankruptcy and were able to get their lives back on course. But the money written off in bankruptcy cuts into the profits of credit card companies and other lenders and keeps interest rates high, so the banks lobbied to make the bankruptcy requirements more difficult. Little did they know what would happen next.

Figure 1. Non-business (personal) bankruptcies and percentage of homes in foreclosure (right-hand axis) for the years 1990–2016. The vertical line marks the implementation of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005. Sources: U.S. Courts and Mortgage Bankers Association.

Although it took a few years to figure it out, several economic studies now conclude that BAPCPA was a precipitating factor in the ensuing debacle of 2008 (Albanesi and Nosal 2016; Li et al. 2011; Morgan et al. 2012). According to these investigations, once the new bankruptcy requirements went into effect, many people who were struggling with debt found bankruptcy was no longer an option. Ironically, they could no longer afford to declare bankruptcy. But what remained an option was not paying the bills, and beginning in 2007, a little over a year after BAPCPA went into effect, an increasing number of homeowners chose to stop paying the biggest bill of all—the mortgage. The switch from bankruptcies to foreclosures can be seen in Figure 1.

The rest is history. Many of those mortgages had been bundled together and sold to investors, and when the value of those mortgage-backed securities started to tank, so did the stock market, the employment rate, and the economy in general. Obviously there are many other factors that went into the Great Recession. BAPCPA was not the only cause. But the crash of 2008 started when the real estate bubble finally burst in 2007, and the bursting of the bubble gained steam when thousands of homeowners stopped paying their mortgages. Had bankruptcy still been an option, some of those people might have been in a position to keep paying the mortgage.

The beauty of bankruptcy as a form of financial failure is that most of the effects are restricted to the individual filers and the banks. Foreclosure is a different matter entirely. When the house next door goes into foreclosure, your house loses value, and the real estate market and the economy in general are affected. As President George W. Bush explained when he spoke to the nation in September of 2008:

Borrowers with adjustable-rate mortgages, who had been planning to sell or refinance their homes at a higher price, were stuck with homes worth less than expected, along with mortgage payments they could not afford.

As a result, many mortgage-holders began to default. These widespread defaults had effects far beyond the housing market. (Bush 2008)

So, in what would be a colossal case of unintended consequences, bankers hoping to save money may have fueled the biggest banking crisis since the Great Depression. It has been estimated that U.S. banks lost $550 billion in the 2008 recession (Boswell 2013), and homeowners, consumers, and workers lost much more.

Unintended Consequences

The history of economic and government intervention has produced many examples of unwanted consequences. Most of us can think of a military campaign that was heralded at the beginning but was later described as a quagmire. One of my favorite cases from behavioral economics is a study of daycare centers in Israel that introduced a fine for parents who were late picking up their children (Gneezy and Rustichini 2000). Once the policy was introduced, late pickups nearly doubled. Apparently parents thought of the fee not as a fine but as the cost of purchasing lateness. As a result, they felt more justified in being late than when the only consequence was the ire of the daycare staff.

Good, Bad, and Perverse Outcomes

The Israeli daycare study produced what is sometimes called a perverse effect: making the problem it was designed to fix even worse. We also should acknowledge that sometimes unintended consequences are positive. For example, the drug that was eventually marketed as Viagra was originally developed to treat angina. Its more lucrative application was an unanticipated “side effect” discovered during clinical trials (Jay 2010). Many “off-label” uses of drugs begin with the discovery of unanticipated side benefits.

An unexpected benefit is a happy surprise, but for a couple of reasons we are more concerned about negative consequences. First, as the work of psychologists Daniel Kahneman and Amos Tversky demonstrates, losses hurt more than gains feel good (Kahneman 2011). Unexpectedly losing $100 gives us more displeasure than the joy of unexpectedly gaining $100. This factor tends to make us naturally risk averse and more concerned about unanticipated bad things than unanticipated good things.

In addition, we are usually talking about unexpected consequences of an action—rather than inaction—and acts of commission are more prone to regret (Gilovich and Medvec 1995). In a famous experiment, Kahneman and Tversky asked participants about the following scenario:

Mr. Paul owns shares in company A. During the past year he considered switching to stock in company B, but he decided against it. He now finds out that he would have been better off by $1,200 if he had switched to the stock of company B. Mr. George owned shares in company B. During the past year he switched to stock in company A. He now finds that he would have been better off by $1,200 if he had kept his stock in company B. Who feels greater regret? (Kahneman and Tversky 1982)

Although the amount of money lost was the same for both men, an overwhelming 92 percent of respondents said Mr. George, the investor who got burned when he made an act of commission, would feel more regret.1 Taking an action is less common and more salient than not taking an action, and—at least in the short run—a potential source of kicking ourselves after the fact. It is easy to imagine the counterfactual of doing nothing, and often, when our actions don’t work out, we wish we had done just that. Nothing.

Why Are We So Blind?

Over the years, many people have written about unintended consequences. American sociologist Robert Merton wrote a classic 1936 paper on the topic called “The Unanticipated Consequences of Purposive Social Action,” in which he identified five conditions that lead to unintended consequences. Two of these stand out as major culprits in the most common cases:

1. Lack of knowledge. Ignorance of science or of human behavior makes it impossible to predict outcomes. The Israeli daycare investigators clearly lacked sufficient knowledge of parental behavior to predict the outcome of their experiment. Thankfully, the perverse effect they produced did not cause a major social or political problem.

Donald Rumsfeld further articulated the problem of ignorance in his famous statement about unknown unknowns:

As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones. (Department of Defense News Briefing, February 12, 2002)

Sometimes we understand that certain pieces of information are missing, and we take a chance (or not) on the likely influence of these known unknown factors. But worse yet is the completely-out-of-the-blue unknown factor. I suspect the lobbyists for bankruptcy reform did not think far enough ahead to predict what overextended homeowners would do once bankruptcy was no longer an option. They also may not have known about the precarious nature of the mortgage-backed securities being promoted by the investment divisions of their banks.

2. “Imperious immediacy of interest,” in which “the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences of the same act” (Merton 1936, 901).

This category of mistake would include actions based on emotional impulse rather than deliberative decision-making. Consider, for example, the all-too-familiar case of a military invasion that fails to anticipate what will happen after the initial attack.

In addition to Merton’s factors, groupthink, as described by Irving Janis, is another phenomenon that has been frequently blamed for bad decisions, the classic example being John F. Kennedy’s undertaking of the failed Bay of Pigs invasion of Cuba. According to Janis, groupthink emerges from a highly cohesive group and involves the “non-deliberate suppression of critical thoughts as a result of internalization of the group’s norms” (Janis 1971, 44). Several aspects of the decision-making process used by Kennedy and his team seem to have produced a groupthink phenomenon, and after the invasion had been scrapped, Kennedy was reported to have said, “How could we have been so stupid?” (Janis 1971).

What Can Save Us from Unintended Consequences?

By now there has been over fifty years of rather vigorous research on decision-making—both about where it goes wrong and about how to improve it. Much of the effort to improve decisions is aimed at moving people from quick, intuitive decisions (what has come to be called System 1) to slower, more deliberative thinking (System 2). For example, a 2009 article in Perspectives on Psychological Science suggested that more effective decisions could be achieved by (a) trying to adopt an outsider’s perspective on the question or (b) deliberately considering the opposite of the decision you are about to make (Milkman et al. 2009).

But perhaps some of the best advice comes from a familiar source: the basic principles of critical thinking and Carl Sagan’s “Baloney Detection Kit.” Among Sagan’s many good ideas, these seem particularly important:

  • Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
  • Spin more than one hypothesis.
  • Try not to get overly attached to a hypothesis just because it’s yours.
  • If there’s a chain of argument, every link in the chain must work (including the premise)—not just most of them.

This last suggestion is particularly important when the policy or intervention under consideration sets off—or could reasonably be expected to set off—a chain of events. As Merton’s “imperious immediacy of interest” suggests, often we are so focused on the initial, more easily predicted effects of a plan that we are blinded to the possible longer-term consequences.

But perhaps the biggest problem of all is one that is not adequately addressed by decision-making research and the principles of critical thinking alone: decision-makers have to want to make better decisions. Often decision-makers are driven by competing incentives. There is a strong push to do something immediately to address a pressing problem, or there are multiple motives bundled up in the decision-maker’s thinking. For example, abstinence-only sex education programs are often promoted by groups with religious objections to contraception and abortion, and yet abstinence-only sex education has little effect on sexually transmitted disease infection rates and is associated with higher teen pregnancy and birth rates. Nonetheless, these programs continue because they are driven by philosophy rather than evidence.

Better policy outcomes will be achieved only when decision-makers are motivated to use the best modes of thinking and deciding. It may slow things down to consider all the alternatives and think through the likely chain of events, but if we take the time, perhaps we can avoid the next Bay of Pigs or the next Great Recession.

1. There is no mention of the happiness or regret felt by Mr. John or Mr. Ringo.