Lecture 16: Uncertainty and Risk
The game show “Deal or No Deal” had a simple premise: there are 26 briefcases, each with a different amount of money ranging from a penny to a million dollars. The contestant chooses one briefcase at random. As the game goes on, the contents of the other briefcases are revealed, providing updated information as to the likely contents of the case held by the contestant. Each round, with new information revealed, a mysterious “banker” calls and offers an amount of money, which the contestant can accept (“deal”) or not (“no deal”). If they accept the banker’s offer, they get that amount of money for sure. If they don’t, they continue to play the game.
On September 1, 2008, Jessica Robinson was the contestant. As cases were revealed, the million-dollar value was still on the board – until the very end, when only two cases remained. One had \$200,000, and the other had \$1,000,000. The banker offered her \$561,000.
Fundamentally, her choice was to consume (\$200K with probability $\frac{1}{2}$, \$1000K with probability $\frac{1}{2}$) or to consume \$561K with certainty. How can we analyze her choice?
Lotteries
Generally speaking, we talk about preferences over certain quantities of goods, or amounts of money. But the world isn’t a certain place: chance determines a lot of outcomes.
Some chance we bring upon ourselves: we play the lottery, or we bet on the outcome of sports games, or we invest in a stock that could go up or down. Some chance occurrences are called “acts of God” - whether you get into a car accident, or your house burns down due to a freak accident.
A great deal of economic activity is centered around transferring risk from one person to another. Insurance contracts pay you money if something bad happens to you; hedge funds invest in securities that are negatively correlated with one another. Understanding how preferences over risk drive these markets is one of the critical tasks of modern economic theory.
To analyze situations like this, let’s define a “lottery” as a set of possible outcomes, each occurring with a certain probability. For example, suppose we bet \$150 on a coin toss; heads I win, tails you win. There are two possible outcomes: the coin could come up heads, and you would lose \$150, or it could come up tails, and you would win \$150. Each occurs with probability $\frac{1}{2}$.
Suppose, as you consider whether to take this bet, you have \$250 in your pocket. Therefore, from your perspective, this lottery would give you an outcome of $c_1 = \$100$ if the coin comes up heads, and $c_2 = \$400$ if it comes up tails. Of course, you could reject the bet, and have $c_1 = c_2 = \$250$ regardless of whether the coin comes up heads or tails.
We can picture this lottery in “good 1 - good 2 space,” where “good 1” (or “state 1”) is consumption in the state of the world in which the coin comes up heads (written $c_1$), and “good 2” (or “state 2”) is consumption in the state of the world in which the coin comes up tails (written $c_2$).
Should you take the bet? It depends on your preferences. We can draw indifference curves through the point $(100,400)$. If the “don’t bet” point is preferred, then you shouldn’t take the bet; on the other hand, if the “bet” point is preferred, you should take the bet:
Let’s think about this another way. As we did last lecture, let’s assume that the way you feel about money doesn’t depend on whether you win the bet or not: that is, you have some value function $v(c)$ which says how much utility you get from having $c$ dollars, and that this function is independent of the state of the world.
If you do not take the bet, therefore, your utility is just $v(250)$. If you win the bet, your utility would be $v(400)$; if you lose, it would be $v(100)$.
Given this framework, your utility gain from winning the bet is $\textcolor{#31a354}{v(400) - v(250)}$, and your utility loss from losing the bet would be $\textcolor{#d62728}{v(250) - v(100)}$. Since each of the two outcomes is equally likely, you should therefore take the bet if \(\textcolor{#31a354}{v(400) - v(250)} > \textcolor{#d62728}{v(250) - v(100)}\) Let’s see when this is the case. The following diagram shows a particular kind of utility function where $v(c) = c^r$. The horizontal axis shows consumption, in dollars; the vertical axis shows utility, in “utils.” Initially it shows the case where $r = 0.5$, but you can use the slider to change $r$ to be anything from 0.25 to 2.
As you can see, for this particular case, it’s better to take the bet if $r > 1$, and better not to take the bet if $r < 1$. In order to think about this more generally, though, we need to introduce the notion of expected utility.
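The comparison above is easy to check numerically. Here is a minimal Python sketch, using the stakes from the example (\$100, \$250, \$400) and the utility family $v(c) = c^r$:

```python
def v(c, r):
    """Within-state utility from consuming c dollars, with v(c) = c**r."""
    return c ** r

def take_bet(r):
    """True if the utility gain from winning exceeds the utility loss
    from losing (the two outcomes are equally likely)."""
    gain = v(400, r) - v(250, r)   # utility gain if the coin comes up tails
    loss = v(250, r) - v(100, r)   # utility loss if the coin comes up heads
    return gain > loss

print(take_bet(0.5))  # False: r < 1, concave utility, decline the bet
print(take_bet(2.0))  # True: r > 1, convex utility, take the bet
```

Consistent with the diagram, the bet is declined when $r < 1$ and accepted when $r > 1$.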
Expected utility
Recall from probability theory that if you have a random variable that takes on different possible values, the expected value of that variable is the weighted average of those values, where the weights are the probability of each value occurring.
For example, if $x = 16$ with probability $\frac{3}{4}$ and $x = 64$ with probability $\frac{1}{4}$, the expected value of $x$ is \(\mathbb{E}[x] = \frac{3}{4} \times 16 + \frac{1}{4} \times 64 = 28\) More generally, if we think about a lottery in which an agent has $c_1$ dollars with probability $\pi$ and $c_2$ dollars with probability $1 - \pi$, their expected consumption is \(\mathbb{E}[c] = \pi c_1 + (1-\pi) c_2\) The same logic may apply to their utility: that is, if we assume that they are perfectly rational people whose utility is the expected value of the within-state utility function $v(c)$ over all states of the world, then their utility from this lottery is their expected utility: \(u(c_1,c_2) = \mathbb{E}[v(c)] = \pi v(c_1) + (1 - \pi) v(c_2)\) Visually, if we plot the points $(c_1, v(c_1))$ and $(c_2, v(c_2))$, the point $(\mathbb{E}[c], \mathbb{E}[v(c)])$ lies fraction $\pi$ of the way along a line connecting those two points:
Notice that when $r < 1$, the line connecting $(c_1, v(c_1))$ and $(c_2, v(c_2))$ lies below the utility curve. In other words, the utility of consuming one’s expected consumption, $v(\mathbb{E}[c])$, is greater than the expected utility $\mathbb{E}[v(c)]$. The opposite is true when $r > 1$; and when $r = 1$, the consumer is indifferent between the lottery and the expected result of the lottery.
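These computations can be sketched in a few lines of Python, using the numbers from the examples above:

```python
def expected_value(outcomes, probs):
    """Weighted average of outcomes, weights = probabilities."""
    return sum(p * c for p, c in zip(probs, outcomes))

def expected_utility(v, outcomes, probs):
    """Expected value of the within-state utility v(c)."""
    return sum(p * v(c) for p, c in zip(probs, outcomes))

# The example from the text: x = 16 w.p. 3/4, x = 64 w.p. 1/4
print(expected_value([16, 64], [0.75, 0.25]))   # 28.0

# With concave utility (r = 1/2), E[v(c)] < v(E[c]):
v = lambda c: c ** 0.5
print(expected_utility(v, [16, 64], [0.75, 0.25]))  # 0.75*4 + 0.25*8 = 5.0
print(v(28))                                        # about 5.29, which is > 5.0
```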
This leads to our formal definition of risk aversion: given a choice between facing a lottery (e.g., consume $c_1$ with probability $\pi$ and $c_2$ with probability $1-\pi$) and having the expected consumption from the lottery for sure (e.g., consume $\pi c_1 + (1-\pi) c_2$ with certainty):
- If a consumer gets more utility from the expected consumption, they are risk averse.
- If a consumer gets more utility from the lottery, they are risk loving.
- If a consumer is indifferent between the two, they are risk neutral.
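The three definitions above can be packaged into a small Python helper (a sketch; the utility functions passed in are illustrative):

```python
def risk_attitude(v, c1, c2, pi=0.5):
    """Classify preferences by comparing v(E[c]) with E[v(c)]."""
    ec = pi * c1 + (1 - pi) * c2          # expected consumption
    eu = pi * v(c1) + (1 - pi) * v(c2)    # expected utility of the lottery
    if v(ec) > eu:
        return "risk averse"
    if v(ec) < eu:
        return "risk loving"
    return "risk neutral"

print(risk_attitude(lambda c: c ** 0.5, 100, 400))  # risk averse
print(risk_attitude(lambda c: c,        100, 400))  # risk neutral
print(risk_attitude(lambda c: c ** 2,   100, 400))  # risk loving
```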
Visually, you can see this in the following diagram. Notice that the height of the purple dot is the utility from consuming the expected value of the lottery for sure – that is, $v(\mathbb{E}[c])$. The height of the orange dot is the expected utility of the lottery, $\mathbb{E}[v(c)]$. When the purple dot is higher, the consumer is risk averse; when the orange dot is higher, the consumer is risk loving. Change $r$ to see how the curvature of the utility function affects the risk aversion of the consumer:
Another way we can think about these kinds of preferences is to relate this graph to our usual “good 1 - good 2” space. The lottery $(c_1,c_2)$ is a point in this space. We can also plot the point $(\mathbb{E}[c],\mathbb{E}[c])$ – i.e., a point that would represent consuming the expected value of the lottery in both states of the world. If a consumer is risk averse, they prefer to consume $\mathbb{E}[c]$ for sure than to face the lottery, so the point $(\mathbb{E}[c],\mathbb{E}[c])$ lies on a higher indifference curve:
Now that we have a good sense of what we mean by preferences over risk, let’s look at some of the ways consumers might try to improve their lot by paying to reduce their risk: that is, to move closer to consuming $c_1 = c_2$.
Certainty equivalence
Let’s go back to Jessica Robinson’s choice: consume a lottery of (\$200K with probability $\frac{1}{2}$, \$1000K with probability $\frac{1}{2}$) or consume \$561K with certainty. The expected value of the lottery was \$600K (halfway between \$200K and \$1000K) – so why did the banker offer her \$561K?
It makes sense that a risk-averse individual would accept some value less than the expected value of the lottery. Because they’re risk averse, we know that they would strictly prefer the expected value of the lottery for sure to facing the risk of the lottery; so it follows that there is some value less than the expected value which they would accept. We call this the certainty equivalent.
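As an illustration (we don’t know Jessica’s actual preferences), suppose her within-state utility were $v(c) = \sqrt{c}$. Her certainty equivalent could then be computed directly:

```python
import math

# Hypothetical utility v(c) = sqrt(c); purely illustrative.
# Jessica's lottery: $200K or $1,000K, each with probability 1/2.
eu = 0.5 * math.sqrt(200_000) + 0.5 * math.sqrt(1_000_000)  # expected utility
ce = eu ** 2  # certainty equivalent: the sure amount c with sqrt(c) = eu
print(round(ce))  # 523607, i.e. about $523.6K
```

A square-root-utility contestant would thus have a certainty equivalent below the \$561K offer, and would have taken the deal; Jessica’s refusal suggests she was less risk averse than that.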
Up to now we’ve been thinking of the expected utility from a lottery in which consumption differs across states of the world: an uncertain outcome. However, we can also consider certain outcomes: bundles in which consumption is the same in all states of the world, so that $c_1 = c_2$. Visually, this is the 45-degree line in $c_1$-$c_2$ space, which we might call the “Line of Certainty.”
The point where the indifference curve passing through a lottery intersects the Line of Certainty is of particular interest. This occurs at the (common) value of consumption known as the certainty equivalent ($CE$): that is, the amount of money which, if you had it for sure, would give you the same amount of utility as the lottery.
For example, consider a lottery which pays $c_1 = 16$ and $c_2 = 64$ with equal probability $(\pi = \frac{1}{2})$, and suppose the utility function is $v(c) = \sqrt{c}$. Then the expected utility of that lottery is \(\textcolor{#e6550d}{\mathbb{E}[v(c)] = \frac{1}{2}\sqrt{16} + \frac{1}{2}\sqrt{64} = 6}\) The certainty equivalent of that lottery would be the amount $CE$ that would give the same utility: $\textcolor{#e6550d}{\sqrt{CE} = 6}$, or $\textcolor{#e6550d}{CE = 36}$. In other words: having $CE = 36$ for sure would yield the same utility as having an equal chance of $c_1 = 16$ and $c_2 = 64$.
Visually, we can see this in our two graphs. In the left graph, we can see that the brown dot is at the coordinates $(\mathbb E[c], \mathbb E[v(c)]) = (40,6)$, as before. The orange dot directly to the left of the brown dot represents the certainty equivalent: that is, its coordinates are $(CE, v(CE)) = (36,6)$. In the right graph, we can see that the point (36,36) lies at the point where the indifference curve passing through (16,64) intersects the 45-degree line $c_1 = c_2$:
Note that if you move the $r$ slider to $r = 1$, so the agent is risk neutral, then $CE = \mathbb E[c] = 40$: in other words, the agent is indifferent between the lottery (16 with probability $\frac{1}{2}$, 64 with probability $\frac{1}{2}$) and having 40 with certainty.
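Here is the certainty-equivalent calculation from the example above, as a short Python sketch:

```python
import math

# Lottery: 16 with probability 1/2, 64 with probability 1/2; v(c) = sqrt(c)
eu = 0.5 * math.sqrt(16) + 0.5 * math.sqrt(64)  # expected utility: 6.0
ce = eu ** 2                                    # solve sqrt(CE) = eu: CE = 36.0
ec = 0.5 * 16 + 0.5 * 64                        # expected consumption: 40.0
print(eu, ce, ec)  # 6.0 36.0 40.0
```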
Risk Premium
The fact that, for a risk-averse agent, the certainty equivalent is less than the expected value of the lottery ($CE < \mathbb E[c]$) has an economic implication: the difference between the $CE$ and the expected value of the lottery is called the risk premium ($RP$), so $RP = \mathbb E[c] - CE$.
You can think about the risk premium as the amount the agent would be willing to pay to avoid risk – that is, to buy the expected value of the lottery. In the example above, the CE is 36 and the expected value of the lottery is 40. So you can think of the agent as being willing to pay up to 4 to have 40 for sure. (In other words, if they paid 4 to have 40 with no risk, they would end up with 36 for sure, which would give them utility of 6.)
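Putting the pieces together, the risk premium can be computed for any lottery, given a utility function and its inverse (a sketch; `v_inv` must invert `v` on the relevant range):

```python
def risk_premium(v, v_inv, outcomes, probs):
    """Risk premium RP = E[c] - CE, where CE solves v(CE) = E[v(c)]."""
    ec = sum(p * c for p, c in zip(probs, outcomes))     # expected consumption
    eu = sum(p * v(c) for p, c in zip(probs, outcomes))  # expected utility
    ce = v_inv(eu)                                       # certainty equivalent
    return ec - ce

# The example above: v(c) = sqrt(c), lottery (16, 64) with equal odds
rp = risk_premium(lambda c: c ** 0.5, lambda u: u ** 2, [16, 64], [0.5, 0.5])
print(rp)  # 4.0
```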
So the “banker” on Deal or No Deal thought Jessica’s certainty equivalent for the lottery she faced was less than \$561K; i.e., that her risk premium was more than \$600K − \$561K = \$39K. He was wrong: she refused, and took her chances.
Luckily for her, she won the million dollars.
Reading Quiz
That's it for today! Click here to take the quiz on this reading.