# Uncertainty and Risk

## Expected Utility and Risk Aversion

Recall from probability theory that if you have a random variable that takes on different possible values, the *expected value* of that variable is the weighted average of those values, where the weights are the probabilities of each value occurring.

For example, if $x = 16$ with probability $\frac{3}{4}$ and $x = 64$ with probability $\frac{1}{4}$, the expected value of $x$ is

$$\mathbb{E}[x] = \frac{3}{4} \times 16 + \frac{1}{4} \times 64 = 28$$
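This weighted-average calculation is easy to check in a few lines of Python (a minimal sketch, using the same numbers as above):

```python
# Expected value of a discrete random variable:
# a weighted average of the outcomes, weighted by their probabilities.
values = [16, 64]
probs = [3/4, 1/4]

expected_value = sum(p * x for p, x in zip(probs, values))
print(expected_value)  # 3/4 * 16 + 1/4 * 64 = 12 + 16 = 28.0
```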

More generally, if we think about a lottery in which an agent has $c_1$ dollars with probability $\pi$ and $c_2$ dollars with probability $1 - \pi$, their *expected consumption* is

$$\mathbb{E}[c] = \pi c_1 + (1-\pi) c_2$$

The same logic can apply to their utility: that is, if we assume the consumer is an expected utility maximizer, so that their utility from an uncertain situation is the expected value of a within-state utility function $u(c)$ over all states of the world, then their *expected utility* is

$$\mathbb{E}[u(c)] = \pi u(c_1) + (1 - \pi) u(c_2)$$
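We can compute both the expected consumption $\mathbb{E}[c]$ and the expected utility $\mathbb{E}[u(c)]$ for a concrete lottery. The numbers here ($\pi = \tfrac{1}{2}$, $c_1 = 16$, $c_2 = 64$) and the utility function $u(c) = \sqrt{c}$ are purely illustrative assumptions:

```python
import math

# Lottery: consume c1 with probability pi, c2 with probability 1 - pi.
pi, c1, c2 = 0.5, 16, 64

# Within-state utility: for illustration, assume u(c) = sqrt(c),
# a concave function.
def u(c):
    return math.sqrt(c)

expected_consumption = pi * c1 + (1 - pi) * c2    # E[c]
expected_utility = pi * u(c1) + (1 - pi) * u(c2)  # E[u(c)]

print(expected_consumption)  # 0.5 * 16 + 0.5 * 64 = 40.0
print(expected_utility)      # 0.5 * 4  + 0.5 * 8  = 6.0
```

Note that $u(\mathbb{E}[c]) = \sqrt{40} \approx 6.32$ exceeds $\mathbb{E}[u(c)] = 6$ here, a gap the next section explores.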

Visually, if we plot the points $(c_1, u(c_1))$ and $(c_2, u(c_2))$, the point $(\mathbb{E}[c], \mathbb{E}[u(c)])$ lies fraction $\pi$ of the way along a line connecting those two points:

Notice that when the curvature parameter $r$ of the utility function satisfies $r < 1$, the line connecting $(c_1, u(c_1))$ and $(c_2, u(c_2))$ lies *below* the utility curve. In other words, the utility of consuming one’s expected consumption, $u(\mathbb{E}[c])$, is *greater than* the expected utility $\mathbb{E}[u(c)]$. The opposite is true when $r > 1$; and when $r = 1$, the utility function is linear, so the consumer is indifferent between the lottery and the expected value of the lottery.

This leads to our formal definition of risk aversion: given a choice between facing a lottery (e.g., consume $c_1$ with probability $\pi$ and $c_2$ with probability $1-\pi$) and having the expected consumption from the lottery for sure (e.g., consume $\pi c_1 + (1-\pi) c_2$ with certainty):

- If a consumer gets more utility from the expected consumption, they are *risk averse*.
- If a consumer gets more utility from the lottery, they are *risk loving*.
- If a consumer is indifferent between the two, they are *risk neutral*.

Visually, you can see this in the following diagram. Notice that the height of the purple dot is the utility from consuming the expected value of the lottery for sure – that is, $u(\mathbb{E}[c])$. The height of the orange dot is the expected utility of the lottery, $\mathbb{E}[u(c)]$. When the purple dot is higher, the consumer is risk averse; when the orange dot is higher, the consumer is risk loving. Change $r$ to see how the curvature of the utility function affects the risk aversion of the consumer:
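The comparison between the two dots can be sketched numerically. Here we assume, for illustration, the utility family $u(c) = c^r$ (which is concave for $r < 1$, linear for $r = 1$, and convex for $r > 1$) and the same hypothetical lottery as before:

```python
# Compare u(E[c]) (the purple dot) with E[u(c)] (the orange dot)
# for the illustrative utility family u(c) = c^r, varying r.
pi, c1, c2 = 0.5, 16, 64
E_c = pi * c1 + (1 - pi) * c2  # expected consumption, 40.0

for r in [0.5, 1.0, 2.0]:
    u_of_Ec = E_c ** r                          # utility of sure expected consumption
    E_of_u = pi * c1 ** r + (1 - pi) * c2 ** r  # expected utility of the lottery
    if u_of_Ec > E_of_u:
        label = "risk averse"
    elif u_of_Ec < E_of_u:
        label = "risk loving"
    else:
        label = "risk neutral"
    print(f"r = {r}: u(E[c]) = {u_of_Ec:.2f}, E[u(c)] = {E_of_u:.2f} -> {label}")
```

Running this shows the consumer is risk averse at $r = 0.5$, risk neutral at $r = 1$, and risk loving at $r = 2$, matching the curvature argument above.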

Another way we can think about these kinds of preferences is to relate this graph to our usual “good 1 - good 2” space. The lottery $(c_1,c_2)$ is a point in this space. We can also plot the point $(\mathbb{E}[c],\mathbb{E}[c])$ – i.e., a point that would represent consuming the expected value of the lottery in both states of the world. If a consumer is risk averse, they prefer to consume $\mathbb{E}[c]$ for sure than to face the lottery, so the point $(\mathbb{E}[c],\mathbb{E}[c])$ lies on a higher indifference curve:

Now that we have a good sense of what we mean by preferences over risk, let’s look at some of the ways consumers might try to improve their lot by paying to reduce their risk: that is, to move closer to consuming $c_1 = c_2$.