Lecture 18: The Principal-Agent Model and Moral Hazard
In the last lecture, we looked at a principal-agent model in which an agent had a discrete choice: exert effort, or shirk. In this lecture we’ll look at a model in which the agent chooses how much effort to exert.
We’ll look at this in two contexts. In the first, we’ll establish a baseline model of an agent working on behalf of a principal: the story, as with last time, will be that the agent’s probability of success is correlated with the amount of effort she puts in. In the second, we’ll adapt the model to look at a problem of moral hazard, in which the agent’s choice of “effort” is actually a choice of how careful to be in a risky situation. Specifically, we will look at how to price car insurance on a rental car so that the renter is both protected from loss, and encouraged to drive carefully.
Continuous choice of effort
Last time, we analyzed a principal-agent model with a single choice of effort: that is, exert effort ($e = 1$) or shirk ($e = 0$). In real life, of course, how much effort one puts forward is a continuous variable. Let’s assume that the variable $e$ can take on any value between 0 (no effort) and 1 (full effort), and use a similar framework to the one from last lecture to analyze:
- How much effort is efficient (maximizes the combined payoffs to the principal and agent)?
- If the principal offers base salary plus a bonus for success, how much effort will the agent put forward as a function of the bonus?
- What is the profit-maximizing contract for the principal to offer? And is it efficient?
To answer these questions, let’s think of a commission structure in which a publisher hires a salesperson to go door-to-door selling encyclopedias. (Believe it or not, this used to be a thing.) The publisher would pay a commission on every sale they made. Let’s assume the commission is a fraction $\theta$ of the value of the sale; that the sales rep has no outside option; and that the publisher pays no salary in addition to the commission.
Let’s assume the following:
- Each sale generates €80 in revenue, so the agent’s commission on each sale would be $80\theta$ and the principal would keep $(1-\theta)80$
- The probability of success for each sale is Prob{success | $e$}$=\sqrt{e}$. Note that there are diminishing marginal returns to effort.
- The cost of effort is $c = 40$ per unit of effort exerted.
With all of this, we can write down the expected payoffs to the principal and the agent, as functions of the amount of effort exerted: \(\begin{aligned} u_P(e) &= 80(1 - \theta)\sqrt{e} \\ u_A(e) &= 80\theta\sqrt{e} - 40e \end{aligned}\)
Efficient level of effort
If we add the payoffs of the principal and the agent, we can see that total “social welfare” is just expected sales revenue minus the cost of effort: \(u_P(e) + u_A(e) = 80(1 - \theta)\sqrt{e} + 80\theta\sqrt{e} - 40e = 80\sqrt{e} - 40e\) This is maximized where its derivative is zero: \({40 \over \sqrt{e}} = 40 \Rightarrow e^\star = 1\) So the efficient outcome in this case is for the sales rep to exert full effort $(e = 1)$. Note that this isn’t a general result…I’ve stacked the deck so the numbers worked out this way, by making the value of each sale exactly twice the cost per unit of effort!
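If you’d rather check this numerically than take my word for the calculus, here’s a minimal Python sketch (the function and variable names are mine, not part of the model) that evaluates the total surplus $80\sqrt{e} - 40e$ on a grid of effort levels and confirms it peaks at $e = 1$:

```python
import math

def total_surplus(e):
    """Expected sales revenue minus the cost of effort: 80*sqrt(e) - 40*e."""
    return 80 * math.sqrt(e) - 40 * e

# Search over a fine grid of effort levels between 0 and 1.
grid = [i / 1000 for i in range(1001)]
best_e = max(grid, key=total_surplus)

print(best_e, total_surplus(best_e))  # 1.0 40.0
```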
Response to salary and commission structure
The amount of effort the agent puts forward, of course, will depend on their commission. They will choose $e$ to maximize \(u_A(e) = 80\theta \sqrt{e} - 40e\) which (taking the derivative, setting it equal to zero, you’ve got this part by now) gives us \(e^\star(\theta) = \theta^2\) Notice that if their commission were $\theta = 1$ (i.e. if they kept 100% of all sales) they would exert the efficient amount of effort. But…what commission is optimal for the publisher?
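Here’s a quick numerical check of that best-response formula (a sketch of my own, not part of the model): for a few commission rates, a grid search over effort should land on $e^\star = \theta^2$.

```python
import math

def agent_payoff(e, theta):
    """Agent's expected commission income minus effort cost: 80*theta*sqrt(e) - 40*e."""
    return 80 * theta * math.sqrt(e) - 40 * e

grid = [i / 10000 for i in range(10001)]
for theta in (0.25, 0.5, 1.0):
    best_e = max(grid, key=lambda e: agent_payoff(e, theta))
    # The grid optimum should (approximately) match theta squared.
    print(theta, best_e, theta ** 2)
```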
Solving for the optimal commission
The publishing company can anticipate the response of the salesperson, and factor this into their decision. Therefore, their payoff, as a function of $\theta$, will be \(u_P(\theta) = 80(1 - \theta) \sqrt{e^\star(\theta)} = 80(1-\theta) \theta=80(\theta - \theta^2)\) This is maximized at $\theta = {1 \over 2}$ – in other words, from the principal’s point of view, the optimal contract splits the proceeds of each sale evenly between publisher and agent.
Under this scheme, the sales rep will optimally choose $e = {1 \over 4}$, incurring a cost of $40e = 10$. They will succeed at making a sale $\sqrt{e} = {1 \over 2}$ of the time, and when they do, they’ll earn $80\theta = 40$; so their expected payoff is ${1 \over 2} \times 40 - 10 = 10$. The publisher will earn a profit of $80(1-\theta) = 40$ half the time, so their expected payoff is 20. All told, social welfare is 30.
How does this compare to the situation in which $e = 1$? In that case, a sale is made every time (earning 80) and the cost of effort is $40e = 40$, so the total welfare is $80-40=40$. However, if $\theta = 1$, all of that surplus would go to the sales rep, and none to the publisher – so, since the publisher is the one setting the commission, they accept less effort in exchange for more of the winnings.
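To put the whole contracting problem in one place, here’s a short Python sketch (again, the names are my own) that lets the publisher pick $\theta$ while anticipating the agent’s best response $e^\star(\theta) = \theta^2$, and then compares the resulting payoffs to the efficient benchmark:

```python
import math

def agent_effort(theta):
    """Agent's best response to a commission rate theta (derived above)."""
    return theta ** 2

def expected_payoffs(theta):
    """Return (agent, publisher) expected payoffs when the commission rate is theta."""
    e = agent_effort(theta)
    prob_sale = math.sqrt(e)
    agent = prob_sale * 80 * theta - 40 * e
    publisher = prob_sale * 80 * (1 - theta)
    return agent, publisher

# The publisher picks theta to maximize its own expected payoff.
thetas = [i / 1000 for i in range(1001)]
best_theta = max(thetas, key=lambda t: expected_payoffs(t)[1])
agent, publisher = expected_payoffs(best_theta)
print(best_theta, agent, publisher, agent + publisher)  # 0.5 10.0 20.0 30.0

# Efficient benchmark: full effort, every sale succeeds, total surplus 80 - 40 = 40.
print(80 - 40)
```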
Is there any way to solve this problem, and maximize the total surplus? Interestingly, yes, and it gave rise to a scheme that actually bankrupted a lot of people who overestimated their sales skills. The publisher could simply sell the books to the sales rep, thereby eliminating the principal-agent problem altogether. It could even offer the rep incentives to sell books not only to customers, but also to other would-be sales reps, who would then go out and sell in turn, earning the original rep a small additional commission. This kind of system is called a “multi-level marketing” business model…and if you learn nothing else from me, learn to steer clear of this kind of scam.
Moral Hazard and Insurance Deductibles
Let’s now think of a related problem – also one in which the agent has some control over probabilities, but now thinking about the risk of a loss rather than the probability of a gain.
When we introduced the notion of risk aversion, we talked a bit about certainty equivalence and the risk premium. Specifically, we said that if you had utility function over money given by $v(c) = \sqrt{c}$, and faced a lottery which pays $c_1 = €16$ and $c_2 = €64$ with equal probability, your expected utility of that lottery was \(\mathbb{E}[v(c)]={1 \over 2}\sqrt{16} + {1 \over 2}\sqrt{64} = 6 \text{ utils}\) This means your certainty equivalent for that lottery would have been €36, since having €36 for sure would have given you the same utility as the lottery – that is, $\sqrt{36} = 6$. Lastly, since the expected value of the lottery was \(\mathbb{E}[c] = {1 \over 2} \times €16 + {1 \over 2} \times €64 = €40\) we said your risk premium would have been $€40 - €36 = €4$.
Now let’s put a bit more of a story around this: suppose you have €64 now, and face a 50% chance of losing €48, which would leave you with only €16. This is exactly the same situation as before, only now we’re framing it as a loss.
Your expected loss is ${1 \over 2} \times €48 = €24$. Now, if I were a risk-neutral insurance company, I would be willing to offer you a deal: if you pay me an insurance premium $p$, I’ll fully insure you against your loss – that is, if you lose the €48, I will write you a check for €48. Now, what’s the maximum amount $p$ you’d be willing to pay me for that insurance policy?
We’ve already calculated this, in fact: you would be just as happy having €36 for sure as facing your current risk! So you’d be willing to pay up to $p = €64 - €36 = €28$. Since I’m risk neutral, I just look at the expected loss of €24; so if I charge you €28, I make a profit of €4. In other words, one way of thinking about your risk premium is that it’s the amount you’re willing to pay to insure yourself against a loss, above and beyond your expected loss.
Now let’s think about the moral hazard problem: we accepted that you had a 50% chance of loss, but suppose the probability of your loss is dependent on your behavior. Once you’re fully insured against a loss, though, you have no incentive to minimize your risk, because you’re covered either way. Suppose, then, that the act of being insured led you to behave in such a way that you lost the €48 with a probability of ${2 \over 3}$. In this case, your expected loss would be ${2 \over 3} \times €48 = €32$, so even if I charged you your full risk premium, I would lose money.
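Here’s a small Python sketch (my own naming, not from the text) that reproduces the arithmetic behind this insurance story: the certainty equivalent, the maximum premium you’d pay for full insurance, the insurer’s profit if you behave as before, and why that profit evaporates once full coverage makes you careless:

```python
import math

wealth, loss = 64, 48

def expected_utility(prob_loss):
    """Expected utility of facing the loss uninsured, with v(c) = sqrt(c)."""
    return prob_loss * math.sqrt(wealth - loss) + (1 - prob_loss) * math.sqrt(wealth)

ce = expected_utility(0.5) ** 2      # certainty equivalent: 36
max_premium = wealth - ce            # most you'd pay for full insurance: 28
expected_payout = 0.5 * loss         # insurer's expected payout if you stay careful: 24
print(ce, max_premium, max_premium - expected_payout)   # 36.0 28.0 4.0

# Moral hazard: with full coverage, the loss probability rises to 2/3,
# so the expected payout (about 32) exceeds the maximum premium of 28.
print((2 / 3) * loss)
```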
Solution: insurance deductibles and copayments
To solve this problem, insurance companies use techniques called deductibles and copayments: the insured party has to pay the first $d$ euros of any loss, and then the insurance company pays only some fraction of the rest of the claim. For example, my own health care plan has a deductible of €3500 per year, and a copay of 20% after that. If I incur €2000 of medical expenses, I have to pay for all of them; if I incur €4000, the insurance will only pay out €400 (80% of the amount above €3500), leaving me with a copay of €100 in addition to the €3500 deductible. In short, the insurance company isn’t offering to fully insure me – I reduce my overall risk, but not to zero. And this means I don’t have an incentive to go to the doctor if I just have a sniffle.
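As a toy illustration of how a deductible-plus-copay plan splits a claim (a simplified sketch of my own; real plans add out-of-pocket maximums, copay tiers, and so on), here’s the arithmetic from the example above in Python:

```python
def out_of_pocket(expenses, deductible=3500, coinsurance=0.20):
    """What the insured pays under a simple deductible-plus-coinsurance plan."""
    if expenses <= deductible:
        return expenses
    return deductible + coinsurance * (expenses - deductible)

print(out_of_pocket(2000))   # 2000: below the deductible, so I pay everything
print(out_of_pocket(4000))   # 3600.0: the 3500 deductible plus a 100 copayment
```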
How can this work out mathematically? Let’s go back to our previous example of a potential loss of €48, only this time suppose the probability of loss, $\pi$, is a function of how much effort you put into being safe, $e$, where $0 \le e \le 1$. For simplicity, let’s assume that $\pi = 1 - e$, so that if you put no effort into being safe, you lose the €48 for sure; whereas if you put full effort into being safe, you have zero chance of loss. Finally, let’s assume that effort costs you $c(e) = 4e^2$.
First, let’s figure out how you would behave if you didn’t insure. Your utility would be \(u(e) = \pi(e) \sqrt{16} + (1- \pi(e)) \sqrt{64} - 4e^2 = 4(1-e) + 8e - 4e^2 = 4 + 4e - 4e^2\) Taking the derivative with respect to $e$ and setting it equal to zero gives us $e^\star = {1 \over 2}$. So, if you couldn’t insure, you would put forward effort $e = {1 \over 2}$ at staying safe, and have a probability of loss of $\pi = 1 - e = {1 \over 2}$. Your utility here is 5 (6 from the lottery, minus 1 from the effort you put forward being careful).
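A quick numerical check of that no-insurance benchmark (again with names of my own choosing): a grid search over effort should land on $e^\star = {1 \over 2}$ and a utility of 5.

```python
import math

def utility_uninsured(e):
    """Expected utility with no insurance: (1-e)*sqrt(16) + e*sqrt(64) - 4*e^2."""
    return (1 - e) * math.sqrt(16) + e * math.sqrt(64) - 4 * e ** 2

grid = [i / 1000 for i in range(1001)]
best_e = max(grid, key=utility_uninsured)
print(best_e, utility_uninsured(best_e))   # 0.5 5.0
```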
Now let’s suppose I wanted to charge you some premium $p$ for full insurance: that is, regardless of how you behave, you would end up with $64 - p$. This means your utility would be \(u(e) = \sqrt{64 - p} - 4e^2\) Trivially, you would exert zero effort, so you would have a loss 100% of the time, and the lowest premium I could charge without losing money would be 48. That would leave you with 16 for sure, and a utility of 4; so you would be worse off than if you hadn’t gotten any insurance at all. So that won’t work.
So, let’s figure out how this might work with a deductible. Finding the optimal contract in this sort of problem is cumbersome, so let’s just examine a concrete example: suppose I offer to charge you a premium of $p = 8$, with a deductible of $d = 36$. So, if you don’t experience a loss, you end up with $c_2 = 64 - 8 = 56$; and if you do have a loss, you bear only the €36 deductible, leaving you with $64 - 8 - 36 = 20$. Put another way, you’re giving up €8 in the good state of the world in exchange for an extra €4 in the bad state of the world. Pretty crappy insurance, but are you better off?
With this insurance, you would exert effort \(e^\star = {1 \over 8}\left(\sqrt{56} - \sqrt{20}\right) \approx 0.3764\) So your utility would be \(u_A = 0.3764\sqrt{56} + (1 - 0.3764)\sqrt{20} - 4 \times 0.3764^2 \approx 5.0388\) which is slightly better than the 5 you were getting without insurance!
What about my expected profits? I collect 8 regardless of whether or not you experience the loss; and with probability $1 - 0.3764 \approx 0.6236$ (your probability of loss), I need to pay you the 12 above your deductible. Therefore my expected profit is \(u_P = 8 - 0.6236 \times 12 \approx 0.52\) So, we are both made better off!
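To double-check those numbers, here’s one last Python sketch (again with my own, purely illustrative names) that searches over the agent’s effort under the deductible contract and reports the agent’s utility and the insurer’s expected profit:

```python
import math

premium, deductible, loss, wealth = 8, 36, 48, 64
c_good = wealth - premium               # 56: no loss occurs
c_bad = wealth - premium - deductible   # 20: a loss occurs, you pay the deductible

def agent_utility(e):
    """Expected utility under the contract; effort e gives a no-loss probability of e."""
    return e * math.sqrt(c_good) + (1 - e) * math.sqrt(c_bad) - 4 * e ** 2

grid = [i / 10000 for i in range(10001)]
e_star = max(grid, key=agent_utility)
prob_loss = 1 - e_star
claim = loss - deductible               # the 12 the insurer pays out after a loss

print(e_star)                           # ~0.3764
print(agent_utility(e_star))            # ~5.039, versus 5 with no insurance
print(premium - prob_loss * claim)      # insurer's expected profit, ~0.52
```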
Other applications
We’ll talk about them a lot in class, but you don’t have to spend your holiday break reading about them. See you on Tuesday!
Reading Quiz
That's it for today! Click here to take the quiz on this reading.