## Abstract

We consider a general formulation of the principal–agent problem with a lump-sum payment on a finite horizon, providing a systematic method for solving such problems. Our approach is the following. We first find the contract that is optimal among those for which the agent’s value process allows a dynamic programming representation, in which case the agent’s optimal effort is straightforward to find. We then show that the optimization over this restricted family of contracts represents no loss of generality. As a consequence, we have reduced a non-zero-sum stochastic differential game to a stochastic control problem which may be addressed by standard tools of control theory. Our proofs rely on the backward stochastic differential equations approach to non-Markovian stochastic control, and more specifically on the recent extensions to the second order case.
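The reduction described above can be sketched schematically. The notation below (output $X$, effort $\alpha$, lump-sum contract $\xi$, cost $c$, utilities $U_A$, $U_P$, reservation value $R$) is generic principal–agent notation chosen for illustration; it is not fixed by the abstract itself:

```latex
% Schematic principal--agent problem with a lump-sum payment at horizon T.
% (Generic notation for illustration; not the paper's exact formulation.)

% Agent: given a contract \xi paid at T, choose effort \alpha,
% which tilts the output distribution to \mathbb{P}^{\alpha}:
V^A(\xi) \;=\; \sup_{\alpha}\;
  \mathbb{E}^{\mathbb{P}^{\alpha}}
  \Big[\, U_A\Big(\xi \;-\; \int_0^T c(t,\alpha_t)\,\mathrm{d}t\Big) \Big].

% Principal: choose \xi over the admissible contracts, subject to the
% agent's participation constraint V^A(\xi) \ge R, anticipating the
% agent's optimal response \alpha^\star(\xi):
\sup_{\xi \,:\, V^A(\xi)\ge R}\;
  \mathbb{E}^{\mathbb{P}^{\alpha^\star(\xi)}}
  \big[\, U_P\big(X_T - \xi\big) \big].
```

The paper's contribution, as stated, is that restricting the outer supremum to contracts for which $V^A$ admits a dynamic programming representation loses no generality, turning the nested problem into a standard stochastic control problem.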

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-37 |
| Number of pages | 37 |
| Journal | Finance and Stochastics |
| Volume | 22 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 1 2018 |

## Keywords

- Contract theory
- Hamilton–Jacobi–Bellman equations
- Principal–agent problem
- Second order backward SDEs
- Stochastic control of non-Markov systems

## ASJC Scopus subject areas

- Statistics and Probability
- Finance
- Statistics, Probability and Uncertainty