

WORKING PAPER

ALFRED P. SLOAN SCHOOL OF MANAGEMENT

OPTIMAL CONSUMPTION AND PORTFOLIO CHOICE

WITH BORROWING CONSTRAINTS

Jean-Luc Vila*

and

Thaleia Zariphopoulou**

WP#3650-94

January 1994

MASSACHUSETTS

INSTITUTE OF TECHNOLOGY

50 MEMORIAL DRIVE

CAMBRIDGE, MASSACHUSETTS 02139


* Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA

** Department of Mathematics and Business School, University of Wisconsin, Madison

OPTIMAL CONSUMPTION AND PORTFOLIO CHOICE

WITH BORROWING CONSTRAINTS

Jean-Luc Vila

Sloan School of Management, Massachusetts Institute of Technology

and

Thaleia Zariphopoulou

Department of Mathematics and Business School, University of Wisconsin, Madison

First Draft: August 1989

Current Version: January 1994

ABSTRACT

In this paper, we use stochastic dynamic programming to study the intertemporal consumption and

portfolio choice of an infinitely lived agent who faces a constant opportunity set and a borrowing

constraint. We show that, under general assumptions on the agent's utility function, optimal policies

exist and can be expressed as feedback functions of current wealth. We describe these policies in detail when the agent's utility function exhibits constant relative risk aversion.

* We wish to thank Minh Chau, Neil Pearson, the associate editor and an anonymous referee for

their comments. Jean-Luc Vila wishes to acknowledge financial support from the International

Financial Services Research Center at the Sloan School of Management. Thaleia Zariphopoulou

acknowledges financial support from the National Science Foundation Grant DMS 9204866. Errors

are ours.

I. INTRODUCTION

In this paper, we consider the Portfolio-Consumption problem of an infinitely lived agent in the

presence of a constant opportunity set and borrowing constraints. Using the method of Dynamic

Programming, we show that, under general assumptions on the agent's utility function, optimal

policies do exist and can be expressed as feedback functions of the investor's current wealth. Given

this existence result, we are able to describe how borrowing constraints affect the consumption and

investment decisions when the agent's relative risk aversion is constant.

Stochastic dynamic control was first used by Merton (1969, 1971) to obtain an explicit solution

to the Portfolio-Consumption problem when the investment opportunity set is constant, the agent's

utility function belongs to the HARA class and when trading is unrestricted. More recently, Karatzas

et al. (1986) generalized Merton's results and obtained closed form solutions for general utility

functions.

Instead of using stochastic control methods, the so-called martingale approach has been alternatively

used by Pliska (1986), Cox and Huang (1989 & 1991), Karatzas et al. (1987) to study intertemporal

consumption and portfolio policies when markets are complete, which was also the case in the earlier

dynamic programming literature. The martingale technology consists in describing the feasible consumption set by a single intertemporal budget equation and then solving the static consumption problem in an infinite-dimensional Arrow-Debreu economy. The martingale approach is appealing to

economists for two reasons. First, it can be used to solve for the asset demand under very general

assumptions about the stochastic investment opportunity set. Second, and consequently, it can be

applied in a general equilibrium setting to solve for the equilibrium investment opportunity set (see

Duffie and Huang (1985), Huang (1987)).


Unfortunately, in the presence of market imperfections such as market incompleteness, short sale constraints or transaction costs, the martingale approach loses much of its tractability.

Consequently, many authors used dynamic programming to analyze the impact of these imperfections

on asset demand (see for example Constantinides (1986), Duffie and Zariphopoulou (1993),

Grossman and Laroque (1988), Grossman and Vila (1992), Fleming et al. (1989), Fleming and

Zariphopoulou (1991), Zariphopoulou (1993)).

The present paper extends this line of research to the dynamic problem of an infinitely lived agent who faces two constraints. The first constraint is a limitation on his ability to borrow for the purpose of investing in a risky asset, i.e., the market value of his investments in the risky asset, X_t, must be less than an exogenous function X̄(W_t) of his wealth W_t. In this paper we concentrate on the case X̄(W_t) = k(W_t + L), where k and L are non-negative constants. The second constraint is the requirement that the investor's wealth stays non-negative at all times, i.e., W_t ≥ 0. Using the dynamic programming

methodology, we associate with the stochastic control problem a nonlinear partial differential equation, namely the Bellman equation. We show that if the utility function satisfies some general regularity conditions, the Bellman equation has a unique solution which is twice continuously differentiable. Using a verification theorem, this solution turns out to be the indirect utility function (the so-called value function), which is therefore a smooth function of the current wealth, W_t. Moreover, the optimal consumption, C_t, and the optimal investment, X_t, are obtained in feedback form through the first order conditions from the Bellman equation. Because of the smoothness of the value function, the optimal policies are respectively continuously differentiable (C_t) and continuous (X_t) functions of wealth.

[Footnote: He and Pearson (1991) use a duality approach to apply the martingale technology to incomplete markets with short sale constraints. Although their methodology carries a lot of insight, they do not solve explicitly for the optimal policies.]

In the second part of the paper we study the particular case of a Constant Relative Risk Aversion agent. In particular, we describe in detail the optimal policies and we compare them to the ones of

the unconstrained problem. In the absence of the borrowing constraint, the optimal investment is X_t = (μ/(Aσ²))W_t, where μ is the excess rate of return on the risky asset over the risk-free rate, σ is the volatility of the rate of return on the risky asset and A is the coefficient of relative risk aversion. We show that the borrowing constraint causes the agent to be more conservative (i.e. to invest less in the risky asset) even at points where the constraint is not binding. This result is to be contrasted with a similar result in Grossman and Vila (1992), who show that, if the agent consumes only his final wealth, the borrowing constraint will make him more (less) conservative if the relative risk aversion, A, is smaller (greater) than 1.
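The contrast just described can be sketched numerically. The Python fragment below is our own illustration (all parameter values are hypothetical, not from the paper): it computes the unconstrained Merton allocation X = (μ/(Aσ²))W and the naive truncation of that rule at the borrowing cap k(W+L). The paper's point is stronger than the truncation: the true constrained optimum lies below the Merton line even at wealth levels where the cap is slack.

```python
# Illustrative sketch only: mu, sigma, A, k, L below are hypothetical values.

def unconstrained_investment(W, mu=0.06, sigma=0.2, A=2.0):
    """Merton allocation X = (mu / (A * sigma^2)) * W for a CRRA agent."""
    return mu / (A * sigma**2) * W

def capped_investment(W, k=0.5, L=10.0, mu=0.06, sigma=0.2, A=2.0):
    """Naive truncation of the Merton rule at the borrowing limit k(W + L)."""
    return min(unconstrained_investment(W, mu, sigma, A), k * (W + L))

for W in (10.0, 100.0, 1000.0):
    print(W, unconstrained_investment(W), capped_investment(W))
```

For small W the cap k(W+L) is slack and the two rules coincide; for large W the cap binds. The constrained policy characterized in Section IV is more conservative than even this truncated rule in the slack region.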

In the third part, we present an application of our analysis to an optimal growth problem in a

Robinson Crusoe economy with two linear technologies, one riskless and one risky. We assume that

the investment in the riskless technology must be non-negative. We show that this constraint causes

the average rate of return on capital in the economy to fall even during periods when the constraint

is not binding.

The purpose of the paper is twofold. First, we want to examine how borrowing constraints affect

consumption and portfolio decisions. Second, we want to illustrate how dynamic programming can

be used rigorously to obtain qualitative properties of optimal policies even when explicit solutions fail

to exist.

The powerful theory of viscosity solutions is used in this paper. The value function is first

characterized as the unique (constrained) viscosity solution of the Bellman equation. The

characterization of the value function as a viscosity solution is imperative because the associated

Bellman equation, which turns out to be fully nonlinear, might be degenerate (due to the constraints)

and such equations do not have in general smooth solutions. The unique characterization together


with the stability properties of viscosity solutions enable us to approximate the value function by a sequence of smooth solutions of the regularized Bellman equation and to identify the smooth limit function with the value function. Finally, the characterization of the value function as a constrained viscosity solution is natural due to the presence of the (state) constraint of non-negative wealth (W_t ≥ 0 a.e. ∀t ≥ 0).

The methodology employed herein can be applied to several extensions and variations of the infinite

horizon problem. In particular, the case of finite horizon can be analyzed very similarly, as is discussed in Section V. Also, if we allow investing in more than one stock, the Bellman equation, although more complicated, can still be treated with the above method.

In a general setting the above methodology can be used to analyze a very wide range of

consumption/investment problems in the presence of market imperfections. More precisely, results from the theory of viscosity solutions can be used to provide (i) analytic results for the value function of problems in imperfect markets (see, for example, Fleming and Zariphopoulou (1991), Zariphopoulou (1993), Duffie and Zariphopoulou (1993), Duffie, Fleming and Zariphopoulou (1993)),

as well as (ii) convergence of a large class of numerical schemes for the value function and the optimal

policies when closed form solutions cannot be obtained (see, for example, Fitzpatrick and Fleming

(1990), Tourin and Zariphopoulou (1993)).

The paper is organized as follows. The general model is presented in Section II. Section III deals with

the existence result. In Section IV, we analyze the case of a CRRA investor. Section V presents the

application to an optimal growth problem. Section VI contains concluding remarks.

II. THE MODEL

2.1. A Consumption-Portfolio Choice Problem

We consider the investment-consumption problem of an infinitely lived agent who maximizes the

expectation of a time-additive utility function. The agent can distribute his funds between two assets.

One asset is riskless with rate of return r (r ≥ 0). The other asset is a stock with price P_t. We assume that the stock price obeys the equation

dP_t / P_t = (r + μ) dt + σ db_t,   (2.1)

where the excess rate of return μ and the volatility σ are positive constants. The process b_t is a standard Brownian Motion on the underlying probability space (Ω, F, P). We assume that there are no transaction costs involved in buying or selling these financial assets.

The assumption that the opportunity set is constant is necessary for the tractability of the model. It is, however, possible to allow the market coefficients r, μ and σ to be stochastic processes themselves. Unfortunately, this would increase dramatically the dimensionality of the problem. For example, if r_t, μ_t, and σ_t are diffusion processes, the value function will depend on four state variables: W, r, μ and σ.

The controls of the investor are the dollar amount X_t invested in the risky asset and the consumption rate C_t. His total current wealth evolves according to the state equation

dW_t = (rW_t - C_t)dt + μX_t dt + σX_t db_t,   for t ≥ 0 and W₀ = W.   (2.2)
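For intuition about the state equation (2.2), it can be simulated with a simple Euler-Maruyama discretization. The sketch below is our own illustration, not part of the paper: the feedback policy functions and parameter values are hypothetical, and the non-negative-wealth constraint is enforced crudely by clamping wealth at zero.

```python
import numpy as np

def simulate_wealth(W0, policy_X, policy_C, r=0.03, mu=0.06, sigma=0.2,
                    T=10.0, n_steps=1000, seed=0):
    """Euler-Maruyama discretization of dW = (rW - C)dt + mu*X dt + sigma*X db."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    W = W0
    path = [W]
    for _ in range(n_steps):
        X = policy_X(W)                    # dollars invested in the risky asset
        C = policy_C(W)                    # consumption rate
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        W = W + (r * W - C) * dt + mu * X * dt + sigma * X * dB
        W = max(W, 0.0)                    # crude enforcement of W >= 0
        path.append(W)
    return np.array(path)

# Hypothetical feedback policy: capped risky allocation, consume 4% of wealth.
path = simulate_wealth(100.0,
                       policy_X=lambda W: min(0.75 * W, 0.5 * (W + 10.0)),
                       policy_C=lambda W: 0.04 * W)
```

Any admissible policy can be plugged in through `policy_X` and `policy_C`; the returned path respects the two constraints of the model by construction.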

The agent faces two constraints. First, his wealth must stay non-negative at any trading time. (See Dybvig and Huang (1988) on the importance of this non-negativity constraint.) Second,


the amount X_t must never exceed the (exogenously given) amount X̄(W_t).

In this paper, we will consider the case X̄(W) = k(W + L), where k and L are non-negative constants. This formulation is general enough to encompass several interesting examples. For instance, if the investor has access to a fixed credit line L, then X̄(W_t) = W_t + L. If the investor needs only to put down a certain fraction f of his stock purchases and can borrow for the remaining fraction (1 - f) at the risk-free rate r, then X̄(W_t) = (1/f)W_t. This case is treated only for the sake of exposition, since the methodology employed here can be used for the general case of trading constraints X_t ≤ X̄(W_t) with X̄ being a concave function of wealth (see Zariphopoulou (1993)).
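The two examples map into the pair (k, L) as follows. This tiny helper is our own illustration with hypothetical numbers, not part of the paper:

```python
def borrowing_cap(W, k, L):
    """Exogenous limit Xbar(W) = k(W + L) on the dollar amount in the risky asset."""
    return k * (W + L)

# Fixed credit line L:  k = 1, so Xbar(W) = W + L.
credit_line_cap = borrowing_cap(100.0, k=1.0, L=20.0)    # -> 120.0
# Margin requirement f: k = 1/f, L = 0, so Xbar(W) = W / f.
margin_cap = borrowing_cap(100.0, k=1.0 / 0.5, L=0.0)    # -> 200.0
```

With f = 1 (no borrowing allowed against stock purchases) the margin case collapses to X̄(W) = W, i.e., the agent can never lever up.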

The control (X_t, C_t) is admissible if

i) (X_t, C_t) is an F_t-progressively measurable process, where F_t = σ(b_s; 0 ≤ s ≤ t) is the σ-algebra generated by the Brownian Motion b_t;

ii) C_t ≥ 0 a.e. (∀t ≥ 0);

iii) (X_t, C_t) satisfy the integrability conditions ∫₀ᵗ X_s² ds < +∞ and ∫₀ᵗ C_s ds < +∞ a.s. (∀t ≥ 0).

We denote by A the set of admissible policies and define the value function J(W) as the supremum, over A, of the expected discounted utility E[∫₀^∞ e^{-βt} u(C_t) dt]. From Lemma 2.1, J(W) is finite for every W ≥ 0 provided that J*(W), the value function of the unconstrained problem, is finite. Indeed, J(W) is bounded from below by u(rW)/β, since the control (X_t = 0; C_t = rW_t) is admissible, and bounded from above by J*(W). The explicit derivation of J*(W) can be found in Karatzas et al. (1986). For the purpose of this paper, we shall assume that J*(W) < +∞, ∀W ≥ 0.

The necessary conditions on the discount factor, the utility function and the market coefficients can be found in Karatzas et al. (1986). They are also discussed in Part IV of this paper.

A function is called C^k(Ω) if its first k derivatives exist and are continuous functions in Ω.

Technically speaking, from Lemma 2.1, assuming that J*(W) is finite for every W is enough to guarantee the finiteness of J(W).


The following lemma describes elementary properties of J(·).

Lemma 2.2: (i) J(·) is strictly concave.

(ii) J(·) is strictly increasing.

(iii) J(·) is continuous on [0, ∞) with J(0) = u(0)/β.

(iv) lim_{W↓0} J'(W) = +∞.

Proof: See Proposition 2.1 of Zariphopoulou (1993). ■

2.3. The Bellman Equation

In this section, we completely characterize the value function and we derive the optimal policies. This

is done using dynamic programming which leads to a fully nonlinear, second order differential

equation (2.6) below, known as the Hamilton- J acobi-Bellman (HJB) equation. In Theorem 2.1, we

show that the value function is the unique C^(0, + oo) solution of the (HJB) equation. This will enable

us to find the optimal policies from the first order conditions in the (HJB) equation. They turn out

to be feedback functions of wealth and their optimality is established via a verification theorem (see

Fleming and Rishel (1975)).

Theorem 2.1: The value function J is the unique C²(0, +∞) ∩ C[0, ∞) solution of the Bellman equation:

βJ(W) = max_{0 ≤ X ≤ k(W+L)} [ (1/2)σ²X²J''(W) + μXJ'(W) ] + max_{C ≥ 0} [ u(C) - CJ'(W) ] + rWJ'(W),   W > 0,   (2.6)

with J(0) = u(0)/β.

Theorem 2.1 is the central result of our paper. The proof of this result is presented in some detail in Section III.
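As a concrete check of what equation (2.6) asserts, one can verify numerically that, in a region where the borrowing constraint is slack, the classical Merton CRRA value function makes (2.6) hold with equality. The snippet below is our own sketch, not taken from the paper: it assumes u(C) = C^{1-A}/(1-A) with discount rate β, and all parameter values are hypothetical.

```python
# Hypothetical market and preference parameters.
r, mu, sigma, A, beta = 0.03, 0.06, 0.2, 2.0, 0.10

# Merton's unconstrained CRRA solution: consume C = nu*W with
#   nu = (beta - (1 - A)(r + mu^2 / (2 A sigma^2))) / A,
# and value function J(W) = nu^(-A) W^(1-A) / (1-A).
nu = (beta - (1 - A) * (r + mu**2 / (2 * A * sigma**2))) / A

def J(W):  return nu**(-A) * W**(1 - A) / (1 - A)
def J1(W): return nu**(-A) * W**(-A)            # J'(W)
def J2(W): return -A * nu**(-A) * W**(-A - 1)   # J''(W)
def u(C):  return C**(1 - A) / (1 - A)

def hjb_residual(W):
    X = (mu / sigma**2) * (-J1(W) / J2(W))   # interior maximizer in X
    C = J1(W)**(-1.0 / A)                    # first order condition u'(C) = J'(W)
    rhs = (0.5 * sigma**2 * X**2 * J2(W) + mu * X * J1(W)
           + u(C) - C * J1(W) + r * W * J1(W))
    return beta * J(W) - rhs

print([hjb_residual(W) for W in (1.0, 10.0, 100.0)])   # all close to zero
```

The residual vanishes up to floating-point error, which is exactly the statement that the closed-form J solves (2.6) wherever the cap k(W+L) does not bind.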

Next, using the first order conditions in (2.6) and the regularity of the value function, we can derive the optimal policies in feedback form.


Theorem 2.2: The optimal policy (C*_t, X*_t) is given in the feedback form C*_t = C*(W*_t) and X*_t = X*(W*_t), where C*(·), X*(·) are given by

C*(W) = (u')^{-1}(J'(W))   and   X*(W) = min{ (μ/σ²)(-J'(W)/J''(W)), k(W+L) },   (2.7)

where W*_t is the (optimal) wealth trajectory given by (2.2) when C* and X* are used.
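In practice the feedback rules (2.7) can be evaluated even when J is known only numerically on a wealth grid, by finite-differencing J. The sketch below is our own illustration: it uses the CRRA inverse marginal utility (u')^{-1}(y) = y^{-1/A}, and, as a stand-in for a numerically computed value function, plugs in a closed-form CRRA candidate; the grid and all parameters are hypothetical.

```python
import numpy as np

mu, sigma, A, k, L = 0.06, 0.2, 2.0, 0.5, 10.0
nu = 0.07625   # consumption-to-wealth ratio of the hypothetical CRRA candidate

W = np.linspace(1.0, 200.0, 4001)
J = nu**(-A) * W**(1 - A) / (1 - A)   # stand-in for a numerically computed J

J1 = np.gradient(J, W)    # finite-difference approximation of J'
J2 = np.gradient(J1, W)   # finite-difference approximation of J''

C_star = J1**(-1.0 / A)                             # (u')^{-1}(J'(W)) for CRRA
X_star = np.minimum((mu / sigma**2) * (-J1 / J2),   # interior first order condition
                    k * (W + L))                    # borrowing cap k(W + L)
```

Here the interior allocation (μ/σ²)(-J'/J'') reduces to (μ/(Aσ²))W, so the cap binds for W large enough that (μ/(Aσ²))W exceeds k(W + L).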

We conclude this section by stating a result which will play a crucial role in the sequel. For the definition of

viscosity solutions and their stability properties see Appendix A.

Theorem 2.3: The value function J is the unique (constrained) viscosity solution, in the class of concave functions, of the (HJB) equation (2.6).

The proof is rather lengthy and technical and it follows along the lines of Theorem 3.1 in Zariphopoulou

(1993) and, for the sake of presentation, it is omitted. General uniqueness results can be found in Ishii-Lions

(1991) although they cannot be directly applied here because the control set is not compact. For a general

overview of existence and uniqueness results we refer the reader to the "User's Guide" by Crandall, Ishii and Lions (1992) and to the book "Controlled Markov Processes and Viscosity Solutions" by Fleming and Soner (1993).

III. SMOOTH (C²) SOLUTIONS OF THE (HJB) EQUATION

In this section we present the proof of Theorem 2.1. Before we start the details of the proof, we discuss the

main ideas. The (HJB) equation (2.6) is second order, fully nonlinear and (possibly) degenerate. The degeneracy comes from the fact that the second order term (1/2)σ²X²J''(W) may become zero. Therefore (2.6) is not uniformly elliptic, and we know that degenerate equations do not have, in general, smooth solutions (see,

for example, Krylov (1987)). Our goal is first to exclude this possibility by showing that the optimal X is bounded away from zero on any interval [a,b] with 0 < a < b. Since the value function is concave, its first and second derivatives exist almost everywhere. Without loss of generality, we may assume that J'(a) and J'(b) exist.

(A one-dimensional differential equation is said to be uniformly elliptic (non-degenerate) if the coefficient (1/2)σ²X² of the second-order derivative in the Bellman equation (2.6) satisfies (1/2)σ²X² ≥ θ for some θ > 0.)

We want to show that the optimal X is bounded away from zero in [a,b]. Formally, the optimal X is either k(W+L) or (μ/σ²)(-J'(W)/J''(W)). In the second case we want to get a positive lower bound for (μ/σ²)(-J'(W)/J''(W)). Since J'(W) is non-increasing and strictly positive, it is bounded from below by J'(b) > 0. Therefore, it suffices to find a lower bound for J''(W) in the interval [a,b]. Since we do not know how regular J(·) is, we first approximate J(·) by a sequence of smooth functions J^ε which are solutions of a suitably regularized equation.

To this end, we consider the following regularized problem: Let b¹_t be a Brownian Motion independent of b_t. The policy (X_t, C_t) is admissible if:

i) (X_t, C_t) is an F_t-progressively measurable process, where F_t = σ(b_s, b¹_s; 0 ≤ s ≤ t);

ii) C_t ≥ 0 a.e. (∀t ≥ 0);

iii) (X_t, C_t) satisfy ∫₀ᵗ X_s² ds < +∞ and ∫₀ᵗ C_s ds < +∞ a.s. (∀t ≥ 0); and

iv) 0 ≤ X_t ≤ k(W_t + L) a.e. (∀t ≥ 0).

We denote by A^ε the set of admissible policies and define the value function J^ε(·) to be

J^ε(W) = sup_{A^ε} E[ ∫₀^∞ e^{-βt} u(C_t) dt ].   (3.2)

The following two lemmas provide regularity properties for the value function J^ε.


Lemma 3.1: The value function J^ε is strictly concave and strictly increasing on [0, +∞).

Proof: See Theorem 5.1 of Zariphopoulou (1993).

Lemma 3.2: The value function J^ε(W) is the unique smooth solution of the regularized Bellman equation

βJ^ε(W) = max_{0 ≤ X ≤ k(W+L)} [ (1/2)(σ²X² + ε²)J^ε''(W) + μXJ^ε'(W) ] + max_{C ≥ 0} [ u(C) - CJ^ε'(W) ] + rWJ^ε'(W),   W > 0,   (3.3)

with J^ε(0) = u(0)/β.

Proof: Note that equation (3.3) is uniformly elliptic and see Krylov (1987). ■

The next lemma says that the above sequence J^ε converges to J, as ε goes to zero.

Lemma 3.3: J^ε converges to J locally uniformly as ε goes to 0.

Proof: See Appendix B. ■

We next prove that the first and second derivatives of J^ε are bounded away from zero uniformly in ε. This implies in particular that the optimal X in equation (3.3) is bounded away from zero.

Lemma 3.4: In any interval [a,b], there exist two positive constants R₁ = R₁([a,b]) and R₂ = R₂([a,b]), independent of ε, such that, for W ∈ [a,b],

(i) J^ε'(W) > R₁,   (3.4)

(ii) |J^ε''(W)| ≤ R₂.   (3.5)

^ V / o '

M,l,T UBRARIES - DEWEY

^

4d 2-^

no. >

(C'0 -

qc^

Zfeoe^

WORKING PAPER

ALFRED P. SLOAN SCHOOL OF MANAGEMENT

OPTIMAL COMSUMPTION AND PORTFOLIO CHOICE

WITH BORROWING CONSTRAINTS

Jean-Luc Vila*

and

Thaleia Zariphopoulou^

WP#3650-94

January 1994

MASSACHUSETTS

INSTITUTE OF TECHNOLOGY

50 MEMORIAL DRIVE

CAMBRIDGE, MASSACHUSETTS 02139

OPTIMAL COMSUMPTION AND PORTFOLIO CHOICE

WITH BORROWING CONSTRAINTS

Jean-Luc Vila*

and

Thaleia Zariphopoulou**

WP#3650-94 January 1994

* Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA

** Department of Mathematics and Business School. University of Wisconsin. Madison

OPTIMAL CONSUMPTION AND PORTFOLIO CHOICE

WITH BORROWING CONSTRAINTS

Jean-Luc Vila

Sloan School of Management, Massachusetts Institute of Technology

and

Thaleia Zariphopoulou

Department of Mathematics and Business School, University of Wisconsin, Madison

First Draft: August 1989

Current Version: January 1994

ABSTRACT

In this paper, we use stochastic dynamic programming to study the intertemporal consumption and

portfolio choice of an infinitely lived agent who faces a constant opportunity set and a borrowing

constraint. We show that, under general assumptions on the agent's utility function, optimal policies

exist and can be expressed as feedback functions of current wealth. We describe these policies in

details, when the agent's utility function exhibits constant relative risk aversion.

* We wish to thank Minh Chau, Neil Pearson, the associate editor and an anonymous referee for

their comments. Jean-Luc Vila wishes to acknowledge financial support from the International

Financial Services Research Center at the Sloan School of Management. Thaleia Zariphopoulou

acknowledges financial support from the National Science Foundation Grant DMS 9204866. Errors

are ours.

I. INTRODUCTION

In this paper, we consider the Portfolio-Qansumption problem of an infinitely lived agent in the

presence of a constant opportunity set and borrowing constraints. Using the method of Dynamic

Programming, we show that, under general assumptions on the agent's utility function, optimal

policies do exist and can be expressed as feedback functions of the investor's current wealth. Given

this existence result, we are able to describe how borrowing constraints affect the consumption and

investment decisions when the agent's relative risk aversion is constant.

Stochastic dynamic control has first been used by Merton (1969, 1971) to obtain an explicit solution

to the Portfolio-Consumption problem when the investment opportunity set is constant, the agent's

utility function belongs to the HARA class and when trading is unrestricted. More recently, Karatzas

et al. (1986) generalized Merton's results and obtained closed form solutions for general utility

functions.

Instead of using stochastic control methods, the so-called martingale approach has been alternatively

used by Pliska (1986), Cox and Huang (1989 & 1991), Karatzas et al. (1987) to study intertemporal

consumption and portfolio policies when markets are complete, which was also the case in the earlier

dynamic programming literature. The martingale technology consists in describing the feasible

consumption set by a single intertemporal budget equation and then solving the static consumption

problem in an infinite dimension Arrow-Debreu economy. The martingale approach is appealing to

economists for two reasons. First, it can be used to solve for the asset demand under very general

assumptions about the stochastic investment opportunity set. Second, and consequently, it can be

applied in a general equilibrium setting to solve for the equilibrium investment opportunity set (see

Duffie and Huang (1985), Huang (1987)).

4

Unfortunately, in the presence of market imperfections such as market incompleteness, short sale

constraints or transaction costs, the martingale approach looses much of its tractability'.

Consequently, many authors used dynamic programming to analyze the impact of these imperfections

on asset demand (see for example Constantinides (1986), Duffie and Zariphopoulou (1993),

Grossman and Laroque (1988), Grossman and Vila (1992), Fleming et al. (1989), Fleming and

Zariphopoulou (1991), Zariphopoulou (1993)).

The present paper extends this line of research to the dynamic problem of an infinitely lived agent

who faces two constraints. The first constraint is a limitation on his ability to borrow for the purpose

of investing in a risky asset, i.e, the market value of his investments in the risky asset, X,, must be less

than a exogenous function X(W,) of his wealth W^. In this paper we concentrate on the case

X(W,)=k(W,+L) where k and L are non-negative constants. The second constraint is the requirement

that the investor's wealth stays non-negative at all times, i.e., W,^0. Using the dynamic programming

methodology, we associate to the stochastic control problem a nonlinear partial differential equation,

namely the Bellman equation. We show that if the utility function satisfies some general regularity

conditions, the Bellman equation has a unique solution which is twice continuously differentiable.

Using a verification theorem, this solution turns out to be the indirect utility function (the so called

value function) which is therefore a smooth function of the current wealth, W,. Moreover, the optimal

consumption, Q and the optimal investment X, are obtained in feedback form through the first order

conditions from the Bellman equation. Because of the smoothness of the value function, the optimal

policies are respectively continuously differentiable (Q) and continuous (X,) functions of the wealth.

In the second part of the paper we study the particular case of a Constant Relative Risk Aversion

He and Pearson (1991) use a duality approach to apply the martingale technology to incomplete markets with short sale constraints.

Although their methodology carries a lot of insight, they do not solve explicitly for the optimal policies.

5

agent. In particular, we describe in detail the optimal policies and we compare them to the ones of

the unconstrained problem. In the absence of the borrowing constraint, the optimal investment is

X,=(^/Ao') W, where \i is the excess rate of return on the risky asset over the risk-free rate, o is the

volatility of the rate of return on the risky asset and A is the coefficient of relative risk aversion. We

show that the borrowing constraint causes the agent to be more conservative (i.e. to invest less in the

risky asset) even at points where the constraint is not binding. This result has to be contrasted with

a similar result in Grossman and Vila (1992) who show that, if the agent consumes only his final

wealth the borrowing constraint will make him more (less) conservative if the relative risk aversion,

A, is smaller (greater) than 1.

In the third part, we present an application of our analysis to an optimal growth problem in a

Robinson Crusoe economy with two linear technologies, one riskless and one risky. We assume that

the investment in the riskless technology must be non-negative. We show that this constraint causes

the average rate of return on capital in the economy to fall even during periods when the constraint

is not binding.

The purpose of the paper is twofold. First, we want to examine how borrowing constraints affect

consumption and portfolio decisions. Second, we want to illustrate how dynamic programming can

be used rigorously to obtain qualitative properties of optimal policies even when explicit solutions fail

to exist.

The powerful theory of viscosity solutions is used in this paper. The value function is first

characterized as the unique (constrained) viscosity solution of the Bellman equation. The

characterization of the value function as a viscosity solution is imperative because the associated

Bellman equation, which turns out to be fully nonlinear, might be degenerate (due to the constraints)

and such equations do not have in general smooth solutions. The unique characterization together

6

with the stability properties of viscosity solutions enable us to approximate the value function by a

sequence of smooth solutions of the regularized Bellman equation and identify the smooth limit-

function with the value function. Finally, the characterization of the value function as a constrained

viscosity solution is natural due to the presence of the (state) constraint of non-negative wealth (W,iO

a.e. Vt^O).

The methodology employed herein can be applied to several extensions and variations of the infinite

horizon problem. In particular the case of finite -horizon can be analyzed very similarly as it is

discussed in Section V. Also, if we allow investing to more than one stock, the Bellman equation,

although more complicated, can still be treated with the above method.

In a general setting the above methodology can be used to analyze a very wide range of

consumption/investment problems in the presence of market imperfections. More precisely, results

from the theory of viscosity solutions can be used to provide (i) analytic results for the value function

of problems in imperfect markets, for example, (see Fleming and Zariphopoulou (1991),

Zariphopoulou (1993)) Duffie and Zariphopoulou (1993), Duffie, Fleming and Zariphopoulou ( 1993),

as well as (ii) convergence of a large class of numerical schemes for the value function and the optimal

policies when closed form solutions cannot be obtained (see, for example, Fitzpatrick and Fleming

(1990), Tourin and Zariphopoulou (1993)).

The paper is organized as follows. The general model is presented in Section II. Section III deals with

the existence result. In Section IV, we analyze the case of a CRRA investor. Section V presents the

application to an optimal growth problem. Section VI contains concluding remarks.

II. THE MODEL

2.1. A Consumption-Portfolio Choice Problem

We consider the investment-consumption problem of an infinitely lived agent who maximizes the

expectation of a time-additive utility function. The agent can distribute his funds between two assets.

One asset is riskless with rate of return r (r2:0). The other asset is a stock with value P;. We assume

that the stock price obeys the equation

-^' = ir*\i)dt + adb (2.1)

P.

where the excess rate of return \i and the volatility a are positive constants. The process bt is a

standard Brownian Motion on the underlying probability space (Q,,^,CP). We assume that there are no

transaction costs involved in buying or selling these financial assets.

The assumption that the opportunity set is constant, is necessary for the tractability of the model. It

is, however, possible to allow the market coefficients r, \i and o to be stochastic processes themselves.

Unfortunately, this would increase dramatically the dimensionality of the problem. For example, if

râ€ž \ii, and o, are diffusion processes, the value function will depend on four state variables W, r, \i and

o.

The controls of the investor are the dollar amount X, invested in the risky asset and the consumption

rate C,. His total current wealth evolves according to the state equation

dW, = (rW,-C,)dt -(- iiX,dt -(- oX,dbâ€ž for t^O and Wo = W. (2.2)

The agent faces two constraints. First, his wealth must stay non-negative at any trading time.^ Second,

â– ^ee Dytjvig and Huang (1988) on the importance of this non-negativity constraint.

8

the optimal amount X, must never exceed the (exogenously given) amount X(W,).

In this paper, we will consider the case X(W) = k(W+L) where k and L are non-negative constants.

This formulation is general enough to encompass several interesting examples. For instance, if the

investor has access to a fixed credit line L, then X(W,)=W,+L. If the investor needs only to put

down a certain fraction f of his stock purchases and can borrow for the remaining fraction (1-f) at

the risk free rate r, then X(W,) = (l/f)W,. This case is treated only for the sake of exposition since the

methodology employed here can be used for the general case of trading constraints X, ^ X(W,) with

X being a concave function of wealth (see Zariphopoulou (1993).)

The control (X,, C,) is admissible if

i) (Xâ€ž Q) is a ^-progressively measurable process where ^,=o(b,; O^s^t) is the o-algebra

generated by the Brownian Motion b,.

ii) Q^Oa.e. (Vt^O)

ill) (Xt,Cj) satisfy the integrability conditions

JXMs0 provided that J"(W) is

finite. Indeed, J(W) is bounded from below by ju(rW) since the control (X,=0;C,=rWj) is admissible

and bounded from above by J'(W). The explicit derivation of J"(W) can be found in Karatzas et al.

(1986). For the purpose of this paper, we shall assume that J"(W) < +oo\ VW^iO.

The necessary conditions on the discount factor, the utility function and the market coefficients can be found in Karatzas et al. (1986). They are also discussed in Part IV of this paper.
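For CRRA utility u(C) = C^{1−γ}/(1−γ), the unconstrained value function has the familiar Merton closed form, which makes the sandwich (1/β)u(rW) ≤ J(W) ≤ Ĵ(W) checkable at its two computable ends. The sketch below is illustrative: the parameter values are assumptions, and finiteness of Ĵ requires the Merton consumption rate ν to be positive.

```python
def u(C, gamma):
    """CRRA utility, gamma != 1."""
    return C ** (1.0 - gamma) / (1.0 - gamma)

def J_hat(W, r, mu, sigma, beta, gamma):
    """Unconstrained (Merton) value function for CRRA utility; here mu is
    the *excess* return of the risky asset, as in the state equation (2.2)."""
    nu = (beta - (1.0 - gamma) * (r + mu ** 2 / (2.0 * gamma * sigma ** 2))) / gamma
    assert nu > 0, "J_hat is finite only when the consumption rate nu is positive"
    return nu ** (-gamma) * W ** (1.0 - gamma) / (1.0 - gamma)

r, mu, sigma, beta, gamma, W = 0.03, 0.05, 0.3, 0.1, 2.0, 100.0
lower = u(r * W, gamma) / beta       # value of the admissible policy (X=0, C=rW)
upper = J_hat(W, r, mu, sigma, beta, gamma)
```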

A function is called C^k(Ω) if its first k derivatives exist and are continuous functions in Ω.

Technically speaking, from Lemma 2.1, assuming that Ĵ(W) is finite for every W is enough to guarantee the finiteness of J(W).


The following lemma describes elementary properties of J(·).

Lemma 2.2: (i) J(·) is strictly concave.

(ii) J(·) is strictly increasing.

(iii) J(·) is continuous on [0, ∞) with J(0) = (1/β)u(0).

(iv) lim_{W↓0} J'(W) = +∞.

Proof: See Proposition 2.1 of Zariphopoulou (1993). ■

2.3. The Bellman Equation

In this section, we completely characterize the value function and we derive the optimal policies. This is done using dynamic programming, which leads to a fully nonlinear, second order differential equation (2.6) below, known as the Hamilton-Jacobi-Bellman (HJB) equation. In Theorem 2.1, we show that the value function is the unique C²(0, +∞) solution of the (HJB) equation. This will enable us to find the optimal policies from the first order conditions in the (HJB) equation. They turn out to be feedback functions of wealth, and their optimality is established via a verification theorem (see Fleming and Rishel (1975)).

Theorem 2.1: The value function J is the unique C²(0, +∞) ∩ C[0, ∞) solution of the Bellman equation:

$$\beta J(W) = \max_{0 \le X \le k(W+L)} \left[ \tfrac{1}{2}\sigma^2 X^2 J''(W) + \mu X J'(W) \right] + \max_{C \ge 0} \left[ u(C) - C J'(W) \right] + rW J'(W), \quad W > 0, \tag{2.6}$$

with J(0) = (1/β)u(0).

Theorem 2.1 is the central result of our paper. The proof of this result is presented in some detail in Section III.
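As an informal sanity check on the Bellman equation, one can verify numerically that the closed-form CRRA value function of the unconstrained problem satisfies (2.6) whenever the constraint X ≤ k(W+L) is slack. The sketch below evaluates both maxima at their first-order-condition maximizers; the parameter values, and the use of β for the discount rate, are illustrative assumptions, not taken from the paper.

```python
def hjb_residual(W, r, mu, sigma, beta, gamma, k, L):
    """Residual beta*J - RHS of (2.6) for the Merton CRRA candidate
    J(W) = A * W^(1-gamma) / (1-gamma), with mu the excess return."""
    nu = (beta - (1 - gamma) * (r + mu**2 / (2 * gamma * sigma**2))) / gamma
    A = nu ** (-gamma)
    J = A * W ** (1 - gamma) / (1 - gamma)
    Jp = A * W ** (-gamma)                   # J'(W)
    Jpp = -gamma * A * W ** (-gamma - 1.0)   # J''(W)
    # Portfolio maximizer, capped at the trading constraint k(W+L):
    X = min((mu / sigma**2) * (-Jp / Jpp), k * (W + L))
    # Consumption maximizer C = (u')^{-1}(J'(W)) = J'(W)^(-1/gamma):
    C = Jp ** (-1.0 / gamma)
    rhs = (0.5 * sigma**2 * X**2 * Jpp + mu * X * Jp
           + C ** (1 - gamma) / (1 - gamma) - C * Jp
           + r * W * Jp)
    return beta * J - rhs

# With k large the constraint is slack and the residual should vanish.
res = hjb_residual(W=100.0, r=0.03, mu=0.05, sigma=0.3,
                   beta=0.1, gamma=2.0, k=10.0, L=0.0)
```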

Next, using the first order conditions in (2.6) and the regularity of the value function, we can derive the optimal policies in feedback form.


Theorem 2.2: The optimal policy (C*_t, X*_t) is given in the feedback form C*_t = C*(W*_t) and X*_t = X*(W*_t), where C*(·), X*(·) are given by

$$C^*(W) = (u')^{-1}(J'(W)) \quad \text{and} \quad X^*(W) = \min\left\{ \frac{\mu}{\sigma^2}\,\frac{-J'(W)}{J''(W)},\; k(W+L) \right\} \tag{2.7}$$

where W*_t is the (optimal) wealth trajectory given by (2.2) with C* and X* being used.
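For CRRA utility the ratio −J'(W)/J''(W) equals W/γ in the unconstrained benchmark, so the feedback rules (2.7) become fully explicit there. The sketch below evaluates them on that benchmark; this is an approximation for illustration only, since when the cap binds the true constrained value function J differs from the Merton one, and all parameter values are assumptions:

```python
def feedback_policies(W, r, mu, sigma, beta, gamma, k, L):
    """Feedback rules (2.7) evaluated on the *unconstrained* CRRA value
    function (illustrative; the true constrained J differs when the cap binds)."""
    nu = (beta - (1 - gamma) * (r + mu**2 / (2 * gamma * sigma**2))) / gamma
    C = nu * W                                # (u')^{-1}(J'(W)) for CRRA
    X = min((mu / (gamma * sigma**2)) * W,    # Merton demand: -J'/J'' = W/gamma
            k * (W + L))                      # trading constraint k(W+L)
    return C, X

# With these (illustrative) parameters the cap binds at W = 100 but not at W = 10.
C_hi, X_hi = feedback_policies(100.0, r=0.03, mu=0.05, sigma=0.3,
                               beta=0.1, gamma=2.0, k=0.2, L=10.0)
C_lo, X_lo = feedback_policies(10.0, r=0.03, mu=0.05, sigma=0.3,
                               beta=0.1, gamma=2.0, k=0.2, L=10.0)
```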

We conclude this section by stating a result which will play a crucial role in the sequel. For the definition of

viscosity solutions and their stability properties see Appendix A.

Theorem 2.3: The value function J is the unique (constrained) viscosity solution, in the class of concave functions, of the (HJB) equation (2.6).

The proof is rather lengthy and technical; it follows along the lines of Theorem 3.1 in Zariphopoulou (1993) and, for the sake of presentation, it is omitted. General uniqueness results can be found in Ishii-Lions (1991), although they cannot be directly applied here because the control set is not compact. For a general overview of existence and uniqueness results, we refer the reader to the "User's Guide" by Crandall, Ishii and Lions (1992) and to the book "Controlled Markov Processes and Viscosity Solutions" by Fleming and Soner (1993).

III. SMOOTH (C²) SOLUTIONS OF THE (HJB) EQUATION

In this section we present the proof of Theorem 2.1. Before we start the details of the proof, we discuss the main ideas. The (HJB) equation (2.6) is second order, fully nonlinear and (possibly) degenerate. The degeneracy comes from the fact that the second order term ½σ²X²J''(W) may become zero. Therefore (2.6) is not uniformly elliptic, and we know that degenerate equations do not have, in general, smooth solutions (see, for example, Krylov (1987)). Our goal is first to exclude this possibility by showing that the optimal X is bounded away from zero on any interval [a,b] with 0 < a < b. Since the value function is concave, its first and second derivatives exist almost everywhere. Without loss of generality, we may assume that J'(a) and J'(b) exist.

A one-dimensional differential equation is said to be uniformly elliptic (non-degenerate) if the coefficient of the second-order derivative, here ½σ²X² in the Bellman equation (2.6), is bounded below by a positive constant.

We want to show that the optimal X is bounded away from zero in [a,b]. Formally, the optimal X is either k(W+L) or (μ/σ²)(−J'(W)/J''(W)). In the second case we want to get a positive lower bound for (μ/σ²)(−J'(W)/J''(W)). Since J'(W) is non-increasing and strictly positive, it is bounded from below by J'(b) > 0. Therefore, it suffices to find a lower bound for J''(W) in the interval [a,b]. Since we do not know how regular J(·) is, we first approximate J(·) by a sequence of smooth functions J^ε which are solutions of a suitably regularized equation.

To this end, we consider the following regularized problem. Let b¹_t be a Brownian motion independent of b_t. The policy (X_t, C_t) is admissible if:

i) (X_t, C_t) is an F̂_t-progressively measurable process, where F̂_t = σ(b_s, b¹_s; 0 ≤ s ≤ t);

ii) C_t ≥ 0 a.e. (∀ t ≥ 0);

iii) (X_t, C_t) satisfy ∫₀ᵗ X_s² ds < +∞ and ∫₀ᵗ C_s ds < +∞ a.e. (∀ t ≥ 0); and

iv) 0 ≤ X_t ≤ k(W_t + L) a.e. (∀ t ≥ 0).

We denote by A^ε the set of admissible policies and define the value function J^ε(·) to be

$$J^{\varepsilon}(W) = \sup_{\mathcal{A}^{\varepsilon}} E\left[\int_0^{+\infty} e^{-\beta t}\, u(C_t)\, dt\right]. \tag{3.2}$$

The following two lemmas provide regularity properties for the value function J^ε.


Lemma 3.1: The value function J^ε is strictly concave and strictly increasing on [0, +∞).

Proof: See Theorem 5.1 of Zariphopoulou (1993). ■

Lemma 3.2: The value function J^ε(W) is the unique smooth solution of the regularized Bellman equation

$$\beta J^{\varepsilon}(W) = \max_{0 \le X \le k(W+L)} \left[ \tfrac{1}{2}\sigma^2 (X^2 + \varepsilon^2)\, (J^{\varepsilon})''(W) + \mu X (J^{\varepsilon})'(W) \right] + \max_{C \ge 0} \left[ u(C) - C (J^{\varepsilon})'(W) \right] + rW (J^{\varepsilon})'(W), \quad W > 0, \tag{3.3}$$

with J^ε(0) = (1/β)u(0).

Proof: Note that equation (3.3) is uniformly elliptic and see Krylov (1987). ■
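The effect of the regularization is mechanical: reading the second-order coefficient in (3.3) as ½σ²(X² + ε²) (our reconstruction of the garbled original), it is bounded below by ½σ²ε² > 0 uniformly in X, whereas the coefficient ½σ²X² in (2.6) vanishes at X = 0. A minimal check with illustrative values:

```python
sigma, eps = 0.3, 0.01  # illustrative market volatility and regularization level

def coef(X):
    """Second-order coefficient of the regularized equation (3.3)."""
    return 0.5 * sigma**2 * (X**2 + eps**2)

floor = 0.5 * sigma**2 * eps**2   # uniform ellipticity constant, > 0
```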

The next lemma says that the above sequence J^ε converges to J as ε goes to zero.

Lemma 3.3: J^ε converges to J locally uniformly as ε goes to 0.

Proof: See Appendix B. ■

We next prove that the first derivative of J^ε is bounded away from zero, and the second derivative is bounded, uniformly in ε. This implies in particular that the optimal X in equation (3.3) is bounded away from zero.

Lemma 3.4: In any interval [a,b], there exist two positive constants R₁ = R₁([a,b]) and R₂ = R₂([a,b]), independent of ε, such that, for W ∈ [a,b],

(i) (J^ε)'(W) ≥ R₁ (3.4)

(ii) |(J^ε)''(W)| ≤ R₂ (3.5)
