
WORKING PAPER

ALFRED P. SLOAN SCHOOL OF MANAGEMENT

ECONOMETRIC EVALUATION OF

ASSET PRICING MODELS

by

Lars Peter Hansen

John Heaton

Erzo Luttmer

WP# 3606-93

August 1993

MASSACHUSETTS

INSTITUTE OF TECHNOLOGY

50 MEMORIAL DRIVE

CAMBRIDGE, MASSACHUSETTS 02139



Econometric Evaluation of Asset Pricing Models

August 1993

Lars Peter Hansen

University of Chicago, NBER and NORC

John Heaton

M.I.T. and NBER

Erzo Luttmer

Northwestern University

We thank Craig Burnside, Bo Honore, Andrew Lo, Marc Roston, Whitney Newey,

Jose Scheinkman, Jean-Luc Vila, Jiang Wang, Amir Yaron and especially Ravi

Jagannathan for helpful comments and discussions. We also received valuable

remarks from seminar participants at the 1992 meetings of the Society of

Economic Dynamics and Control, the 1992 NBER Summer Institute and at

Cornell, Duke, L.S.E. and Waterloo Universities. Finally, we gratefully

acknowledge the financial assistance of the International Financial Services

Research Center at M.I.T. (Heaton) and the National Science Foundation

(Hansen) and the Sloan Foundation (Luttmer).

Abstract

In this paper we provide econometric tools for the evaluation of

intertemporal asset pricing models using specification-error and volatility

bounds. We formulate analog estimators of these bounds, give conditions for

consistency and derive the limiting distribution of these estimators. The

analysis incorporates market frictions such as short-sale constraints and

proportional transactions costs. Among several applications we show how to

use the methods to assess specific asset pricing models and to provide

nonparametric characterizations of asset pricing anomalies.

I. Introduction

Frictionless market models of asset pricing imply that asset prices can

be represented by a stochastic discount factor or pricing kernel. For

example, in the Capital Asset Pricing Model (CAPM) the discount factor is

given by a constant plus a scale multiple of the return on the market

portfolio. In the Consumption-Based CAPM (CCAPM) the discount factor is

given by the intertemporal marginal rate of substitution of an investor. If

r is the net return on an asset and m is the marginal rate of substitution,

then the CCAPM implies that:

(1.1)  1 = E[m(1+r) | F]

where F is the information set of the investor today. More generally, if m

is the stochastic discount factor, today's price, π(p), of an asset payoff,

p, tomorrow is given by:

(1.2)  π(p) = E(mp | F).

Thus a stochastic discount factor m "discounts" payoffs in each state of the

world and, as a consequence, adjusts the price according to the riskiness of

the payoff. From the vantage point of an empirical analysis, we envision the

stochastic discount factor as the vehicle linking a theoretical model to

observable implications.

Given a particular model for the stochastic discount factor, the

implications of (1.2) can be assessed by first taking unconditional

expectations, yielding

(1.3)  Eπ(p) = E(mp).

When m is observable (at least up to a finite-dimensional parameter vector)

by the econometrician, a test of (1.2) can be performed using a time series

of a vector of portfolio payoffs and prices by examining whether the sample

analogs of the left and right sides of (1.2) are significantly different from

each other. Examples of this type of procedure can be found in Hansen and

Singleton (1982), Brown and Gibbons (1985), MacKinlay and Richardson (1991)

and Epstein and Zin (1991).
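To make the sample-analog idea concrete, the following sketch (not from the paper) tests the unconditional restriction E[m(1+r)] = 1 implied by (1.1) and (1.3) for a single return. The simulated series for m and r and the function name are purely illustrative.

```python
import numpy as np

def pricing_error_tstat(m, r):
    """t-statistic for H0: E[m(1+r)] = 1, the unconditional
    implication of the pricing relation, assuming the per-period
    errors are serially uncorrelated."""
    g = m * (1.0 + r) - 1.0                        # sample pricing errors
    return g.mean() / (g.std(ddof=1) / np.sqrt(g.size))

# illustrative simulated data, not real consumption or return series
rng = np.random.default_rng(0)
m = 0.99 + 0.01 * rng.standard_normal(500)   # candidate discount factor
r = 0.02 + 0.05 * rng.standard_normal(500)   # net returns
t_stat = pricing_error_tstat(m, r)
```

With serially correlated or heteroskedastic errors one would replace the denominator with a heteroskedasticity- and autocorrelation-consistent standard error.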

While tests such as these can be informative, it is often difficult to

interpret the resulting statistical rejections. Further, these tests are not

directly applicable when there are market frictions such as transactions

costs or short-sale constraints. For example, when an asset cannot be sold

short, (1.2) is replaced with the pricing inequality:

(1.4)  π(p) ≥ E(mp | F).

Finally, these tests can not be used when the candidate discount factor

depends on variables unavailable to the econometrician.

As an alternative to directly testing pricing errors using (1.3), we

consider a different set of tests and diagnostics using the

specification-error bounds of Hansen and Jagannathan (1993), and the

volatility bounds of Hansen and Jagannathan (1991). We also consider

extensions of these tests and diagnostics, developed by He and Modest (1992)

and Luttmer (1993), that handle transactions costs, short-sale restrictions

and other market frictions. We develop an econometric methodology to provide

consistent estimators of the specification-error and volatility bounds.

Further, we develop asymptotic distribution theory that is easy to implement

and that can be used to make statistical inferences about asset pricing

models and asset market data using the bounds. The specification-error and

volatility bounds, along with the econometric methodology that we develop,

can be applied to address several related issues.

The specification-error bounds of Hansen and Jagannathan (1993) can be

used to examine a discount factor proxy that does not necessarily correctly

price the assets under consideration (see also Bansal, Hsieh and Viswanathan

1992 for an application). This is important since formal statistical tests

of many particular models of asset pricing imply that the hypothesis that

their pricing errors are zero is a very low probability event. Since these

models are typically very simple, it is perhaps not surprising that they do

not completely capture the complexity of pricing in financial markets. The

specification-error bounds give measures of the maximum pricing error made by

the discount factor proxy. This provides a way to assess the usefulness of a

model even when it is technically misspecified. Further, this tool can

easily accommodate market frictions such as transactions costs and short-sale

constraints.

Given a vector of asset payoffs and prices, (1.3) typically does not

uniquely determine m. Instead there is a whole family of m's that will

work. Any parametric model for m imposes additional restrictions on that

family, often sufficient to identify a unique stochastic discount factor.

Rather than imposing these extra restrictions, Hansen and Jagannathan (1991)

showed how asset market data on payoffs and prices can be used to construct

feasible sets for means and standard deviations of stochastic discount

factors. The boundary points of these regions provide lower bounds on the

volatility (standard deviation) indexed by the mean. He and Modest (1992)

and Luttmer (1993) showed how to extend this analysis to the case where some

of the assets are subject to transactions costs or short-sales constraints.

These feasible sets of means and standard deviations of the stochastic

discount factor can be used to isolate those aspects of the asset market data

that are most informative about the stochastic discount factor. One way to

do this is to ask whether the volatility bound becomes significantly sharper

as more asset market data is added to the analysis. This would help one

assess the incremental importance of additional security market data in an

econometric analysis without having to limit a priori the family of

stochastic discount factors. More generally, it is valuable to have a

characterization of the sense in which an asset market data set is puzzling

without having to take a precise stand on the underlying valuation model.

When testing a particular model of asset pricing in which the candidate

m is specified, it is often useful to examine whether the candidate is in the

feasible region. Moreover, when diagnosing the failures of a specific model,

it is valuable to determine whether the candidate discount factor is not

sufficiently volatile or whether it is other aspects of the joint

distribution of asset payoffs and the candidate discount factor that are

problematic.

As we remarked previously, sometimes it is not possible to construct

direct observations of m, making pricing-error tests infeasible. However,

it may still be possible to calculate the moments of a stochastic discount

factor implied by a model which can then be compared to the volatility

bounds. For example in Heaton (1993) a consumption-based CAPM model is

examined in which the consumption data is time averaged and preferences are

such that a simple linearization of the utility function can not be done to

consistency result for estimators of the arbitrage bounds. Those

uninterested in the consistency results, but who are interested in the

calculations necessary for conducting statistical inference, need only read

Sections III.A, III.C and III.D before moving to Section IV.

In Section IV we present several applications and extensions of our

results each of which can be read independently after reading Sections II,

III.A, III.C and III.D. In Section IV.A we discuss the sense in which the

entire feasible set of means and standard deviations for the stochastic

discount factor can be estimated. Section IV.B provides a discussion of

tests of whether the volatility bound becomes sharper with additional asset

market data. Section IV.C shows how to use the volatility bounds to test

models of the discount factor. Finally in Section IV.D we extend the

specification-error bound to a case where there are parameters of the

discount factor proxy that are unknown and must be estimated. Section V

contains some concluding remarks.

II. General Model and Bounds

Our starting point is a model in which asset prices are represented by

a stochastic discount factor or pricing kernel. To accommodate security

market pricing subject to transactions costs, we permit there to be

short-sale constraints for a subset of the securities. Although a

short-sale constraint is an extreme version of a transactions cost, other

proportional transactions costs such as bid-ask spreads can also be handled

with this formalism. This is done as in Foley (1970), Jouini and Kallal

(1992) and Luttmer (1993) by constructing two payoffs according to whether a

security is purchased or sold. A short-sale constraint is imposed on both

artificial securities to enforce the distinction between a buy and a sell,

and a bid-ask spread is modeled by making the purchase price higher than the

sale price.
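A minimal sketch of this construction (function name hypothetical): a security with an ask price above its bid price becomes two artificial securities, each restricted to nonnegative holdings.

```python
import numpy as np

def split_bid_ask(payoff, ask, bid):
    """Represent a bid-ask spread as two artificial securities, both
    subject to short-sale constraints: a 'buy' leg with the original
    payoff priced at the (higher) ask, and a 'sell' leg with the
    negated payoff priced at minus the (lower) bid. A sale of the
    original security is a nonnegative position in the sell leg."""
    return (payoff, ask), (-payoff, -bid)

x = np.array([1.1, 0.9, 1.0])   # payoff across three states
(buy_pay, buy_price), (sell_pay, sell_price) = split_bid_ask(x, ask=1.02, bid=0.98)
```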

Suppose the vector of security market payoffs used in an econometric

analysis is denoted x with a corresponding price vector q. The vector x

is used to generate a collection of payoffs formed using portfolio weights

in a closed convex cone C of R^n:

(2.1)  P = {p : p = a'x for some a ∈ C}.

The cone C is constructed to incorporate all of the short-sale constraints

imposed in the econometric investigation. If there are no price distortions

induced by market frictions, then C is R^n. More generally, partition x into

two components: x' = (x^f', x^s'), where x^f contains the k components whose

prices are not distorted by market frictions and x^s contains the l components

subject to short-sale constraints. Then the cone C is formed by taking the

Cartesian product of R^k and the nonnegative orthant of R^l.

Let q denote the random vector of prices corresponding to the vector x

of securities payoffs. These prices are observed by investors at the time

assets are traded and are permitted to be random because the prices may

reflect conditioning information available to the investors. Since it is

difficult to model empirically this conditioning information, we instead work

with the average or expected price vector Eq.

While information may be lost in our failure to model explicitly the

conditioning information of investors, some conditioning information can be

incorporated in the following familiar ad hoc manner. Suppose some of the

security payoffs used in an econometric analysis are one-period stock or

bond-market returns with prices equal to one by construction. Additional

synthetic payoffs can be formed by an econometrician by taking one of the

original returns, say x_j, and multiplying it by a random variable, say z, in

the conditioning information set of economic agents. The corresponding

constructed payoff is then x_j z with a price of z. Hence the price of the

synthetic payoff is random even though the price of the original security is

constant. If x_j is subject to a short-sale constraint, then z should be

nonnegative.
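A small numpy sketch of the scaling construction just described; the instrument z is a hypothetical nonnegative conditioning variable, and all series are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
x = 1.0 + 0.05 * rng.standard_normal(T)  # gross return with constant price 1
z = np.abs(rng.standard_normal(T))       # nonnegative conditioning variable

synthetic_payoff = x * z   # scaled payoff
synthetic_price = z        # its (random) price is z itself
Eq_synthetic = synthetic_price.mean()    # only the average price enters the analysis
```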

The vehicle linking payoffs to average prices is a stochastic discount

factor. To represent formally this link and provide a characterization of a

stochastic discount factor, we introduce the dual of C, which we denote C*.

This dual consists of all vectors in R^n whose dot product with every element

of C is nonnegative. For instance, when C is all of R^n, C* consists only of

the zero vector. More generally, if x can be partitioned in the manner

described previously, the elements of C* are of the form (0', g')', where g is

nonnegative.

A stochastic discount factor m is a random variable that satisfies the

pricing relation:

(2.2)  Eq − E(mx) ∈ C*.

To interpret this relation, first consider the case in which C is R^n. Then

there are no market frictions and we have linear pricing. In this case

relation (2.2) is the familiar pricing equality because C has only one

element, namely the zero vector. Consider next the case in which x can be

partitioned into the two components described previously. Partition q

comparably, and relation (2.2) becomes:

(2.3)  Eq^f − E(mx^f) = 0

Eq^s − E(mx^s) ≥ 0.

The inequality restriction emerges because pricing the vector of payoffs x^s

subject to short-sale constraints must allow for the possibility that these

constraints bind and hence contribute positively to the market price vector.

II.A: Maintained Assumptions

There are three restrictions on the vector of payoffs and prices that

are central to our analysis. The first is a moment restriction, the second

is equivalent to the absence of arbitrage on the space of portfolio payoffs,

and the third eliminates redundancy in the securities.

For pricing relation (2.2) to have content, we maintain:

Assumption 2.1: E|x|² < ∞, E|q| < ∞.

Assumption 2.2: There exists an m > 0 satisfying (2.2) such that Em² < ∞.

The positivity component of Assumption 2.2 can often be derived from

the Principle of No-Arbitrage (e.g., see Kreps 1981, Prisman 1986, Jouini and

Kallal 1992 and Luttmer 1993). The Principle of No-Arbitrage specifies that

the smallest cost associated with any payoff that is nonnegative and not

identically equal to zero must be strictly positive. Notice that from

(2.2), a stochastic discount factor m satisfies:

(2.4)  a'E(mx) ≤ a'Eq for any a ∈ C,

which shows that Assumption 2.2 implies the Principle of No-Arbitrage

(applied to expected prices).

Next we limit the construction of x by ruling out redundancies in the

securities:

Assumption 2.3: If a'x = a*'x and a'Eq = a*'Eq for some a and a* in C, then

a = a*.

In the absence of transaction costs, Assumption 2.3 precludes the

possibility that the second moment matrix of x is singular. Otherwise,

there would exist a nontrivial linear combination of the payoff vector x

that is zero with probability one. In light of (2.2), the (expected) price

of this nontrivial linear combination would have to be zero, violating

Assumption 2.3. To accommodate securities whose purchase price differs from

the sale price, we permit the second moment matrix of the composite vector x

to be singular. Assumption 2.3 then requires that distinct portfolio

weights used to construct the same payoff must have distinct expected

prices.
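In the frictionless case, Assumption 2.3 can be checked in a sample by testing whether the second moment matrix of the payoffs is numerically singular. A rough sketch (function name and tolerance are illustrative choices):

```python
import numpy as np

def has_redundant_payoffs(X, tol=1e-8):
    """In the absence of transactions costs, redundancy means the
    sample second moment matrix X'X/T is singular, i.e. some
    nontrivial portfolio of the columns of X is numerically zero."""
    S = X.T @ X / X.shape[0]
    return np.linalg.matrix_rank(S, tol=tol) < S.shape[1]

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))                # three independent payoffs
X_dup = np.column_stack([X, X[:, 0] + X[:, 1]])  # add a redundant payoff
```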

II.B: Minimum Distance Problems

There are two problems that underlie most of our analysis. Let M

denote the set of all random variables with finite second moments that

satisfy (2.2), and let M⁺ be the set of all nonnegative random variables in

M. In light of Assumption 2.2, both sets are nonempty. Let y denote some

"proxy" variable for a stochastic discount factor that, strictly speaking,

does not satisfy relation (2.2). Following Hansen and Jagannathan (1993),


we consider the following two ad hoc least squares measures of

misspecification:

(2.5)  δ² = min_{m ∈ M} E[(y − m)²],

and

(2.6)  δ⁺² = min_{m ∈ M⁺} E[(y − m)²].

When the proxy y is set to zero, the minimization problems collapse to

finding bounds on the second moment of stochastic discount factors as

constructed by Hansen and Jagannathan (1991), He and Modest (1992) and

Luttmer (1993). In particular, the bounds derived in Hansen and Jagannathan

(1991) are obtained by setting y to zero and solving (2.5) and (2.6) when

there are no short-sale constraints imposed (when C is set to R^n); the bound

derived in He and Modest is obtained by solving (2.5) for y set to zero; and

the bound derived by Luttmer (1993) is obtained by solving (2.6) for y set to

zero. These second moment bounds will subsequently be used in deriving

feasible regions for means and standard deviations. Clearly, the second

moment bound implied by (2.6) is no smaller than that implied by (2.5) since

it is obtained using a smaller constraint set.
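As an illustration (not from the paper), in the frictionless case with the proxy set to zero, the conjugate problem reduces to max_a {−a'E(xx')a − 2a'Eq}, whose solution gives the closed form δ² = Eq'E(xx')⁻¹Eq. A sample-analog sketch, with simulated data:

```python
import numpy as np

def hj_second_moment_bound(X, q_bar):
    """Sample analog of the frictionless second moment bound: with
    y = 0 and C = R^n the conjugate problem is max_a {-a'Sa - 2a'Eq}
    with S = E(xx'), solved by a = -S^{-1}Eq with value Eq'S^{-1}Eq."""
    S = X.T @ X / X.shape[0]            # sample E(xx')
    a = -np.linalg.solve(S, q_bar)      # maximizing multiplier
    delta2 = q_bar @ np.linalg.solve(S, q_bar)
    return delta2, a

rng = np.random.default_rng(3)
X = 1.0 + 0.1 * rng.standard_normal((500, 2))  # two gross-return payoffs
q_bar = np.ones(2)                              # average prices
delta2, a_star = hj_second_moment_bound(X, q_bar)
```

The minimizing discount factor itself is m = −x'a, whose sample second moment equals delta2.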

Next consider the case in which the proxy y is not degenerate. Hansen

and Jagannathan (1993) showed that the least squares distance between a

proxy and the set M of (possibly negative) stochastic discount factors has

an alternative interpretation of being the maximum pricing error per unit

norm of payoffs in P, where the norm of a payoff is the square root of its

second moment. When the constraint set is shrunk to M⁺ as in problem (2.6),

the dual interpretation takes account of potential pricing errors for

hypothetical derivative claims. While Hansen and Jagannathan (1993)

abstract from short-sale constraints in their analysis, pricing-error

interpretations are applicable more generally.

II.C: Conjugate Maximization Problems

In solving the least squares problems (2.5) and (2.6) and in our

development of econometric methods associated with those problems, it is

most convenient to study the conjugate maximization problems. They are

given by

(2.7)  δ² = max_{a ∈ C} {Ey² − E[(y − x'a)²] − 2a'Eq},

and

(2.8)  δ⁺² = max_{a ∈ C} {Ey² − E[((y − x'a)⁺)²] − 2a'Eq},

where the notation h⁺ denotes max{h,0}. The conjugate problems are obtained

by introducing Lagrange multipliers on the pricing constraints (2.2) and

exploiting the familiar saddle point property of the Lagrangian. The a's

then have interpretations as the multipliers on the pricing constraints.

The conjugate problems in (2.7) and (2.8) are convenient because the

choice variables are finite-dimensional vectors whereas the choice variables

in the original least squares problems are random variables that reside in

possibly infinite-dimensional constraint sets. The specifications of the


conjugate problems are justified formally in Hansen and Jagannathan (1993)

and Luttmer (1993). Of particular interest to us is that the criteria for

the maximization problems are concave in a and that the first-order

conditions for the solutions are given by:

(2.9)  Eq − E[(y − x'a)x] ∈ C*

in the case of problem (2.7) and

(2.10)  Eq − E[((y − x'a)⁺)x] ∈ C*

along with the respective complementary slackness conditions

(2.11)  a'Eq − a'E[(y − x'a)x] = 0,

and

(2.12)  a'Eq − a'E[((y − x'a)⁺)x] = 0.

In fact, optimization problem (2.7) is a standard quadratic programming

problem. Interpreting the first-order conditions for these problems, observe

that associated with a solution to problem (2.7) is a random variable

m = (y − x'a) in M and associated with a solution to problem (2.8) is a

nonnegative random variable m = (y − x'a)⁺ in M⁺. These random variables are

the unique (up to the usual equivalence class of random variables that are

equal with probability one) solutions to the original least squares problems.
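To illustrate numerically (assumptions: no short-sale constraints, so C = R^n, and plain gradient ascent standing in for a generic concave maximizer), problem (2.7) is solved in closed form and problem (2.8) by ascent on its concave criterion, started from the (2.7) solution:

```python
import numpy as np

def solve_conjugate(X, y, q_bar, steps=2000, lr=0.05):
    """Sample analogs of (2.7) and (2.8) with C = R^n. (2.7) is a
    quadratic program with solution a = S^{-1}(E(xy) - Eq); (2.8)
    replaces (y - x'a) by its positive part and is maximized by
    gradient ascent, with gradient 2E[(y - x'a)^+ x] - 2Eq."""
    T = X.shape[0]
    S = X.T @ X / T
    b = X.T @ y / T - q_bar
    a = np.linalg.solve(S, b)            # maximizer of (2.7)
    delta2 = b @ a                       # value of (2.7)

    def crit28(a):
        u = np.maximum(y - X @ a, 0.0)   # (y - x'a)^+
        return np.mean(y ** 2) - np.mean(u ** 2) - 2 * a @ q_bar

    a_plus = a.copy()
    for _ in range(steps):
        u = np.maximum(y - X @ a_plus, 0.0)
        a_plus += lr * 2 * (X.T @ u / T - q_bar)
    return delta2, crit28(a_plus)

rng = np.random.default_rng(4)
X = 1.0 + 0.1 * rng.standard_normal((600, 2))
y = 0.97 + 0.05 * rng.standard_normal(600)   # discount factor proxy
q_bar = np.ones(2)
d2, d2_plus = solve_conjugate(X, y, q_bar)
```

Consistent with the text, the (2.8) value is no smaller than the (2.7) value, since M⁺ is a smaller constraint set.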

Since Assumption 2.3 eliminates redundant securities and the random


variable (y - x'a) is uniquely determined, the solution a to conjugate

problem (2.7) is also unique. This follows because the value of the

criterion must be the same for all solutions, implying that they all must

have the same expected price a'Eq. The solution to conjugate problem (2.8)

may not be unique, however. In this case the truncated random variable

(y − x'a)⁺ is uniquely determined, as is the expected price a'Eq. On the other

hand, the random variable (y − x'a) is not necessarily unique, so we can not

exploit Assumption 2.3 to verify that the solution a is unique. As we will

now demonstrate, the set of solutions is convex and compact.

The convexity follows immediately from the concavity of the criterion

function and the convexity of the constraint set. Similarly, the set of

solutions must be closed because the constraint set is closed and the

criterion function is continuous.

Boundedness of the set of solutions can be demonstrated by investigating

the tail properties of the criterion functions. We consider two cases:

directions θ for which θ'x is negative with positive probability and

directions θ for which θ'x is nonnegative. To study the former case we take

the criterion in (2.8) and divide it by 1 + |a|². For large values of |a|

the scaled criterion is approximately:

(2.13)  − E[((−x'θ)⁺)²] where θ = a/(1 + |a|²)^{1/2}.

Hence |θ| is approximately one for large values of |a|. Moreover, θ'x is a

payoff in P. Consequently, the unscaled criterion will decrease (to −∞)

quadratically for large values of |a|.

Consider next directions θ for which θ'x is nonnegative. From

Assumption 2.2 and relation (2.4) we have that

(2.14)  E[m(θ'x)] ≤ θ'Eq

for some m that is strictly positive with probability one. Hence θ'Eq must

be strictly positive unless θ'x is identically zero. However, when θ'x is

identically zero, it follows from Assumption 2.3 and inequality (2.14) that

θ'Eq is still strictly positive.

For directions θ for which the payoff θ'x is nonnegative, we study the

tail behavior of the criterion after dividing by (1 + |a|²)^{1/2}, which yields

approximately −2θ'Eq for large values of |a|. Hence in these directions the

unscaled criterion must diminish (to −∞) at least linearly in |a|. Thus

in either case, we find that the set of solutions to conjugate problem (2.8)

is bounded.

For some but not all of the results in the subsequent sections, we will

need there to exist a unique solution to conjugate problem (2.8). Since

the set of solutions is convex, local uniqueness implies global uniqueness.

To display a sufficient condition for local uniqueness, let x* denote the

component of the composite payoff vector x for which the pricing relation is

satisfied with equality:

(2.15)  E(mx*) = Eq*

where q* is the corresponding price vector. Also, let 1_{m>0} be the

indicator function for the event {m>0}. A sufficient condition for local

uniqueness is that

Assumption 2.4: E(x*x*' 1_{m>0}) is nonsingular.


To see why this is a valid sufficient condition, observe that from the

complementary slackness condition (2.12), m is given by (y − x*'β)⁺ for some

vector β. Consequently,

(2.16)  Eq* = E(y x* 1_{m>0}) − E(x*x*' 1_{m>0})β.

When the matrix E(x*x*' 1_{m>0}) is nonsingular, we can solve (2.16) for β.

II.D: Volatility Bounds and Restrictions on Means

The second moment bounds described in the previous subsection can be

converted into standard deviation bounds via the formulas:

(2.17)  σ = [δ² − (Em)²]^{1/2}

σ⁺ = [δ⁺² − (Em)²]^{1/2}

where δ² and δ⁺² are constructed by setting the proxy y to zero. When P

contains a unit payoff, Em is also equal to the average price of that payoff

and hence is restricted to be between the sale and purchase prices of the

unit payoff. However, data on the price of a riskless payoff is often not

available, so that it is difficult to determine Em. In these circumstances,

bounds can be obtained for each choice of Em by adding a unit payoff to P

(augmenting x with a 1) and assigning a price of v to that payoff (augmenting

Eq with v). In forming the augmented cone, there should be no short-sale

constraints imposed on the additional security and hence no new price

distortions should be introduced. The price assignment v is equivalent to a

mean assignment for m. Mean-specific volatility bounds can then be obtained


using (2.7), (2.8) and (2.17).
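A sketch of the frictionless version of this construction, reusing the y = 0 closed form for the second moment bound: augment x with a unit payoff priced at v, compute the bound for the augmented system, and convert via (2.17). Function and variable names are hypothetical, and the data are simulated.

```python
import numpy as np

def volatility_bound(X, q_bar, v):
    """Lower bound on std(m) given the mean assignment Em = v:
    augment the payoffs with a unit payoff priced at v (no new
    short-sale constraint), apply the y = 0 closed form
    delta^2 = Eq'E(xx')^{-1}Eq to the augmented system, then
    convert as in (2.17): sigma = sqrt(delta^2 - v^2)."""
    T = X.shape[0]
    Xa = np.column_stack([X, np.ones(T)])   # add unit payoff
    qa = np.append(q_bar, v)                # assign it price v
    S = Xa.T @ Xa / T
    delta2 = qa @ np.linalg.solve(S, qa)
    return np.sqrt(max(delta2 - v * v, 0.0))

rng = np.random.default_rng(5)
X = 1.0 + 0.1 * rng.standard_normal((800, 2))
q_bar = np.ones(2)
frontier = {v: volatility_bound(X, q_bar, v) for v in (0.95, 0.97, 0.99)}
```

Sweeping v over a grid of admissible mean assignments traces out the boundary of the feasible region of means and standard deviations.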

The Principle of No-Arbitrage puts a limit on the admissible values of

v: v ∈ [v_l, v_u], where v_l is the lower arbitrage bound and v_u is the upper

arbitrage bound. These bounds are computed using formulas familiar from

.M414

WORKING PAPER

ALFRED P. SLOAN SCHOOL OF MANAGEMENT

ECONOMETRIC EVALUATION OF

ASSET PRICING MODELS

by

Lars Peter Hansen

John Heaton

Erzo Luttmer

WP# 3606-93

August 1993

MASSACHUSETTS

INSTITUTE OF TECHNOLOGY

50 MEMORIAL DRIVE

CAMBRIDGE, MASSACHUSETTS 02139

ECONOMETRIC EVALUATION OF

ASSET PRICING MODELS

by

Lars Peter Hansen

John Heaton

Erzo Luttmer

WP# 3606-93 August 1993

M.I.T. LIBRARIES

OCT 1 1993

RECEIVED

Econometric Evaluation of Asset Pricing Models

August 1993

Lars Peter Hansen

University of Chicago, NBER and NORC

John Heaton

M.I. I. and NBER

Erzo Luttmer

Northwestern University

We thank Craig Burnside, Bo Honore, Andrew Lo, Marc Roston, Whitney Newey,

Jose Scheinkman, Jean-Luc Vila, Jiang Wang, Amir Yaron and especially Ravi

Jagannathan for helpful comments and discussions. We also received valuable

remarks from seminar participants at the 1992 meetings of the Society of

Economic Dynamics and Control, the 1992 NBER Summer Institute and at

Cornell, Duke, L.S.E. and Waterloo Universities. Finally, we gratefully

acknowledge the financial assistance of the International Financial Services

Research Center at M.I.T. (Heaton) and the National Science Foundation

(Hansen) and the Sloan Foundation (Luttmer).

Abstract

In this paper we provide econometric tools for the evaluation of

intertemporal asset pricing models using specification-error and volatility

bounds. We formulate analog estimators of these bounds, give conditions for

consistency and derive the limiting distribution of these estimators. The

analysis incorporates market frictions such as short-sale constraints and

proportional transactions costs. Among several applications we show how to

use the methods to assess specific asset pricing models and to provide

nonparametric characterizations of asset pricing anomalies.

I. Introduction

Frictionless market models of asset pricing imply that asset prices can

be represented by a stochastic discount factor or pricing kernel. For

example, in the Capital Asset Pricing Model (CAPM) the discount factor is

given by a constant plus a scale multiple of the return on the market

portfolio. In the Consumption-Based CAPM (CCAPM) the discount factor is

given by the intertemporal marginal rate of substitution of an investor. If

r is the net return on an asset and m is the marginal rate of substitution,

then the CCAPM implies that:

(l.n 1 = Â£[m(l+r)|^]

where ? is the information set of the investor today. More generally, if m

is the stochastic discount factor, today's price, nip), of an asset payoff,

p, tomorrow is given by:

(1.2) n(p) = Â£(mp|^) .

Thus a stochastic discount factor m "discounts" payoffs in each state of the

world and, as a consequence, adjusts the price according to the riskiness of

the payoff. From the vantage point of an empirical analysis, we envision the

stochastic discount factor as the vehicle linking a theoretical model to

observable implications.

Given a particular model for the stochastic discount factor, the

implications of (1.2) can be assessed by first taking unconditional

expectations, yielding

(1.3) Enip) = Eimp).

When m is observable (at least up to a finite-dimensional parameter vector)

by the econometrician, a test of (1.2) can be performed using a time series

of a vector of portfolio payoffs and prices by examining whether the sample

analogs of the left and right sides of (1.2) are significantly different from

each other. Examples of this type of procedure can be found in Hansen and

Singleton (1982), Brown and Gibbons (1985), MacKinlay and Richardson (1991)

and Epstein and Zin (1991).

While tests such as these can be informative, it is often difficult to

interpret the resulting statistical rejections. Further, these tests are not

directly applicable when there are market frictions such as transactions

costs or short-sale constraints. For example, when an asset cannot be sold

short, (1.2) is replaced with the pricing inequality:

(1.4) 7t(p) 2: E{mp\9) .

Finally, these tests can not be used when the candidate discount factor

depends on variables unavailable to the econometrician.

As an alternative to testing directly pricing errors using (1.3), we

consider a different set of tests and diagnostics using the

specification-error bounds of Hansen and Jagannathan (1993), and the

volatility bounds of Hansen and Jagannathan (1991). We also consider

extensions of these tests and diagnostics, developed by He and Modest (1992)

and Luttmer (1993), that handle transactions costs, short-sale restrictions

and other market frictions. We develop an econometric methodology to provide

consistent estimators of the specification-error and volatility bounds.

Further, we develop asymptotic distribution theory that is easy to implement

and that can be used to make statistical inferences about asset pricing

models and asset market data using the bounds. The specification-error and

volatility bounds, along with the econometric methodology that we develop,

can be applied to address several related issues.

The specification-error bounds of Hansen and Jagannathan (1993) can be

used to examine a discount factor proxy that does not necessarily correctly

price the assets under consideration (see also Bansal, Hsieh and Viswanathan

1992 for an application). This is important since formal statistical tests

of many particular models of asset pricing imply that the hypothesis that

their pricing errors are zero is a very low probability event. Since these

models are typically very simple, it is perhaps not surprising that they do

not completely capture the complexity of pricing in financial markets. The

specification-error bounds give measures of the maximum pricing error made by

the discount factor proxy. This provides a way to assess the usefulness of a

model even when it is technically misspecif led. Further, this tool can

easily accommodate market frictions such as transactions costs and short-sale

constraints.

Given a vector of asset payoffs and prices, (1.3) typically does not

uniquely determine m. Instead there is a whole family of m' s that will

work. Any parametric model for m imposes additional restrictions on that

family, often sufficient to identify a unique stochastic discount factor.

Rather than imposing these extra restrictions, Hansen and Jagannathan (1991)

showed how asset market data on payoffs and prices can be used to construct

feasible sets for means and standard deviations of stochastic discount

factors. The boundary points of these regions provide lower bounds on the

volatility (standard deviation) indexed by the mean. He and Modest (1992)

and Luttmer (IS 3) showed how to extend this analysis to the case where some

of the assets are subject to transactions costs or short-sales constraints.

These fea- Ue sets of means and standard deviations of the stochastic

discount factor can be used to isolate those aspects of the asset market data

that are most informative about the stochastic discount factor. One way to

do this is to ask whether the volatility bound becomes significantly sharper

as more asset market data is added to the analysis. This would help one

assess the incremental importance of additional security market data in an

econometric analysis without having to limit a priori the family of

stochastic discount factors. More generally, it is valuable to have a

characterization of the sense in which an asset market data set is puzzling

without having to take a precise stand on the underlying valuation model.
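This comparison can be sketched numerically. In the frictionless case, the smallest second moment of a discount factor that prices the payoffs x at average prices Eq is (Eq)'(Exx')^{-1}(Eq), a calculation derived in Section II; since adding assets only enlarges the portfolio space, the sample bound can only rise. The data below are simulated and purely illustrative:

```python
import numpy as np

def second_moment_bound(X, q_bar):
    # sample analog of (Eq)'(Exx')^{-1}(Eq): the minimum value of E[m^2]
    # over discount factors pricing the columns of X at average prices q_bar
    Sigma = X.T @ X / X.shape[0]
    return float(q_bar @ np.linalg.solve(Sigma, q_bar))

rng = np.random.default_rng(0)
T = 5000
x1 = 1.02 + 0.05 * rng.standard_normal(T)   # a gross return; its price is one
x2 = 1.05 + 0.20 * rng.standard_normal(T)   # a second gross return, price one

b_small = second_moment_bound(np.column_stack([x1]), np.array([1.0]))
b_large = second_moment_bound(np.column_stack([x1, x2]), np.array([1.0, 1.0]))

# enlarging the asset menu weakly raises the bound on E[m^2]
print(b_small <= b_large)
```

Whether the observed rise in the bound is statistically significant is exactly the question taken up in Section IV.B.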

When testing a particular model of asset pricing in which the candidate

m is specified, it is often useful to examine whether the candidate is in the

feasible region. Moreover, when diagnosing the failures of a specific model,

it is valuable to determine whether the candidate discount factor is not

sufficiently volatile or whether it is other aspects of the joint

distribution of asset payoffs and the candidate discount factor that are

problematic.

As we remarked previously, sometimes it is not possible to construct

direct observations of m, making pricing-error tests infeasible. However,

it may still be possible to calculate the moments of a stochastic discount

factor implied by a model which can then be compared to the volatility

bounds. For example in Heaton (1993) a consumption-based CAPM model is

examined in which the consumption data is time averaged and preferences are

such that a simple linearization of the utility function cannot be done to

consistency result for estimators of the arbitrage bounds. Those

uninterested in the consistency results, but who are interested in the

calculations necessary for conducting statistical inference, need only read

Sections III.A, III.C and III.D before moving to Section IV.

In Section IV we present several applications and extensions of our

results each of which can be read independently after reading Sections II,

III.A, III.C and III.D. In Section IV.A we discuss the sense in which the

entire feasible set of means and standard deviations for the stochastic

discount factor can be estimated. Section IV.B provides a discussion of

tests of whether the volatility bound becomes sharper with additional asset

market data. Section IV.C shows how to use the volatility bounds to test

models of the discount factor. Finally in Section IV.D we extend the

specification-error bound to a case where there are parameters of the

discount factor proxy that are unknown and must be estimated. Section V

contains some concluding remarks.

II. General Model and Bounds

Our starting point is a model in which asset prices are represented by

a stochastic discount factor or pricing kernel. To accommodate security

market pricing subject to transactions costs, we permit there to be

short-sale constraints for a subset of the securities. Although a

short-sale constraint is an extreme version of a transactions cost, other

proportional transactions costs such as bid-ask spreads can also be handled

with this formalism. This is done as in Foley (1970), Jouini and Kallal

(1992) and Luttmer (1993) by constructing two payoffs according to whether a

security is purchased or sold. A short-sale constraint is imposed on both

artificial securities to enforce the distinction between a buy and a sell,

and a bid-ask spread is modeled by making the purchase price higher than the

sale price.

Suppose the vector of security market payoffs used in an econometric

analysis is denoted x with a corresponding price vector q. The vector x

is used to generate a collection of payoffs formed using portfolio weights

in a closed convex cone C of R^n:

(2.1) P = {p : p = α'x for some α ∈ C}.

The cone C is constructed to incorporate all of the short-sale constraints

imposed in the econometric investigation. If there are no price distortions

induced by market frictions, then C is R^n. More generally, partition x into

two components: x' = [x^u', x^s'] where x^u contains the k components whose

prices are not distorted by market frictions and x^s contains the ℓ components

subject to short-sale constraints. Then the cone C is formed by taking the

Cartesian product of R^k and the nonnegative orthant of R^ℓ.

Let q denote the random vector of prices corresponding to the vector x

of security payoffs. These prices are observed by investors at the time

assets are traded and are permitted to be random because the prices may

reflect conditioning information available to the investors. Since it is

difficult to model this conditioning information empirically, we instead work

with the average or expected price vector Eq.

While information may be lost in our failure to model explicitly the

conditioning information of investors, some conditioning information can be

incorporated in the following familiar ad hoc manner. Suppose some of the

security payoffs used in an econometric analysis are one-period stock or

bond-market returns with prices equal to one by construction. Additional

synthetic payoffs can be formed by an econometrician by taking one of the

original returns, say x_i, and multiplying it by a random variable, say z, in

the conditioning information set of economic agents. The corresponding

constructed payoff is then x_i z with a price of z. Hence the price of the

synthetic payoff is random even though the price of the original security is

constant. If x_i is subject to a short-sale constraint, then z should be

nonnegative.
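The bookkeeping for this construction is easily mechanized. In the sketch below the return series and the nonnegative instrument z are hypothetical, chosen only for illustration; note that only the average prices enter the analysis, even though the synthetic price is random:

```python
import numpy as np

x_i = np.array([1.01, 0.97, 1.08, 1.00])  # a gross return; its price is 1 every period
z   = np.array([0.5, 1.5, 1.0, 2.0])      # nonnegative instrument known at trade time

# augment the payoffs with the synthetic payoff x_i * z, whose (random) price is z
X = np.column_stack([x_i, x_i * z])       # payoffs used in the econometric analysis
Q = np.column_stack([np.ones_like(z), z]) # corresponding prices, period by period

Eq = Q.mean(axis=0)                       # only average prices are needed
print(Eq)                                 # -> [1.   1.25]
```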

The vehicle linking payoffs to average prices is a stochastic discount

factor. To represent formally this link and provide a characterization of a

stochastic discount factor, we introduce the dual of C, which we denote C*.

This dual consists of all vectors in R^n whose dot product with every element

of C is nonnegative. For instance, when C is all of R^n, C* consists only of

the zero vector. More generally, if x can be partitioned in the manner

described previously, the elements of C* are of the form (0', g')' where g is

nonnegative.

A stochastic discount factor m is a random variable that satisfies the

pricing relation:

(2.2) Eq - Emx ∈ C*.

To interpret this relation, first consider the case in which C is R^n. Then

there are no market frictions and we have linear pricing. In this case

relation (2.2) is the familiar pricing equality because C* has only one

element, namely the zero vector. Consider next the case in which x can be

partitioned into the two components described previously. Partition q

comparably, and relation (2.2) becomes:

(2.3) Eq^u - Emx^u = 0

      Eq^s - Emx^s ≥ 0.

The inequality restriction emerges because pricing the vector of payoffs x^s

subject to short-sale constraints must allow for the possibility that these

constraints bind and hence contribute positively to the market price vector.
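In a discrete-state setting, relation (2.3) amounts to two moment checks on a candidate m: an exact pricing condition for the undistorted payoffs and a nonnegative gap for the constrained ones. A two-state sketch with invented numbers:

```python
import numpy as np

p  = np.array([0.5, 0.5])    # state probabilities
m  = np.array([1.2, 0.8])    # candidate stochastic discount factor (positive in each state)
xu = np.array([1.0, 1.0])    # payoff whose price is undistorted by frictions
xs = np.array([2.0, 0.5])    # payoff subject to a short-sale constraint

Eq_u, Eq_s = 1.0, 1.5        # observed average prices

slack_u = Eq_u - np.sum(p * m * xu)  # equality component of (2.3): must be zero
slack_s = Eq_s - np.sum(p * m * xs)  # inequality component: must be nonnegative
print(slack_u, slack_s)
```

Here the positive gap slack_s is the contribution of a potentially binding short-sale constraint to the market price.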

II.A: Maintained Assumptions

There are three restrictions on the vector of payoffs and prices that

are central to our analysis. The first is a moment restriction, the second

is equivalent to the absence of arbitrage on the space of portfolio payoffs,

and the third eliminates redundancy in the securities.

For pricing relation (2.2) to have content, we maintain:

Assumption 2.1: E|x|² < ∞ and E|q| < ∞.

Assumption 2.2: There exists an m > 0 satisfying (2.2) such that Em² < ∞.

The positivity component of Assumption 2.2 can often be derived from

the Principle of No-Arbitrage (e.g., see Kreps 1981, Prisman 1986, Jouini and

Kallal 1992 and Luttmer 1993). The Principle of No-Arbitrage specifies that

the smallest cost associated with any payoff that is nonnegative and not

identically equal to zero must be strictly positive. Notice that from

(2.2), a stochastic discount factor m satisfies:

(2.4) α'E(mx) ≤ α'Eq for any α ∈ C,

which shows that Assumption 2.2 implies the Principle of No-Arbitrage

(applied to expected prices).

Next we limit the construction of x by ruling out redundancies in the

securities:

Assumption 2.3: If α'x = α*'x and α'Eq = α*'Eq for some α and α* in C, then

α = α*.

In the absence of transaction costs, Assumption 2.3 precludes the

possibility that the second moment matrix of x is singular. Otherwise,

there would exist a nontrivial linear combination of the payoff vector x

that is zero with probability one. In light of (2.2), the (expected) price

of this nontrivial linear combination would have to be zero, violating

Assumption 2.3. To accommodate securities whose purchase price differs from

the sale price, we permit the second moment matrix of the composite vector x

to be singular. Assumption 2.3 then requires that distinct portfolio

weights used to construct the same payoff must have distinct expected

prices.

II.B: Minimum Distance Problems

There are two problems that underlie most of our analysis. Let M

denote the set of all random variables with finite second moments that

satisfy (2.2), and let M⁺ be the set of all nonnegative random variables in

M. In light of Assumption 2.2, both sets are nonempty. Let y denote some

"proxy" variable for a stochastic discount factor that, strictly speaking,

does not satisfy relation (2.2). Following Hansen and Jagannathan (1993),


we consider the following two ad hoc least squares measures of

misspecification:

(2.5) δ² = min_{m ∈ M} E[(y - m)²],

and

(2.6) δ⁺² = min_{m ∈ M⁺} E[(y - m)²].

When the proxy y is set to zero, the minimization problems collapse to

finding bounds on the second moment of stochastic discount factors as

constructed by Hansen and Jagannathan (1991), He and Modest (1992) and

Luttmer (1993). In particular, the bounds derived in Hansen and Jagannathan

(1991) are obtained by setting y to zero and solving (2.5) and (2.6) when

there are no short-sale constraints imposed (when C is set to R^n); the bound

derived in He and Modest is obtained by solving (2.5) for y set to zero; and

the bound derived by Luttmer (1993) is obtained by solving (2.6) for y set to

zero. These second moment bounds will subsequently be used in deriving

feasible regions for means and standard deviations. Clearly, the second

moment bound implied by (2.6) is no smaller than that implied by (2.5) since

it is obtained using a smaller constraint set.

Next consider the case in which the proxy y is not degenerate. Hansen

and Jagannathan (1993) showed that the least squares distance between a

proxy and the set M of (possibly negative) stochastic discount factors has

an alternative interpretation of being the maximum pricing error per unit

norm of payoffs in P, where the norm of a payoff is the square root of its


second moment. When the constraint set is shrunk to M⁺ as in problem (2.6),

the dual interpretation takes account of potential pricing errors for

hypothetical derivative claims. While Hansen and Jagannathan (1993)

abstract from short-sale constraints in their analysis, pricing-error

interpretations are applicable more generally.

II.C: Conjugate Maximization Problems

In solving the least squares problems (2.5) and (2.6) and in our

development of econometric methods associated with those problems, it is

most convenient to study the conjugate maximization problems. They are

given by

(2.7) δ² = max_{α ∈ C} {Ey² - E[(y - x'α)²] - 2α'Eq},

and

(2.8) δ⁺² = max_{α ∈ C} {Ey² - E[((y - x'α)⁺)²] - 2α'Eq},

where the notation h⁺ denotes max{h,0}. The conjugate problems are obtained

by introducing Lagrange multipliers on the pricing constraints (2.2) and

exploiting the familiar saddle point property of the Lagrangian. The α's

then have interpretations as the multipliers on the pricing constraints.

The conjugate problems in (2.7) and (2.8) are convenient because the

choice variables are finite-dimensional vectors whereas the choice variables

in the original least squares problems are random variables that reside in

possibly infinite-dimensional constraint sets. The specifications of the


conjugate problems are justified formally in Hansen and Jagannathan (1993)

and Luttmer (1993). Of particular interest to us is that the criteria for

the maximization problems are concave in α and that the first-order

conditions for the solutions are given by:

(2.9) Eq - E[(y - x'α)x] ∈ C*

in the case of problem (2.7) and

(2.10) Eq - E[((y - x'α)⁺)x] ∈ C*

along with the respective complementary slackness conditions

(2.11) α'Eq - α'E[(y - x'α)x] = 0,

and

(2.12) α'Eq - α'E[((y - x'α)⁺)x] = 0.

In fact, optimization problem (2.7) is a standard quadratic programming

problem. Interpreting the first-order conditions for these problems, observe

that associated with a solution to problem (2.7) is a random variable m = (y

- x'α) in M, and associated with a solution to problem (2.8) is a nonnegative

random variable m = (y - x'α)⁺ in M⁺. These random variables are the unique

(up to the usual equivalence class of random variables that are equal with

probability one) solutions to the original least squares problems.
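When C is all of R^n, the first-order condition (2.9) holds with equality, so α = (Exx')^{-1}(E(yx) - Eq) and δ² is available in closed form, while the concave criterion in (2.8) can be maximized numerically. The sketch below works with sample moments of simulated data and a deliberately crude constant proxy y; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
x = np.column_stack([np.ones(T), 1.03 + 0.1 * rng.standard_normal(T)])  # unit payoff, return
Eq = np.array([0.96, 1.0])     # average prices: a bond price and a return costing one
y = np.full(T, 0.90)           # constant discount factor proxy

def criterion(a, positive):
    resid = y - x @ a
    if positive:
        resid = np.maximum(resid, 0.0)   # the truncation in (2.8)
    return np.mean(y**2) - np.mean(resid**2) - 2 * a @ Eq

# problem (2.7): closed form from the first-order condition E[(y - x'a)x] = Eq
Sigma = x.T @ x / T
alpha = np.linalg.solve(Sigma, x.T @ y / T - Eq)
delta2 = criterion(alpha, positive=False)

# problem (2.8): gradient ascent on the concave criterion,
# whose gradient is 2 E[(y - x'a)^+ x] - 2 Eq
a = alpha.copy()
for _ in range(2000):
    resid_pos = np.maximum(y - x @ a, 0.0)
    a += 0.05 * (2 * x.T @ resid_pos / T - 2 * Eq)
delta2_plus = criterion(a, positive=True)

print(delta2 <= delta2_plus)   # the distance to the smaller set M+ is weakly larger
```

The small fixed step size keeps the ascent monotone because the criterion's gradient is Lipschitz with constant bounded by twice the largest eigenvalue of the sample second moment matrix.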

Since Assumption 2.3 eliminates redundant securities and the random


variable (y - x'α) is uniquely determined, the solution α to conjugate

problem (2.7) is also unique. This follows because the value of the

criterion must be the same for all solutions, implying that they all must

have the same expected price α'Eq. The solution to conjugate problem (2.8)

may not be unique, however. In this case the truncated random variable (y -

x'α)⁺ is uniquely determined, as is the expected price α'Eq. On the other

hand, the random variable (y - x'α) is not necessarily unique, so we cannot

exploit Assumption 2.3 to verify that the solution α is unique. As we will

now demonstrate, the set of solutions is convex and compact.

The convexity follows immediately from the concavity of the criterion

function and the convexity of the constraint set. Similarly, the set of

solutions must be closed because the constraint set is closed and the

criterion function is continuous.

Boundedness of the set of solutions can be demonstrated by investigating

the tail properties of the criterion functions. We consider two cases:

directions θ for which θ'x is negative with positive probability and

directions θ for which θ'x is nonnegative. To study the former case we take

the criterion in (2.8) and divide it by 1 + |α|². For large values of |α|

the scaled criterion is approximately:

(2.13) -E[((-θ'x)⁺)²] where θ = α/(1 + |α|²)^{1/2}.

Hence |θ| is approximately one for large values of |α|. Moreover, θ'x is a

payoff in P. Consequently, the unscaled criterion will decrease (to -∞)

quadratically for large values of |α|.

Consider next directions θ for which θ'x is nonnegative. From

Assumption 2.2 and relation (2.4) we have that

(2.14) Em(θ'x) ≤ θ'Eq

for some m that is strictly positive with probability one. Hence θ'Eq must

be strictly positive unless θ'x is identically zero. However, when θ'x is

identically zero, it follows from Assumption 2.3 and inequality (2.14) that

θ'Eq is still strictly positive.

For directions θ for which the payoff θ'x is nonnegative, we study the

tail behavior of the criterion after dividing by (1 + |α|²)^{1/2}, which

yields approximately -θ'Eq for large values of |α|. Hence in these

directions the unscaled criterion must diminish (to -∞) at least linearly in

|α|. Thus in either case, we find that the set of solutions to conjugate

problem (2.8) is bounded.

For some but not all of the results in the subsequent sections, we will

need there to exist a unique solution to conjugate problem (2.8). Since

the set of solutions is convex, local uniqueness implies global uniqueness.

To display a sufficient condition for local uniqueness, let x^e denote the

component of the composite payoff vector x for which the pricing relation is

satisfied with equality:

(2.15) Emx^e = Eq^e

where q^e is the corresponding price vector. Also, let 1_{m>0} be the

indicator function for the event {m>0}. A sufficient condition for local

uniqueness is:

Assumption 2.4: E[x^e x^e' 1_{m>0}] is nonsingular.


To see why this is a valid sufficient condition, observe that from the

complementary slackness condition (2.12), m is given by (y - x^e'β)⁺ for some

vector β. Consequently,

(2.16) Eq^e = E[y x^e 1_{m>0}] - E[x^e x^e' 1_{m>0}]β.

When the matrix E[x^e x^e' 1_{m>0}] is nonsingular, we can solve (2.16) for β.

II.D: Volatility Bounds and Restrictions on Means

The second moment bounds described in the previous subsection can be

converted into standard deviation bounds via the formulas:

(2.17) σ = [δ̄² - (Em)²]^{1/2}

       σ⁺ = [δ̄⁺² - (Em)²]^{1/2}

where δ̄² and δ̄⁺² are constructed by setting the proxy to zero. When P

contains a unit payoff, Em is also equal to the average price of that payoff

and hence is restricted to be between the sale and purchase prices of the

unit payoff. However, data on the price of a riskless payoff is often not

available, so that it is difficult to determine Em. In these circumstances,

bounds can be obtained for each choice of Em by adding a unit payoff to P

(augmenting x with a 1) and assigning a price of v to that payoff (augmenting

Eq with v). In forming the augmented cone, there should be no short-sale

constraints imposed on the additional security and hence no new price

distortions should be introduced. The price assignment v is equivalent to a

mean assignment for m. Mean-specific volatility bounds can then be obtained


using (2.7), (2.8) and (2.17).
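With the proxy set to zero and no frictions, solving (2.7) for the augmented system reduces to a quadratic whose value is the familiar (Eq_a)'(E x_a x_a')^{-1}(Eq_a), so the mean-specific bound can be traced out over v directly from sample moments. A sketch with simulated, purely illustrative returns:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
R = np.column_stack([1.01 + 0.02 * rng.standard_normal(T),
                     1.07 + 0.18 * rng.standard_normal(T)])  # two gross returns costing one
Eq = np.array([1.0, 1.0])

def sigma_bound(v):
    # augment with a unit payoff assigned price v (equivalently, Em = v)
    Xa = np.column_stack([R, np.ones(T)])
    qa = np.append(Eq, v)
    Sigma = Xa.T @ Xa / T                      # sample second moment matrix E[x_a x_a']
    m2 = qa @ np.linalg.solve(Sigma, qa)       # lower bound on E[m^2] given Em = v
    return np.sqrt(max(m2 - v**2, 0.0))        # standard deviation bound, as in (2.17)

for v in (0.92, 0.94, 0.96, 0.98):
    print(v, sigma_bound(v))
```

Plotting sigma_bound against v traces out the boundary of the feasible mean-standard deviation region; a candidate discount factor can then be compared point by point.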

The Principle of No-Arbitrage puts a limit on the admissible values of

v: v ∈ [v_l, v_u], where v_l is the lower arbitrage bound and v_u is the

upper arbitrage bound. These bounds are computed using formulas familiar from
