# On a problem of fixing the level of independent variables in a linear regression function


Courant Institute of
Mathematical Sciences

On a Problem of Fixing the
Level of Independent Variables
in a Linear Regression Function

Kei Takeuchi

Prepared under Contract N00014-67-A-0467-0004
with the Office of Naval Research NR 042-206

Distribution of this document is unlimited.

New York University


NR 042-206 IMM 367

June 1968


This report represents results obtained at the Courant Institute
of Mathematical Sciences, New York University. This work was
supported in part by the Office of Naval Research, Contract No.
N00014-67-A-0467-0004.

Reproduction in whole or in part is permitted for any purpose of
the United States Government.


ABSTRACT

Suppose that a linear regression model

    Y = β'x + U

is given. We want to fix x so as to make E(Y) = β'x as near to some prescribed level c as possible.

Asymptotic considerations lead to a solution of the type

    x = cMβ̂ / (β̂'Mβ̂ + kσ̂²) ,

where β̂ is the least squares estimator of β, and σ̂² is the unbiased estimator of σ² = V(U).

Under the assumption of normality for the distribution of U, an exact formula for the first two moments of the error β'x is given, and by expanding the formula for the mean square error, it is recommended that k be chosen equal to max(5−p, 0), where p is the dimension of the x vector.

0. Introduction

In the practical application of statistical techniques to industrial or business problems, it often happens that we want to find an appropriate level of the "policy" or "control" variable (or vector of control variables) Z, so that some quantity Y may be as near to some prescribed level c as possible.

If the relation of Y and Z is described by a stochastic model which includes some unknown parameter θ, and if we have a set of observations on Y and Z, then a problem of mathematical statistics, or of statistical decision, can be formulated. We want to find a "policy" function Ẑ based on the observations so that the error may be stochastically as small as possible.

If E(Y) = η = f(θ,z) and if Y − E(Y) is independent of the past observations, we want to make the mean square

    E_θ(c − f(θ,Ẑ))²

as small as possible.

Usually, if some estimator θ̂ of θ is given, we are inclined to determine z so as to satisfy f(θ̂,z) = c. But there still remain two problems: (i) Generally, there are infinitely many combinations of values of z which satisfy the above, so we have to determine z in order to minimize the error due to the statistical error of θ̂. (ii) In some cases, it does not follow that determining z to satisfy the above is a good policy even if θ̂ is, in some sense, an optimum or efficient estimator.

Consequently, we need a more detailed analysis of the situation than the mere translation of estimation into some kind of decision theoretic language.

In most practical situations the model is given in a linear regression set-up, i.e.,

    E(Y) = β'x .

We shall discuss the general problem in the asymptotic case in Section 1, and then more detailed considerations will be given in the regression case in Sections 2-4, where exact small sample results are established.

We will discuss the problem in both non-Bayesian and Bayesian terms, and we will show that in some situations the Bayes solution with respect to a pseudo-density, usually adopted in regression situations as in Raiffa and Schlaifer [3], and also in Zellner and Chetty [4], is inadmissible at least asymptotically.

1. Asymptotic Considerations

Suppose that we are to make

(1.1)    η = f(θ,z)

as near as possible to the prescribed constant c.

In (1.1) it is assumed that θ is a p-dimensional vector of unknown parameters, and z is a q-dimensional vector of "control" or "policy" variables which we can fix at any level in a set C. We assume that for any θ, there is at least one z_θ ∈ C such that f(θ,z_θ) = c.

Suppose that we have a set of observations X = (X₁, ..., Xₙ), with density function p(x,θ) with respect to some σ-finite measure μ. We can assume that our problem is to choose a measurable function Ẑ = Z(X) from the space of the range of X into C. We want to make the mean square error

(1.2)    E(f(θ,Ẑ) − c)²

as small as possible.

We shall assume the following.

(1) f(θ,z) is continuously differentiable with respect to θ and z.

It is difficult to derive general results for small sample set-ups, so we shall consider the asymptotic case in which it is assumed that

(2) Ẑ(X) is distributed with mean z̄(θ) and variance-covariance matrix Σ_ẑ of smaller order than 1.

Then

(1.3)    f(θ,Ẑ) − c = f(θ,z̄) − c + (∂f/∂z)'|_{z=z̄} (Ẑ − z̄) + R ,

where ∂f/∂z denotes the vector of partial derivatives and where R is stochastically of smaller order than the preceding terms. From (1.3) we have the formulas for the asymptotic mean and asymptotic mean square of the error as

    E(f(θ,Ẑ) − c) ≈ f(θ,z̄) − c ,

    E(f(θ,Ẑ) − c)² ≈ (f(θ,z̄) − c)² + (∂f/∂z)' Σ_ẑ (∂f/∂z) .

The asymptotic mean or asymptotic variance may not be equal to the asymptotic values of the mean or variance, but we shall now take the former as our criterion.

We shall say a function Ẑ is asymptotically consistent if

(1.4)    f(θ,z̄(θ)) = c ,    for all θ ,

and we shall call a function Ẑ which minimizes

(1.5)    (∂f/∂z)' Σ_ẑ (∂f/∂z) |_{z=z̄(θ)}

among all asymptotically consistent functions asymptotically best consistent or, for short, ABC.

We shall obtain a lower bound for (1.5) under the assumption that

(3) The density function p(x,θ) is continuously differentiable with respect to θ almost everywhere in x, and

    lim_{Δθ→0} ∫ [ (p(x,θ+Δθ) − p(x,θ)) / (p(x,θ)Δθ) − p_θ(x,θ)/p(x,θ) ]² p(x,θ) dμ = 0 ,

where p_θ(x,θ) = ∂p(x,θ)/∂θ.

Then, from (1.4),

(1.6)    ∂f/∂θ + (∂z̄/∂θ)' (∂f/∂z) = 0 ,

where ∂z̄/∂θ is the Jacobian matrix of derivatives. From the well-known Cramér-Rao theorem,

(1.7)    Σ_ẑ ⪰ (∂z̄/∂θ) I_θ⁻¹ (∂z̄/∂θ)' ,

where I_θ is the Fisher information matrix. From (1.6) and (1.7),

(1.8)    (∂f/∂z)' Σ_ẑ (∂f/∂z) ≥ (∂f/∂θ)' I_θ⁻¹ (∂f/∂θ)

is obtained.

Hence, a lower bound for the asymptotic variance of an asymptotically consistent function is given by

    (∂f/∂θ)' I_θ⁻¹ (∂f/∂θ) ,    evaluated at z = z̄(θ), f(θ,z̄) = c .

If a function Ẑ attains this lower bound asymptotically, we shall say that it is asymptotically efficient.

In practical situations, it often happens that there exists an estimator θ̂ which is asymptotically efficient in the sense that it is asymptotically unbiased and its asymptotic variance-covariance matrix is equal to I_θ⁻¹. Let us consider

a function z* = z*(θ̂) of θ̂, where z*(θ) is determined so as to minimize

    (∂f/∂z)' (∂z*/∂θ) I_θ⁻¹ (∂z*/∂θ)' (∂f/∂z)

under the condition that f(θ,z*(θ)) = c. If we assume that

(4) z*(θ) is continuously differentiable with respect to θ,

then we can expand

    f(θ,z*(θ̂)) − c = (∂f/∂z)'|_{z=z*(θ)} (z*(θ̂) − z*(θ)) + R ,

and, since the relation (1.6) must also hold for z*, the asymptotic mean square error of z*(θ̂) is

    (∂f/∂θ)' I_θ⁻¹ (∂f/∂θ) |_{z=z*(θ)} ,

which is actually equal to (1.8). Consequently, z* is asymptotically equivalent to the ABC function, and we shall also call it an ABC function based on θ̂. In the usual situation, where some regularity conditions hold true, we can take the maximum likelihood estimator for θ̂.
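The bound (1.8) is straightforward to evaluate once the gradient ∂f/∂θ and the information matrix I_θ are in hand. A minimal numerical sketch (the function name and the 2×2 example values are our own illustration, not from the report):

```python
def variance_lower_bound(grad, info):
    """Evaluate (1.8): (df/dtheta)' I^{-1} (df/dtheta) for a 2x2 information matrix."""
    (a, b), (c, d) = info
    det = a * d - b * c
    # Solve info v = grad by Cramer's rule, then return grad . v
    v = ((d * grad[0] - b * grad[1]) / det,
         (-c * grad[0] + a * grad[1]) / det)
    return grad[0] * v[0] + grad[1] * v[1]

# Example: df/dtheta = (1, 2), I = [[2, 1], [1, 2]]
bound = variance_lower_bound((1.0, 2.0), ((2.0, 1.0), (1.0, 2.0)))
```

No asymptotically consistent policy function can have an asymptotic variance of f(θ,Ẑ) below `bound` in this illustrative model.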

A different approach would be that of Bayes. If we assume a prior distribution for θ, we can obtain a posterior distribution for θ, and we must simply determine z so as to minimize the expectation of

(1.9)    (f(θ,z) − c)²

with respect to the posterior distribution of θ.

We shall denote by θ̃ the random variable distributed according to the posterior distribution of θ. Then the problem is to minimize

(1.10)    E{f(θ̃,z) − c}² .

It often happens in an asymptotic situation (see Le Cam [2]) that θ̃ is distributed with mean nearly equal to θ̂ and variance-covariance matrix I_θ̂⁻¹, given θ̂, where θ̂ is the maximum likelihood estimator. In that case,

(1.11)    E{f(θ̃,z) − c}² ≈ (f(θ̂,z) − c)² + (∂f/∂θ)'|_{θ=θ̂} I_θ̂⁻¹ (∂f/∂θ)|_{θ=θ̂} .

Hence the Bayes solution is nearly equal to the value of z which minimizes (1.11). Therefore, we shall call a function z which is determined to minimize the right side of (1.11) asymptotic Bayes or, for short, an AB function.

Since the second term will be small for some range of z, f(θ̂,z) − c will be approximately equal to zero for the AB function; hence, it will be approximately equivalent to the ABC function based on θ̂.

In subsequent sections, we shall investigate the properties of the ABC and AB procedures in more detail for more specific cases.

2. The Linear Regression Case

Now we shall apply the discussion of the preceding section to the problem of linear regression. Assume that

(2.1)    Y = β'x + U ,

where β and x are p-dimensional vectors of the parameters and the "control" variables, and the error U is assumed to be normally distributed. We want to fix the level of x in order that

    E(Y) = f(β,x) = β'x

may be as near to a constant c as possible.

We assume that we have a set of independent observations of size n on x and Y, and suppose that we have the least squares estimator β̂ of β, and the estimator σ̂² of σ², calculated from the observations. Since it is well known that (β̂, σ̂²) forms a sufficient statistic in this case, we can determine x based solely on β̂ and σ̂².

In this situation, it is well known that the information matrix for β is equal to the moment matrix M of x, divided by σ²; hence the ABC function is determined to minimize

    x'M⁻¹x

under the condition that β̂'x = c, and it is easily seen that it is given by

(2.2)    x = cMβ̂ / β̂'Mβ̂ ,

and the asymptotic mean square error is given by

(2.3)    c²σ² / β'Mβ .

It is a question whether the asymptotic value above is really the limit of the mean square error. As will be shown in the next section, this is true for the case p ≥ 3, but if p ≤ 2, the mean square error is always infinite.

The asymptotic Bayes solution is given by minimizing

    (β̂'x − c)² + x'M⁻¹x σ̂² ,

the solution of which is given by

(2.4)    x = (M⁻¹σ̂² + β̂β̂')⁻¹ cβ̂ = cMβ̂ / (β̂'Mβ̂ + σ̂²) .

It should be remarked that the two solutions given in (2.2) and (2.4) differ only in the scale factor.
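Both solutions are of the form x = cMβ̂/(β̂'Mβ̂ + kσ̂²), with k = 0 for ABC and k = 1 for AB. A minimal sketch in Python, assuming β̂, M and σ̂² have already been computed from the sample (the helper name is ours):

```python
def control_level(beta_hat, M, c, k=0.0, sigma2_hat=0.0):
    """x = c * M beta_hat / (beta_hat' M beta_hat + k * sigma2_hat).

    k = 0 gives the ABC solution (2.2); k = 1 gives the AB solution (2.4).
    """
    p = len(beta_hat)
    Mb = [sum(M[i][j] * beta_hat[j] for j in range(p)) for i in range(p)]
    denom = sum(beta_hat[i] * Mb[i] for i in range(p)) + k * sigma2_hat
    return [c * v / denom for v in Mb]

beta_hat = [2.0, 1.0]
M = [[2.0, 0.0], [0.0, 1.0]]
x_abc = control_level(beta_hat, M, c=1.0)                         # (2.2)
x_ab = control_level(beta_hat, M, c=1.0, k=1.0, sigma2_hat=0.9)   # (2.4)
```

For `x_abc` the constraint β̂'x = c holds exactly, while `x_ab` shrinks the same direction by the factor β̂'Mβ̂/(β̂'Mβ̂ + σ̂²).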

If we take as the prior density for β and σ the pseudo-density of the form

    p(β,σ) ∝ 1/σ ,

then the posterior distribution of β, given β̂ and σ̂², is known to be a multivariate t-distribution ([1]). Hence, the posterior mean and posterior variance are equal to β̂ and qM⁻¹σ̂²/(q−2), respectively (q being the number of degrees of freedom of σ̂²), so that the Bayes solution is, as was shown by Zellner and Chetty [4],

    x = cMβ̂ / ( β̂'Mβ̂ + qσ̂²/(q−2) ) ,

which is nearly equivalent to (2.4) when q is large.

In some cases, the independent variable x cannot take arbitrary values. Often it is required that some linear restriction

(2.5)    Ax = b

be satisfied. In (2.5) we assume that A is an r × p matrix of rank r (r < p) and b is a constant r-dimensional vector.

Then, for ABC, x is determined to minimize x'M⁻¹x under the condition β̂'x = c as well as (2.5), and it is given by

(2.6)    x = M(A'λ + μβ̂) ,

where λ is an r-dimensional vector of Lagrange multipliers, and μ is another Lagrange multiplier. Both λ and μ are determined from

    β̂'MA'λ + μ β̂'Mβ̂ = c ,

    AMA'λ + μ AMβ̂ = b ,

from which it is derived that

    λ = (AMA')⁻¹ (b − μ AMβ̂) ,

    μ = ( c − β̂'MA'(AMA')⁻¹b ) / ( β̂'Mβ̂ − β̂'MA'(AMA')⁻¹AMβ̂ ) .

Hence

    x = MA'(AMA')⁻¹b + μ { Mβ̂ − MA'(AMA')⁻¹AMβ̂ } .
Similarly, the AB function has the form

(2.7)    x = M(A'λ + μβ̂) ,

with λ and μ now determined from the minimization of (β̂'x − c)² + x'M⁻¹x σ̂² under (2.5).
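For a single restriction (r = 1), λ and μ are scalars and the pair of equations above can be solved directly. A small sketch under that assumption (variable names are ours):

```python
def restricted_abc(beta_hat, M, a, b, c):
    """ABC solution (2.6) under beta_hat'x = c and one linear restriction a'x = b:
    x = M(a*lam + mu*beta_hat), with scalar multipliers lam, mu from the
    two constraint equations."""
    p = len(beta_hat)
    def matvec(v):
        return [sum(M[i][j] * v[j] for j in range(p)) for i in range(p)]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    Ma, Mb = matvec(a), matvec(beta_hat)
    aMa, aMb, bMb = dot(a, Ma), dot(a, Mb), dot(beta_hat, Mb)
    mu = (c - aMb * b / aMa) / (bMb - aMb ** 2 / aMa)
    lam = (b - mu * aMb) / aMa
    return [lam * Ma[i] + mu * Mb[i] for i in range(p)]

x = restricted_abc([1.0, 2.0], [[2.0, 1.0], [1.0, 3.0]],
                   a=[1.0, 1.0], b=2.0, c=3.0)
```

By construction the returned x satisfies both a'x = b and β̂'x = c.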

A special example of such a restriction is the case when a constant term is included in the model, i.e.,

(2.8)    E(Y) = β'x = β₀ + β₁'x₁ ,

where β₀ is a scalar. In this case, we can write

    x = (1, x₁')' ,

i.e., the first component of x is always equal to 1. We shall redefine the parameters as

(2.9)    E(Y) = β̃₀ + β̃₁'x̃₁ ,

so that for the sample values the mean vector of x̃₁ is equal to zero. Then

    M = diag(1, M̃) ,

where M̃ is the moment matrix of x̃₁, i.e., the matrix of second order moments of x₁ around the mean.

For simplicity of notation, we shall omit the tildes of (2.9); it then immediately follows that the ABC solution is determined so as to minimize x₁'M⁻¹x₁ under the condition that β̂₀ + β̂₁'x₁ = c, i.e.,

(2.10)    x₁ = (c − Ȳ) Mβ̂₁ / β̂₁'Mβ̂₁ ,

where Ȳ = β̂₀ is the mean of the Y values in the sample. If the mean vector of x₁ is not equal to 0, then x₁ in (2.10) should be replaced by x₁ − x̄₁, where x̄₁ is the mean vector.

Similarly, for the AB function, x₁ is determined as

(2.11)    x₁ = x̄₁ + (c − Ȳ) Mβ̂₁ / ( β̂₁'Mβ̂₁ + σ̂² ) .

3. Exact Formula for the First Two Moments of the Error

So far the justification of the particular functions has been given in asymptotic terms. In this section we shall consider the finite sample properties of the procedures of the previous section.

We shall consider functions of the form

(3.1)    x = cMβ̂ / ( β̂'Mβ̂ + a² ) ,

where a² = 0 in (3.1) for an ABC function and a² = σ̂² for an AB function. More generally, we shall consider the case where a² = kσ̂² and k is a positive constant.

Without loss of generality, we can assume that c = 1, and we consider the first two moments of the error, i.e.,

    E(β'x − 1)    and    E(β'x − 1)² .

For the moment we shall assume that a is a constant, which corresponds to the case when σ² is known. Under the normality assumption, it is well known that β'Mβ̂ is distributed normally with mean β'Mβ and variance σ²β'Mβ. Also, β̂'Mβ̂/σ² is distributed according to a non-central chi-square distribution with p degrees of freedom and with non-centrality parameter equal to τ = β'Mβ/σ².

Let us define

    Z = β'Mβ̂ / (β'Mβ)^{1/2}    and    Y = β̂'Mβ̂ − Z² ,

and write φ = (β'Mβ)^{1/2}, so that

    β'x = φZ / ( Y + Z² + a² ) .

Then it can easily be shown that Z is normally distributed

with mean φ and variance σ², and Y/σ² is distributed independently of Z according to a chi-square distribution with p − 1 degrees of freedom (assuming p ≥ 2). Hence,

(3.2)    E(β'x) = φ E{ Z / (Y + Z² + a²) }

         = φ / { 2√(2π) Γ((p−1)/2) σ^p } ∫₀^∞ ∫_{−∞}^∞ { z/(y + z² + a²) } (y/2)^{(p−3)/2} e^{−(y + z² + φ²)/2σ² + φz/σ²} dy dz .

In order to calculate this value, we shall define, for s > 0,

(3.3)    H_{a,b}(s) = ∫₀^∞ ∫_{−∞}^∞ { z/(y + z² + a²) } (y/2)^{(p−3)/2} e^{−(y + z²)/2s + bz} dy dz .

Then

(3.4)    E(β'x) = φ e^{−φ²/2σ²} H_{a,b}(σ²) / { 2√(2π) Γ((p−1)/2) σ^p } ,    with b = φ/σ² .

Differentiating (3.3) with respect to s (it is easy to see that differentiation under the integral sign is permitted in this case), we have

the differential equation

(3.5)    H'_{a,b}(s) + (a²/2s²) H_{a,b}(s) = √(2π) Γ((p−1)/2) b s^{p/2−1} e^{b²s/2} .

The solution of (3.5) is easily shown to be

    H_{a,b}(s) = e^{a²/2s} [ √(2π) Γ((p−1)/2) b ∫ s^{p/2−1} e^{b²s/2 − a²/2s} ds + c ] ,

where c is the integration constant. Since H_{a,b}(s) → 0 as s → 0, we have c = 0, i.e.,

(3.6)    H_{a,b}(s) = √(2π) Γ((p−1)/2) b e^{a²/2s} ∫₀^s t^{p/2−1} e^{b²t/2 − a²/2t} dt .

Finally, in view of (3.4), and substituting t → σ²t/τ in the integral, we have

(3.7)    E(β'x) = ½ τ^{−p/2+1} e^{α²/2 − τ/2} ∫₀^τ t^{p/2−1} e^{t/2 − α²τ/2t} dt ,

where τ = φ²/σ² and α² = a²/σ².

It is easy to see that formula (3.7) can be applied also in the case p = 1 provided a > 0; if p = 1 and a = 0, then β'x = φ/Z and

    E(β'x) = E(φ/Z)

does not exist. Since the expression (3.7) can be rewritten as

(3.8)    E(β'x) = ½ τ^{−p/2+1} e^{−τ/2} ∫₀^τ t^{p/2−1} e^{t/2} e^{−(α²/2)(τ/t − 1)} dt ,

it follows that E(β'x) is monotone decreasing in α (α ≥ 0). When p ≥ 2 and a = 0, we have

(3.9)    E(β'x) = ½ τ^{−p/2+1} e^{−τ/2} ∫₀^τ t^{p/2−1} e^{t/2} dt ,

which is easily shown to be monotone increasing in τ, to tend to zero when τ tends to zero, and to tend to one as τ tends to infinity.
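Formula (3.9) is easy to check numerically: evaluate the integral by Simpson's rule and compare with a direct Monte Carlo simulation of β'x = φZ/(Y + Z²), with Z ~ N(φ, 1) and Y ~ χ²(p−1), taking σ = 1 and c = 1. A sketch (the function names are ours):

```python
import math
import random

def expected_level(p, tau, n=2000):
    """Right side of (3.9), with the integral over [0, tau] by Simpson's rule."""
    h = tau / n
    f = lambda t: t ** (p / 2 - 1) * math.exp(t / 2) if t > 0 else 0.0
    s = f(0.0) + f(tau) + sum(f(i * h) * (4 if i % 2 else 2) for i in range(1, n))
    return 0.5 * tau ** (1 - p / 2) * math.exp(-tau / 2) * s * h / 3

def simulated_level(p, tau, reps=100_000, seed=1):
    """Monte Carlo estimate of E(beta'x) for a = 0, sigma = 1."""
    rng = random.Random(seed)
    phi = math.sqrt(tau)
    total = 0.0
    for _ in range(reps):
        z = rng.gauss(phi, 1.0)
        y = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(p - 1))
        total += phi * z / (y + z * z)
    return total / reps
```

For p = 4, τ = 10 the integral can also be done in closed form, giving (τ−2)/τ + 2e^{−τ/2}/τ ≈ 0.8013; both routines reproduce this value, and the negative bias E(β'x) − 1 < 0 is clearly visible.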

Hence, it is established that

    E(β'x) − 1

is always negative, monotone increasing in τ and hence in φ², and decreasing in a or in α. Thus the bias of the controlled level is always negative, and the absolute value of the bias is smaller for ABC than for AB or for Bayes procedures. When p = 1, (3.9) can be regarded as the limit of E(β'x) as α → 0; hence, similar results hold true also in this case, with only slight modifications.

As for the second order moment,

    E(β'x)² = φ² E{ Z²/(Y + Z² + a²)² }

            = φ² / { 2√(2π) Γ((p−1)/2) σ^p } ∫₀^∞ ∫_{−∞}^∞ { z²/(y + z² + a²)² } (y/2)^{(p−3)/2} e^{−(y + z² + φ²)/2σ² + φz/σ²} dy dz .

We shall define

(3.10)    G_{a,b}(s) = ∫₀^∞ ∫_{−∞}^∞ { z²/(y + z² + a²)² } (y/2)^{(p−3)/2} e^{−(y + z²)/2s + bz} dy dz .

Then

(3.11)    E(β'x)² = φ² e^{−φ²/2σ²} G_{a,b}(σ²) / { 2√(2π) Γ((p−1)/2) σ^p } ,    with b = φ/σ² .
Differentiating (3.10) twice with respect to s removes the factor (y + z² + a²)⁻² under the integral sign, in the same way that a single differentiation disposed of (y + z² + a²)⁻¹ in (3.5), and leads to a second order linear differential equation (3.12) for G_{a,b} which can be integrated explicitly; the constants of integration are determined by considering the limiting cases s → ∞ and s → 0. Carrying this out and substituting into (3.11), we obtain

(3.13)    E(β'x)² = ¼ τ^{−p/2+1} e^{−τ/2} ∫₀^τ (τ − t)(1 + t) t^{p/2−2} e^{t/2 − (α²/2)(τ/t − 1)} dt ,

where τ = φ²/σ² and α² = a²/σ², as in (3.7).

The integral in (3.13) is monotone decreasing in α, which can be shown quite similarly as for (3.8). Equation (3.13) holds true for all p if a > 0, and for p ≥ 3 if a = 0; if p ≤ 2, E(β'x)² tends to infinity as a → 0.

Thus for the mean square error

    E(β'x − 1)² = { E(β'x)² − 1 } − 2{ E(β'x) − 1 } ,

the first term decreases while the second term increases as a increases.

As for the more general situation where a² = kσ̂², with σ̂² an estimator of σ² independent of β̂, (3.7) and (3.13) can be regarded as formulae for the conditional moments given σ̂²; hence unconditional moments can be obtained by taking expectations of these with respect to σ̂².

Assume that σ̂² = Wσ²/q, where W is a chi-square variable with q degrees of freedom and is independent of β̂, and that a² = kσ̂², i.e. α² = kW/q, where k ≥ 0 is a constant.

Then, from (3.7), the unconditional expectation of β'x is given by

(3.14)    E(β'x) = ½ τ^{−p/2+1} e^{−τ/2} ∫₀^τ t^{p/2−1} e^{t/2} ( 1 + k(τ−t)/qt )^{−q/2} dt ,

provided that either p ≥ 2 or k > 0.

Similarly, from (3.13),

(3.15)    E(β'x)² = ¼ τ^{−p/2+1} e^{−τ/2} ∫₀^τ (τ − t)(1 + t) t^{p/2−2} e^{t/2} ( 1 + k(τ−t)/qt )^{−q/2} dt ,

assuming p ≥ 3 or k > 0.
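As a numerical check of (3.14), one can integrate its right side by Simpson's rule and compare with a Monte Carlo simulation in which W ~ χ²(q) and β'x = φZ/(Y + Z² + kW/q), taking σ = 1 and c = 1. A sketch (the function names are ours):

```python
import math
import random

def expected_level_estimated_sigma(p, q, k, tau, n=4000):
    """Right side of (3.14), with the integral over [0, tau] by Simpson's rule."""
    h = tau / n
    def f(t):
        if t <= 0.0:
            return 0.0
        return (t ** (p / 2 - 1) * math.exp(t / 2)
                * (1.0 + k * (tau - t) / (q * t)) ** (-q / 2))
    s = f(0.0) + f(tau) + sum(f(i * h) * (4 if i % 2 else 2) for i in range(1, n))
    return 0.5 * tau ** (1 - p / 2) * math.exp(-tau / 2) * s * h / 3

def simulate_estimated_sigma(p, q, k, tau, reps=100_000, seed=2):
    """Monte Carlo estimate of E(beta'x) with a^2 = k * W / q, W ~ chi2(q)."""
    rng = random.Random(seed)
    phi = math.sqrt(tau)
    total = 0.0
    for _ in range(reps):
        z = rng.gauss(phi, 1.0)
        y = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(p - 1))
        w = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(q))
        total += phi * z / (y + z * z + k * w / q)
    return total / reps
```

Setting k > 0 lowers E(β'x) relative to the k = 0 case, in agreement with the monotone decrease in α noted after (3.8).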


4. Asymptotic Expansion when τ Is Large

Generally, τ = β'Mβ/σ² can be expected to be large, so that it is worthwhile to consider the limiting case when τ is large.

Assuming τ is large, substitute t = τ − u in (3.14) and expand the factors (1 − u/τ)^{p/2−1} and ( 1 + ku/q(τ−u) )^{−q/2} of the integrand in powers of 1/τ. Integrating term by term against e^{−u/2} gives

    E(β'x) = 1 − (p+k−2)/τ + { (p+k−2)(p+k−4) − 2k(1 − k/q) } /τ² + o(1/τ²) .

Similarly, expanding (3.15),

    E(β'x)² = 1 − { 2(p+k) − 5 }/τ + O(1/τ²) ,

and, carrying both expansions to the order 1/τ², we have finally

(4.1)    E(β'x − 1)² = E(β'x)² − 2 E(β'x) + 1

                     = 1/τ + { (p+k−4)² − 2k(1 − k/q) } /τ² + o(1/τ²) ,

provided either p ≥ 3 or k > 0.

It should be remarked that in (4.1) the first term is independent of all the parameters p, q, k, and is actually equal to the asymptotic value given in (2.3). Hence, at least when τ is large enough, the choice of k does not affect the situation very much, provided only that when p = 1 or 2, k must not be put equal to 0.

The second term is minimized when

(4.2)    k = max ( q(5−p)/(q+2) , 0 ) .

Hence, we can recommend, as an at least asymptotically good strategy, adopting the value of k given by (4.2) or, assuming q is large, the simpler one, i.e.,

(4.3)    k = max (5−p, 0) .
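The comparison of values of k implied by (4.1) and (4.2) can be sketched as follows (the function names are ours):

```python
def approx_mse(p, q, k, tau):
    """Leading terms of (4.1): 1/tau + ((p+k-4)^2 - 2k(1 - k/q)) / tau^2."""
    return 1.0 / tau + ((p + k - 4) ** 2 - 2 * k * (1 - k / q)) / tau ** 2

def recommended_k(p, q):
    """(4.2): k = max(q(5-p)/(q+2), 0); roughly max(5-p, 0) for large q."""
    return max(q * (5 - p) / (q + 2), 0.0)

k_star = recommended_k(2, 20)   # p = 2, q = 20 -> 60/22
```

For p ≥ 5 the recommended k is 0, i.e. the ABC function itself; for p ≤ 2 a strictly positive k results, in line with the remark that k must not vanish there.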

When the constant term is included in the model, it is necessary, from (2.10) and (2.11), that we consider a function of the type

(4.4)    x₁ = x̄₁ + (c − Ȳ) Mβ̂₁ / ( β̂₁'Mβ̂₁ + kσ̂² ) ,

where M is the moment matrix of the x₁ values around the mean. Then, under the assumption of normality, Ȳ and (β̂₁, σ̂²) are independent, and it is shown that

(4.5)    E(β'x − c) = −(β'x̄ − c) E{ β₁'Mβ̂₁/(β̂₁'Mβ̂₁ + kσ̂²) − 1 } ;

(4.6)    E(β'x − c)² = (β'x̄ − c)² E{ β₁'Mβ̂₁/(β̂₁'Mβ̂₁ + kσ̂²) − 1 }² + (σ²/n) E{ β₁'Mβ̂₁/(β̂₁'Mβ̂₁ + kσ̂²) }² ;

and the moments of β₁'Mβ̂₁/(β̂₁'Mβ̂₁ + kσ̂²) are given by the same formulae as obtained thus far, only substituting β₁'Mβ₁/σ² for τ. Hence, the only difference which matters comes from the second term of (4.6), which we can assume, however, to be nearly equal to σ²/n when τ is large, and the influence of k on this term may be negligible when n is large.

References

[1] Dunnett, C. W. and Sobel, M. "A bivariate generalization of Student's t-distribution, with tables for certain special cases." Biometrika, vol. 41, 1954, 153-169.

[2] Le Cam, L. "On some asymptotic properties of maximum likelihood estimates and related Bayes' estimates." University of California Publications in Statistics, vol. 1, 1953, 277-330.

[3] Raiffa, H. and Schlaifer, R. Applied Statistical Decision Theory. Harvard University, 1961.

[4] Zellner, A. and Chetty, V. K. "Prediction and decision problems in regression models from the Bayesian point of view." Journal of the American Statistical Association, vol. 60, 1965, 608-616.
