# Projected Hessian updating algorithms for nonlinearly constrained optimization

Jorge Nocedal

Algorithm 3.1 (partial Broyden method):

Choose a convergence tolerance TOL.
Choose x_0 and B_0, where B_0 is a t × n matrix.
Evaluate f_0, g_0, c_0, A_0.
Factor A_0 to obtain [Y_0 Z_0].
Set k ← 0.
Repeat:
    Solve [A_k′ ; B_k] p_k = −[c_k ; Z_k′ g_k] for p_k.
    ⋮

Theorem 3.1: Let Assumption 1 hold. Then there exist $\varepsilon > 0$ and $\delta > 0$ such that if $\|x_0 - x_*\| < \varepsilon$ and $\|B_0 - Z(x_*)'\,W(x_*, \lambda_*)\| < \delta$, then Algorithm 3.1 is well defined and $x_k \to x_*$ at a one-step Q-superlinear rate.

Proof: The Jacobian of (2.10) is given by (2.13), which is easily seen to be Lipschitz continuous at $x = x_*$. Furthermore, (2.13) reduces to (2.11) at $x = x_*$, where it is nonsingular by Assumption 1 (see the remarks following (2.12)). The proof follows almost immediately from the results given by Dennis and Moré (1977, p. 58). The only modification necessary is to check that the partial Broyden update, using the exact A(x) instead of an approximation, retains the superlinear convergence property. This is easily verified. ∎
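Since the exact A(x)′ occupies the top block of the Jacobian approximation, only B_k is updated, and the update is Broyden's rank-one formula applied to a t × n matrix. The following Python fragment is a minimal sketch of that idea; it takes s_k and y_k as given inputs, since the precise definition (3.2d) of y_k is not reproduced on this page, and the function name and calling convention are illustrative assumptions.

```python
import numpy as np

def partial_broyden_update(B, s, y):
    """Broyden-style rank-one update of the t-by-n block B.

    B approximates Z(x_*)' W(x_*, lambda_*); s is the step in x
    (an n-vector) and y is a t-vector playing the role of (3.2d).
    The top block A(x)' of the Jacobian is exact, so only B changes.
    """
    sts = float(s @ s)
    if sts == 0.0:
        return B  # degenerate step: leave B unchanged
    return B + np.outer(y - B @ s, s) / sts
```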

One interesting aspect of the partial Broyden method is that λ does not appear anywhere in the statement of the algorithm. The multipliers are involved only implicitly, in the form of the Jacobian (2.11). However, (3.2d) is not the only possible definition of y_k giving a t × n update with Q-superlinear convergence. In particular, it is clear from the results in the next section that replacing (3.2d) by any of (4.1d, e or f) will give a Q-superlinearly convergent algorithm. The proof is substantially longer, since these definitions of y_k do not result from applying a Broyden or partial Broyden method to any system of equations such as (2.10).

Let us now make some more general remarks. We define a method to be a linearized constraint quasi-Newton method for solving (1.1) if it consists of the iteration:

$$
\begin{bmatrix} A(x_k)' \\ B_k \end{bmatrix} p_k
= -\begin{bmatrix} c(x_k) \\ Z(x_k)'\, g(x_k) \end{bmatrix},
\qquad x_{k+1} = x_k + p_k, \tag{3.3}
$$

where we assume only that $R(Z(x)) = N(A(x)')$, with Z(x) continuously differentiable in the region of interest, and where {B_k} is any sequence of t × n matrices.
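To make the form of (3.3) concrete, here is a minimal Python sketch of one iteration under the stated assumptions: A(x) returns the n × m matrix of constraint gradients, Z(x) returns an n × t matrix whose columns span the null space of A(x)′, and c and g return the constraint and gradient vectors. All four callables are hypothetical stand-ins, not part of the paper.

```python
import numpy as np

def lcqn_step(x, B, c, g, A, Z):
    """One iteration of the linearized constraint quasi-Newton form (3.3).

    The n-by-n coefficient matrix stacks the exact A(x)' (m rows) on top
    of the current t-by-n approximation B; the right-hand side stacks
    the constraint values and the reduced gradient Z(x)' g(x).
    """
    K = np.vstack([A(x).T, B])                     # (m + t)-by-n = n-by-n
    rhs = -np.concatenate([c(x), Z(x).T @ g(x)])
    p = np.linalg.solve(K, rhs)                    # step p_k
    return x + p
```

Nothing in this form restricts how the next B is produced; the sequence {B_k} is arbitrary, which is what makes the class broad.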

We have from the discussion in Section 2 that the Newton iteration (2.4), or equivalently (2.6) or (2.7), and the Newton-type iteration (2.12), are linearized constraint quasi-Newton methods. The same is true of the methods of Han, Tapia and Powell, which replace W in (2.4) by a suitable approximation, and of the partial Broyden method given by Algorithm 3.1. Now an iteration of the form (3.3), which is intended to solve (1.1), may equivalently be viewed as a method for solving the nonlinear system (2.10). We may therefore apply the Dennis and Moré characterization of Q-superlinear convergence (see Dennis and Moré (1977, p. 51)) and immediately obtain, using (2.11):

Theorem 3.2: Let Assumption 1 hold. If a linearized constraint quasi-Newton method (3.3) produces a sequence {x_k} converging to $x_*$, then the convergence is one-step Q-superlinear if and only if

$$
\lim_{k \to \infty} \frac{\| (B_k - Z(x_*)'\, W(x_*, \lambda_*))\, p_k \|}{\| p_k \|} = 0.
$$
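If a good estimate of Z(x_*)′W(x_*, λ_*) is available, the quantity in Theorem 3.2 can be monitored directly as a convergence diagnostic. A minimal sketch, where ZW_star is an assumed stand-in for that matrix:

```python
import numpy as np

def dennis_more_ratio(B, ZW_star, p):
    """Ratio from the Dennis-More characterization in Theorem 3.2.

    Convergence is one-step Q-superlinear iff this quantity tends to
    zero along the steps p_k produced by iteration (3.3).
    """
    return np.linalg.norm((B - ZW_star) @ p) / np.linalg.norm(p)
```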

Both papers suggest using the Powell (1978a) modification of the BFGS update to guarantee the property of inherited positive definiteness. Following Powell's results, they show that under certain conditions, which include the assumptions that $x_k \to x_*$ and that the approximation matrices remain bounded, a two-step Q-superlinear rate of convergence is achieved.
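For reference, Powell's modification replaces y by a convex combination of y and Bs whenever the curvature s′y is small relative to s′Bs, which forces the BFGS update to inherit positive definiteness. The sketch below uses the customary threshold 0.2; note that B here denotes a full symmetric positive definite Hessian approximation, as in the Han and Powell methods, not the t × n matrix of Algorithm 3.1.

```python
import numpy as np

def damped_bfgs_update(B, s, y, sigma=0.2):
    """BFGS update with Powell's (1978a) damping.

    Assumes B is symmetric positive definite and s is nonzero.  If
    s'y < sigma * s'Bs, y is pulled toward Bs just enough that the
    damped curvature equals sigma * s'Bs, so the updated matrix
    remains positive definite.
    """
    Bs = B @ s
    sBs = float(s @ Bs)
    sy = float(s @ y)
    if sy < sigma * sBs:
        theta = (1.0 - sigma) * sBs / (sBs - sy)   # in (0, 1)
        y = theta * y + (1.0 - theta) * Bs         # damped y
        sy = float(s @ y)                          # now sigma * sBs
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```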

Let us examine definitions (a) and (b) more closely. It is clearly possible to construct examples where $\|s_k\|$ is very small (or zero) while $\|y_k\|$, using any of (d) through (f), is not.
