					CSE 543T Algorithms for Nonlinear
Optimization: Homework 2
Due: March 8, Tuesday, 10am

1. Problem 2.1.6. (10%)

2. Problem 2.1.12, part (a). (10%)

3. Problem 3.1.1, parts (a) and (b). (10%)

4. Experimental study of unconstrained optimization methods.

TAO implements a number of unconstrained optimization methods, including
the Limited Memory Variable Metric (LMVM) method (a quasi-Newton method),
conjugate gradient methods, and Newton's methods. Study and compare these
methods.

(a) LMVM is a quasi-Newton method that requires only function value and
gradient information. This algorithm is the default solver of TAO and can
be selected using the TaoMethod tao_lmvm. LMVM keeps a limited history of
previous points and previous gradients to approximate the second-order
information, i.e., the Hessian. The number of solutions and gradients at
previous iterations that are kept to approximate the Hessian can be set
with the command line argument -tao_lmm_vectors <int>; 5 is the default
value. Increasing the number of vectors results in a better Hessian
approximation and can decrease the number of iterations required to
compute a solution to the optimization problem. However, as the number of
vectors increases, more memory is consumed and each direction calculation
takes longer to compute.

Use rosenbrock.c, Rastrigin, or any other unconstrained problem in
examples/ to study this tradeoff. First modify the problem file to use a
sufficiently large problem size (n) so that the timing is substantial.
Then try different -tao_lmm_vectors <int> (for example <int> = 1, 2, 3, 4,
5, 10, 20, 50, 100, ...) and report the number of iterations, the CPU time
per iteration, and the total CPU time. What is the optimal TaoLMVMSetSize
(in terms of total CPU time) for your problem? What kind of tradeoff do
you observe? Does the observation make sense to you? (10%)
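
For instance, assuming the Rosenbrock example compiles to an executable
named rosenbrock (the actual name depends on your build), one run in this
sweep, with monitoring enabled (see the hint at the end of this problem),
might look like:

   ./rosenbrock -tao_lmm_vectors 10 -tao_monitor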

(b) Three variations of conjugate gradient methods are currently supported
in TAO: the Fletcher-Reeves method, the Polak-Ribière method, and the
Polak-Ribière-Plus method, which can be specified using the command line
argument -tao_cg_type <fr,pr,prp>, respectively. Each of these conjugate
gradient methods incorporates automatic restarts when successive gradients
are not sufficiently orthogonal. TAO measures the orthogonality by
dividing the inner product of the gradient at the current point and the
gradient at the previous point by the square of the Euclidean norm of the
gradient at the previous point. (See eq. (1.174) on page 140 of the
textbook.)

When the absolute value of this ratio is greater than eta, the algorithm
restarts using the gradient direction. The parameter eta can be set using the
command line argument -tao_cg_eta <double>; 0.1 is the default value.
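
In symbols, restating the test above with g_k denoting the gradient at the
previous point and g_{k+1} the gradient at the current point, a restart is
triggered when

   | g_{k+1}^T g_k | / ||g_k||^2 > eta,

in which case the algorithm continues from the gradient (steepest-descent)
direction.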

Pick one of the three methods, use the same unconstrained optimization
problem you have used for (a), and try different eta (e.g., eta = 1, 1e-1,
1e-2, ..., 1e-8, ...). For each eta, report the number of iterations and
the total CPU time. What is the optimal eta? What observations can you
make about the performance when eta is too large or too small,
respectively? (15%)

(c) Using the same unconstrained optimization problem you have used for (a)
and (b), study the asymptotic performance of LMVM and conjugate gradient
(any one of the three variants) when the problem size (n) is very large.
Which method seems to scale better as the problem becomes larger? Show
experimental data to support your conclusion. (10%)

(d) In TAO, write the quadratic unconstrained optimization problem defined
in (1.137). Choose your own Q matrix (do not use Q = I, and make sure it
is positive definite) and b vector (do not use b = 0). In the problem
file, you should have a parameter n that specifies the number of
variables. For n = 10, 100, 1000, apply one of the three conjugate
gradient methods from (b) using the default eta. For each n, report the
number of iterations taken and the number of restarts that have been
invoked. Do the statistics agree with the theoretical properties of
conjugate gradient methods? Explain why. (15%)
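
For illustration only (a minimal sketch of one admissible choice, not
required code, and written independently of the TAO calling conventions):
the tridiagonal Q with 2 on the diagonal and -1 on the off-diagonals is
symmetric positive definite, and b can be taken as the all-ones vector.
The core computation your problem file must perform is:

   /* Evaluate f(x) = 0.5*x'Qx - b'x and its gradient g = Qx - b for the
      tridiagonal Q (2 on the diagonal, -1 on the off-diagonals) and
      b_i = 1.  This Q is the 1-D Laplacian matrix, which is symmetric
      positive definite for every n.  Wrap this computation inside your
      TAO function/gradient evaluation routine. */
   void quadratic_fg(int n, const double *x, double *f, double *g)
   {
       double fx = 0.0;
       for (int i = 0; i < n; i++) {
           double qx = 2.0 * x[i];            /* (Qx)_i                 */
           if (i > 0)     qx -= x[i - 1];
           if (i < n - 1) qx -= x[i + 1];
           g[i] = qx - 1.0;                   /* gradient: Qx - b       */
           fx  += 0.5 * x[i] * qx - x[i];     /* accumulate 0.5x'Qx-b'x */
       }
       *f = fx;
   }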

Hint: i) By default the TAO solvers run silently without displaying
information about the iterations. The user can initiate monitoring with the
command

 int TaoSetMonitor(TAO_SOLVER solver,
           int (*mon)(TAO_SOLVER tao,void* mctx),
           void *mctx);

The argument mon specifies a user-defined monitoring routine, and mctx
denotes an optional user-defined context holding private data for the
monitor routine. The routine set by TaoSetMonitor() is called once during
each iteration of the optimization solver. Hence, the user can employ this
routine for any application-specific computations that should be done
after the solution update. The option -tao_monitor activates the default
monitoring routine, TaoDefaultMonitor().
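
For instance, a minimal custom monitor matching the signature above (a
hypothetical routine, shown only to illustrate the callback shape) could
count iterations through the user-supplied context:

   /* Called by TAO once per solver iteration; mctx is the address of an
      int counter supplied by the user via TaoSetMonitor(). */
   int CountingMonitor(TAO_SOLVER tao, void *mctx)
   {
       int *iterations = (int *)mctx;
       (*iterations)++;
       return 0;
   }

   /* In main(), after creating the solver:
          int iterations = 0;
          TaoSetMonitor(solver, CountingMonitor, &iterations);  */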

ii) Visit http://www-unix.mcs.anl.gov/tao/docs/manual/manual.html for
more help.

5. Experimental study of bound-constrained optimization methods.

(a) For the two test problems plate2.c and jbearing2.c (available at
    http://www-unix.mcs.anl.gov/tao/documentation/examples.html), evaluate
    the following methods: the Newton Trust Region method, the Gradient
    Projection-Conjugate Gradient method, the Interior Point method, and
    the Limited Memory Variable Metric method. A list of the TaoMethod
    names for these methods can be found at
    http://www-unix.mcs.anl.gov/tao/docs/manualpages/solver/TaoCreate.html

For each problem and each solver, list the CPU time, number of iterations,
and final objective function value. (10%)
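
For instance, assuming plate2.c compiles to an executable named plate2,
and using a TaoMethod name that should be verified against the TaoCreate
page above (tao_blmvm here is an assumed name for the limited memory
variable metric solver), a single run might look like:

   ./plate2 -tao_method tao_blmvm -tao_monitor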

(b) For jbearing2.c, the default setting is user.nx = 50; user.ny = 50.
Study the scalability of the four methods by increasing both user.nx and
user.ny (e.g., to 500, 5000, 50000, ...). Draw conclusions about which
solver scales best and which scales worst. Show experimental data to
support your conclusion. (10%)
