Downhill Simplex Method

Some time ago, Leonard Howell asked me if I had an APL program that implements the Nelder-Mead downhill simplex method for finding the minimum of an n-dimensional function. In response, I adapted an algorithm from the book Numerical Recipes in C to APL. I periodically get requests from other APLers for the same program, ergo this web page. (Also, people can't figure out how to use the program, so I've added an example.)

The authors of NRC claim copyright to the programs in their book and to any "source-code-to-source-code" translations into other languages (though not to "reëxpressions" of the ideas in their programs). My APL code looks a whole lot different than their C program and uses some different techniques, but I couldn't tell you whether I've "reëxpressed" their ideas or merely translated them. To avoid violating their copyright, you should buy a copy of their book before using my DSMIN program. This is something I can heartily endorse--the book is a great source of useful numerical algorithms that belongs on every mathematical programmer's bookshelf. The authors' practical and humorous attitude makes it a delight to read. Plus, you'll need the book for instructions on choosing the arguments and for caveats on the algorithm.


From: Jim Weigang 
To: Leonard.W.Howell.Jr@msfc.nasa.gov (Leonard Howell)
Date: Tue, 12 Dec 1995 23:52:14 -0500
Subject: Minimization

Here's an implementation of the Nelder and Mead algorithm.  Except for a
couple of strand assignments and a nested right argument, it's written
in standard APL.  #IO is assumed to be 1.  You may want to revise the
program somewhat to limit the number of iterations or use a different
stopping criterion (e.g., something based on changes in the X matrix).

   Because I created the program by translating and paraphrasing C code
in the Numerical Recipes book, I believe the authors might claim they
own this code.  Use it at your own risk.

                                                Jim






Some users have had trouble figuring out how to use DSMIN (and it even takes me a while to figure it out), so here's an example. As described in the comments, the minimization function should be monadic, accept a matrix having an x-vector in each row, and return a vector of y-values. Here's a suitable function:
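A minimal sketch of such a function (a simple quadratic chosen to match the description below; the exact FCN1 used in the original session may have differed) is:

    ∇ Y←FCN1 X
[1]  ⍝ One y-value per row of X; minimum at x1=3, x2=0 (#IO is 1)
[2]  Y←((X[;1]-3)*2)+X[;2]*2
    ∇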



The y value is just the sum of two terms that have minima at x1=3 and x2=0. The argument to the function should be a matrix with each row containing an x1,x2 pair, as in:
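For instance, in a hypothetical session using the FCN1 sketched above:

      X←3 2⍴2 1 4 1 3 2
      X
2 1
4 1
3 2
      FCN1 X
2 2 4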



The minimization function's name (as a character string) is the left argument to DSMIN; the right argument is the initial simplex (n+1 points in n-space) and the stopping criterion.
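A hypothetical call, with an arbitrary unit simplex near the origin and an arbitrary tolerance of 1E¯8 (not the values from the original session), would look like:

      S←3 2⍴0 0 1 0 0 1          ⍝ initial simplex: 3 vertices in 2-space
      R←'FCN1' DSMIN S 1E¯8      ⍝ right argument: simplex, then stopping criterion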



The result is the final simplex, with the point having the lowest y-value in the first row. So the minimum found by DSMIN is at x1=2.999998186 and x2=-0.000005463061221, which is 3 and 0 to about five decimal places. (The stopping criterion is the fractional change in y-value, so it doesn't guarantee a specific precision in the x-values.) The NRC authors advise calling the function a second time with the first vertex being replaced by the minimum found in the first call:
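A sketch of such a restart (R being the simplex returned by the first call; S2 and R2 are names invented for this example) might be:

      S2←S                       ⍝ a fresh simplex of the original size
      S2[1;]←R[1;]               ⍝ first vertex replaced by the minimum just found
      R2←'FCN1' DSMIN S2 1E¯8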



DSMIN can also be used to maximize a function. Simply negate the result of your FCN1, either in the definition or by putting a minus sign before the function name in the left argument of DSMIN.
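For example, reusing the hypothetical S from above:

      '-FCN1' DSMIN S 1E¯8       ⍝ maximizes FCN1 by minimizing its negation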

Any questions?


Note: The APL code on this page can be extracted without you having to retype it.
See "Downloading Code" for instructions.



