Monday, September 28, 2009

amalgam and dream

recently came across some interesting work by jasper vrugt. AMALGAM is a self-adaptive global optimization algorithm. the basic idea: start with a latin hypercube sample to get good initial coverage of the search space, then hand those points to a set of different global optimizers (genetic algorithm, particle swarm, etc.), each of which proposes a few new candidate points. after evaluating that second generation, you can score each optimizer on how well it picks new points for this type of problem, and in each successive generation draw more points from the optimizers that have been doing well. strikes me as similar to cover's optimal portfolio theory in the way it shifts weight toward the winners.

DREAM (differential evolution adaptive metropolis) is a sampling algorithm that runs multiple chains in parallel and adapts its proposals from the differences between chain states, which gives it advantages over standard markov chain monte carlo (mcmc) samplers while maintaining desirable statistical properties that i don't really understand yet. useful for optimization and parameter fitting in high-dimensional inverse problems.
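the adaptive reallocation idea can be sketched roughly like this. to be clear, this is a toy version and not vrugt's actual AMALGAM: the two stand-in proposal strategies, the success-counting credit rule, and the smoothed reweighting are all my own simplifications, just to show the "sample more from whichever optimizer has been winning" mechanic.

```python
import random

random.seed(0)

def sphere(x):
    # toy objective to minimize; global optimum is 0 at the origin
    return sum(v * v for v in x)

def latin_hypercube(n, dim, lo=-5.0, hi=5.0):
    # one stratified sample per interval along each dimension, shuffled independently
    cols = []
    for _ in range(dim):
        col = [lo + (hi - lo) * (i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return [list(point) for point in zip(*cols)]

def gauss_mutate(parent, pop):
    # gaussian mutation: a stand-in for a GA-style move (ignores pop)
    return [v + random.gauss(0.0, 0.3) for v in parent]

def de_move(parent, pop):
    # differential-evolution-style move: parent + F * (a - b)
    a, b = random.sample(pop, 2)
    return [p + 0.8 * (x - y) for p, x, y in zip(parent, a, b)]

def amalgam_like(f, dim=3, pop_size=20, gens=40):
    strategies = [gauss_mutate, de_move]
    weights = [1.0] * len(strategies)          # how much we trust each strategy
    pop = latin_hypercube(pop_size, dim)       # initial latin hypercube sample
    for _ in range(gens):
        pop.sort(key=f)
        total = sum(weights)
        # allocate this generation's offspring budget in proportion to weight
        counts = [max(1, round(pop_size * w / total)) for w in weights]
        children = []
        for si, (strat, k) in enumerate(zip(strategies, counts)):
            for _ in range(k):
                parent = random.choice(pop[: pop_size // 2])
                child = strat(parent, pop)
                children.append((f(child), child, si))
        # elitist merge: keep the best pop_size points overall
        merged = [(f(p), p, None) for p in pop] + children
        merged.sort(key=lambda t: t[0])
        survivors = merged[:pop_size]
        pop = [p for _, p, _ in survivors]
        # credit each strategy for the children of its own that survived
        successes = [0] * len(strategies)
        for _, _, si in survivors:
            if si is not None:
                successes[si] += 1
        # smoothed reweighting: winners grow, but no strategy dies out entirely
        weights = [0.5 * w + 0.5 * (s + 1) for w, s in zip(weights, successes)]
    return min(pop, key=f)

best = amalgam_like(sphere)
print(sphere(best))
```

the elitist merge guarantees the best point never gets worse, and the `+ 1` in the reweighting keeps every strategy sampled at least occasionally, which matters because a strategy that is bad early (big exploratory moves) can become the winner later, or vice versa.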
