A MATLAB routine by Carl Rasmussen for minimizing a multivariate function whose partial derivatives are available is downloadable from here. The routine looks quite efficient, at least on the classical Rosenbrock function.
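For readers without MATLAB, a rough Python analogue of this kind of gradient-based minimizer is SciPy's conjugate-gradient method, which likewise takes the function and its partial derivatives; this is a sketch on the same Rosenbrock benchmark, not Rasmussen's routine itself:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(z):
    # Classical Rosenbrock function; global minimum at (1, 1).
    x, y = z
    return (1 - x)**2 + 100 * (y - x**2)**2

def rosenbrock_grad(z):
    # Analytic partial derivatives of the Rosenbrock function.
    x, y = z
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

# Conjugate gradients from the standard starting point (-1.2, 1).
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]),
               jac=rosenbrock_grad, method='CG')
print(res.x)  # should be close to [1, 1]
```

The analytic gradient (passed via `jac`) is what makes such methods efficient; with only function values, the optimizer would have to fall back on costly finite-difference approximations.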

Showing posts from May, 2007

I am wondering whether this compressibility notion could be used in learning theory or function approximation. Consider, e.g., classification problems, so that the output of $f$ is $\{0,1\}$. To prevent the trivial solution, we may require that $x$ be compressed to some $y$ such that, for some fixed (efficiently computable) function $g$, $f(g(y))=f(x)$. We do not require…