On Mon, 6 Jul 2009, Riccardo (Jack) Lucchetti wrote:
> Hm, you got me thinking on this one. In fact, I suspect that
> this is a bug that's been there for more than 2 years. Allin,
> please check if what I'm saying makes sense to you. Our original
> BFGS implementation was ripped from somewhere else (I believe it
> was R, but I may be wrong)...
Yes, from R.
> and it was conceived as a _minimiser_ rather than a _maximiser_.
> Then, we decided to flip the sign, so to speak... The "d" check,
> however, stayed the same. So, if my analysis is right, the
> "right" patch would be [to change the sign of d].
You're absolutely right! I've now fixed that in CVS.
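For anyone following along, the sign issue can be sketched like this. This is a hypothetical illustration of the convention, not gretl's actual source; the function names are made up, and "d" is taken to be the directional derivative of the objective along the proposed step:

```python
# Sketch of the "d" check in a BFGS line search (illustrative only).
# d = g . s is the slope of the objective along the proposed step s.

def directional_derivative(grad, step):
    """d = sum_i g_i * s_i: slope of the objective along the step."""
    return sum(g * s for g, s in zip(grad, step))

def acceptable_step_minimise(grad, step):
    # Minimiser convention (as in the code's origin): the step must
    # point downhill, i.e. d < 0.
    return directional_derivative(grad, step) < 0.0

def acceptable_step_maximise(grad, step):
    # Once the code is flipped into a maximiser, the same step must
    # point uphill, so the check's sign must flip too: d > 0.
    return directional_derivative(grad, step) > 0.0

# Gradient points along +x, so a step along +x increases the objective.
g = [1.0, 0.0]
s = [0.5, 0.0]
print(acceptable_step_maximise(g, s))  # True: uphill step, accepted
print(acceptable_step_minimise(g, s))  # False: the old check rejects it
```

Keeping the minimiser's d < 0 check in a maximiser is exactly the kind of thing that usually goes unnoticed: near convergence d is close to zero, so the test rarely fires either way.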
However, the reason we hadn't noticed this until now is that in
practice it makes very little difference, other than in the sort of
funny case represented by Christoph's script.
I've checked the large number of mle, gmm and arma test scripts
that I have to hand. In no case did the final maximized
likelihood differ between the "bad d" and "good d" runs, to the
digits we print. In some cases the parameter estimates differed,
but only among the trailing digits that are anyway essentially
random for numerical optimization problems (they change with
compiler version/options and other environmental factors).
One difference was in the number of steps taken to convergence.
As one might expect, some problems showed faster convergence
with d defined correctly. But others showed slower convergence.
On net, I think there was a performance gain.
So, certainly this is a worthwhile fix, but people needn't worry
about gretl producing bad MLE results for the last couple of
years! We may have taken a few too many iterations -- or at
worst, given a (rare) error when no error was called for.
Allin.