Another query about mle. As before, I have been experimenting with
stochastic frontier models, and I have found that mle behaves entirely
differently with two formulations that seem to be equivalent.
Version 1:
mle logl = ln(cnorm(e*lambda/ss)) - (ln(ss) + 0.5*(e/ss)^2)
  scalar ss = sqrt(su^2 + sv^2)
  scalar lambda = su/sv
  series e = y - lincomb(xlist, b)
  params b su sv
end mle --hessian --verbose
Version 2:
mle logl = llnow
  scalar ss = sqrt(su^2 + sv^2)
  scalar lambda = su/sv
  series e = y - lincomb(xlist, b)
  series llnow = ln(cnorm(e*lambda/ss)) - (ln(ss) + 0.5*(e/ss)^2)
  params b su sv
end mle --hessian --verbose
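For what it's worth, the two expressions certainly compute the same
objective: whether the per-observation contribution is written inline
or precomputed into a series first, the sum is identical. A quick
sanity check outside gretl (an illustrative Python sketch; the
parameter values and residuals below are made up, not from my data)
confirms this:

```python
import math
import random

def cnorm(x):
    # standard normal CDF, i.e. gretl's cnorm()
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(1)
su, sv = 0.8, 0.5                 # arbitrary illustrative values
ss = math.sqrt(su**2 + sv**2)
lam = su / sv
e = [random.gauss(0.0, 1.0) for _ in range(50)]   # stand-in residuals

# Version 1: assemble the log-likelihood inline, observation by observation
ll1 = sum(math.log(cnorm(ei * lam / ss)) - (math.log(ss) + 0.5 * (ei / ss)**2)
          for ei in e)

# Version 2: precompute the per-observation contributions (the "llnow" series)
llnow = [math.log(cnorm(ei * lam / ss)) - (math.log(ss) + 0.5 * (ei / ss)**2)
         for ei in e]
ll2 = sum(llnow)

print(abs(ll1 - ll2) < 1e-12)   # True: the formulations agree numerically
```

So any divergence between the two mle runs must come from how gretl
treats the two formulations internally, not from the arithmetic itself.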
Both report identical log-likelihoods and gradients at iteration 1, but
version 2 blows up at iteration 2 while version 1 continues
properly. Technically, the immediate cause of the blow-up in version 2
is that it takes a step length of 1 at that iteration, whereas version 1
takes a step length of 1e-7. I cannot see why the two versions should
behave differently, but in this case the difference is critical.
Gordon Hughes