Thanks for this. I waited until I had run various cases before providing feedback. Both approaches work fine in general... but not in a specific model I am working with now. And I cannot understand why, because even in the simple bivariate normal example the qqq construct includes parameters under estimation in any case. So I would really appreciate any idea as to why the following does not work: it immediately gives a numerical derivative of exactly zero for the rho correlation parameter, irrespective of the starting value, which is suspicious; the parameter stays there forever, and after some iterations the run aborts with the message "matrix is not positive-definite". I am not saying that it necessarily has to do with the code; it may be the model's problem. But the thing is, if I write out the likelihood explicitly (since this is the bivariate case), without using the qqq construct, the code does provide results, which is why I am at a loss here.

<hansl>

[data...]

[starting values for the parameters...]

matrix R2 = {1,rho; rho,1}

mle logl = check ? -0.5*ldet(R2) - 0.5*qqq + 0.5*((invcdf(N,CDF))^2) + log(B1+B2) : NA
    series resCor = Depv - Regrs*kVec
    series B1 = (resCor/vu) + ...
    series B2 = (resCor/vu) + ...
    series CDF = cnorm(resCor/vu) + mix*B1 - (1-mix)*B2
    matrix q = {W, invcdf(N,CDF)}
    matrix qq = q*inv(R2)
    series qqq = sumr(q.*qq)
    scalar check = (vu>0) && (mix>0) && (mix<1) && (abs(rho)<1)
    params kVec vu mix rho
end mle

</hansl>
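For what it's worth, here is a minimal standalone sketch (not part of the model above; z1, z2 and the value of rho are purely illustrative) of what the qqq construct computes in the plain bivariate case: the quadratic form sumr(q .* (q*inv(R2))) coincides with the expression one would write out explicitly by hand.

<hansl>
# illustrative check only: for R2 = {1,rho; rho,1}, the qqq construct
# equals the hand-written bivariate quadratic form
nulldata 64
series z1 = normal()
series z2 = normal()
scalar rho = 0.3
matrix R2 = {1, rho; rho, 1}
matrix q = {z1, z2}
series qqq = sumr(q .* (q*inv(R2)))
series qqq_explicit = (z1^2 - 2*rho*z1*z2 + z2^2) / (1 - rho^2)
eval max(abs(qqq - qqq_explicit))   # should be numerically zero
</hansl>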


Alecos Papadopoulos
PhD Candidate
Athens University of Economics and Business, Greece
School of Economic Sciences
Department of Economics
https://alecospapadopoulos.wordpress.com/
 
On 16/2/2018 09:22, gretl-users-request@lists.wfu.edu wrote:
A minimal modification of your script that fixes this is

<hansl>
nulldata 128
series N1 = normal()
series N2 = normal()

scalar s1 = 1
scalar s2 = 1
scalar c12 = 0.5
matrix q = {N1, N2}

mle logl = check ? -0.5 * (qqq + ldet(S)) : NA
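     # S and check are defined inside the mle block, so they are recomputed
     # from the current values of s1, s2 and c12 at each evaluation of logl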
     matrix S = {s1^2,c12;c12,s2^2}
     scalar check = (s1>0) && (s2>0) && (abs(c12)<1)
     matrix qq = q * inv(S)
     series qqq = sumr(q.*qq)
     params s1 s2 c12
end mle

# check
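# sqrt(diag(Shat)) should be close to the ML estimates of s1 and s2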
Shat = q'q ./ $nobs
print Shat
eval sqrt(diag(Shat))
</hansl>

Having said this, though, there's a more general point about the check for 
positive definiteness of S. If you reparametrise the problem, you can make 
S pd by construction, simply by optimising over the elements of the Cholesky 
factor of S rather than over S itself. This is a very nice trick: it saves 
you a lot of intermediate computation, works quite well in practice, and is 
trivial to generalise to any number of columns of q. For example:

<hansl>
nulldata 128
series N1 = normal()
series N2 = normal()

scalar s1 = 1
scalar s2 = 1
scalar c12 = 0.5
matrix q = {N1, N2}
matrix S = I(2)
matrix theta = vech(S)
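# theta starts as vech(I(2)), so the initial Cholesky factor is the identity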

mle logl = -0.5 * (qqq + ldet(S))
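     # theta collects the free elements of the Cholesky factor K; forming
     # S = K*K' makes S positive (semi-)definite by construction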
     matrix K = lower(unvech(theta))
     matrix S = K*K'
     matrix qq = q * invpd(S)
     series qqq = sumr(q.*qq)
     params theta
end mle

# check
Shat = q'q ./ $nobs
eval cholesky(Shat)
</hansl>
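In case it's useful, the implied covariance matrix and correlation can be recovered from the estimated theta after the mle block ends (a small follow-up sketch continuing the script above, not part of the original reply; Shat2 and rhohat are just illustrative names):

<hansl>
# rebuild S from the estimated Cholesky elements and read off the correlation
matrix K = lower(unvech(theta))
matrix Shat2 = K*K'
scalar rhohat = Shat2[1,2] / sqrt(Shat2[1,1]*Shat2[2,2])
printf "implied correlation: %g\n", rhohat
</hansl>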

Hope this helps.
-------------------------------------------------------
   Riccardo (Jack) Lucchetti
   Dipartimento di Scienze Economiche e Sociali (DiSES)

   Università Politecnica delle Marche
   (formerly known as Università di Ancona)

   r.lucchetti@univpm.it
   http://www2.econ.univpm.it/servizi/hpp/lucchetti