Concerning mle: I like to test and check results independently, and in doing
so I came across a problem. I can't send all the details, because this is
research in progress and my colleague has not yet given the green light to do
so, but I'll describe what is going on as best I can.
I extended the IGARCH process with an additional parameter, call it gamma1,
so that alpha + beta + gamma1 = 1. I made a 3D plot of the log-likelihood
function in Mathematica and found its maximum at

  beta   = 0.912565
  gamma1 = 0.0106716

I rechecked this in FORTRAN and got the same result, and it was the only
maximum. I then ran a modified version of the script from the example
mentioned in the guide. With initial gamma1 = 0.01, gretl produced the
following results.

With the --robust and --lbfgs options:
             estimate      std. error    t-ratio    p-value
  ---------------------------------------------------------------
  beta       0.916221      0.0350035      26.18     5.10e-151  ***
  gamma1     0.0175130     0.0164981       1.062    0.2885
With --lbfgs only:
Tolerance = 1e-014
Function evaluations: 16
Evaluations of gradient: 16
Model 3: ML, using observations 1-15073
ll = check ? -0.5*(log(h) + (ep^2)/h) : NA
Standard errors based on Outer Products matrix
             estimate      std. error    t-ratio    p-value
  ---------------------------------------------------------------
  beta       0.916221      0.00147268     622.1     0.0000     ***
  gamma1     0.0175130     0.000790365     22.16    8.71e-109  ***
With the default mle options:
Tolerance = 1e-014
Function evaluations: 81
Evaluations of gradient: 14
Model 5: ML, using observations 1-15073
ll = check ? -0.5*(log(h) + (ep^2)/h) : NA
Standard errors based on Outer Products matrix
             estimate      std. error    t-ratio    p-value
  ---------------------------------------------------------------
  beta       0.932768      0.00109127     854.8     0.0000     ***
  gamma1     0.00513404    0.000340124     15.09    1.76e-051  ***
If I changed the initial value of gamma1, the results varied a lot. The
imposed conditions were

  scalar check = (gamma1>0) && (beta>0) && ((beta+gamma1)<1)
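
To make the setup concrete, here is a stripped-down sketch of the kind of mle
block I mean. I can't post the actual script, so the data set, the starting
values and the handling of mu and omega below are placeholders rather than my
real specification; the point is just that alpha is eliminated through
alpha = 1 - beta - gamma1 and the check condition above is imposed:

# sketch only -- data and starting values are illustrative placeholders
open djclose
series y = 100*ldiff(djclose)
scalar mu = 0.0
scalar omega = 1
scalar beta = 0.9
scalar gamma1 = 0.01

mle ll = check ? -0.5*(log(h) + (ep^2)/h) : NA
    series ep = y - mu
    series h = var(y)
    # alpha substituted out via alpha + beta + gamma1 = 1
    series h = omega + (1-beta-gamma1)*(ep(-1))^2 + beta*h(-1)
    scalar check = (gamma1>0) && (beta>0) && ((beta+gamma1)<1)
    params mu omega beta gamma1
end mle --robust --lbfgs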
Can anyone supply explicit code to estimate the GARCH(1,1) coefficients by a
different method (OLS, MLE via R, ...) for the simple example from the user
guide?
open djclose
series y = 100*ldiff(djclose)

# starting values
scalar mu = 0.0
scalar omega = 1
scalar alpha = 0.4
scalar beta = 0.0

mle ll = check ? -0.5*(log(h) + (e^2)/h) : NA
    series e = y - mu
    series h = var(y)
    series h = omega + alpha*(e(-1))^2 + beta*h(-1)
    scalar check = (alpha>0) && (beta>0)
    params mu omega alpha beta
end mle
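
For comparison, I believe gretl's built-in garch command can be run on the
same data (a quick sketch from memory, so please correct the syntax if I have
it wrong), but what I am really after is an independent implementation outside
of gretl:

# cross-check with the native GARCH(1,1) estimator on the same series
open djclose
series y = 100*ldiff(djclose)
garch 1 1 ; y    # a constant in the mean equation should be included by default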
Thanks a lot; any help is appreciated, since I really like gretl as a
research tool.

All the best,
Davor
On 30.11.2009. 8:58, Riccardo (Jack) Lucchetti wrote:
On Sun, 29 Nov 2009, Allin Cottrell wrote:
;-) This is a subtle bug (if it's a bug; I'm not sure) with an easy
workaround. With the initialization

    scalar alpha = 0.4
    scalar beta = 0

the "check" condition -- alpha>0 && beta>0 -- is violated on the first
iteration. Therefore the formula for "ll" comes down to

    ll = NA

This generates a scalar value, which is not allowed in the mle context.
The fix is to initialize such that the check is satisfied on the first
round, e.g.

    scalar beta = 0.001

Not really relevant here, but the constraints on alpha and beta in a
GARCH(1,1) model are not exactly the same. Alpha needs to be strictly
positive (otherwise the model is unidentified), but beta can be 0, so
initialising it at 0 is perfectly legitimate. The check should be
written as (alpha>0) && (beta>=0). Sorry for being pedantic.
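
That is, in code, either of these should work (sketch):

    # Allin's workaround: start beta strictly above zero
    scalar alpha = 0.4
    scalar beta = 0.001
    scalar check = (alpha>0) && (beta>0)

    # or keep beta = 0 and relax the check accordingly
    scalar alpha = 0.4
    scalar beta = 0
    scalar check = (alpha>0) && (beta>=0)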
Riccardo (Jack) Lucchetti
Dipartimento di Economia
Università Politecnica delle Marche
r.lucchetti@univpm.it
http://www.econ.univpm.it/lucchetti