2007/10/5, Riccardo (Jack) Lucchetti <r.lucchetti(a)univpm.it>:
>> On Fri, 5 Oct 2007, yinung CYCU wrote:
>> Could you please try to see what happens by appending the --verbose
>> switch to mle?

Yes. Please see below, and thanks.
Yi-Nung Yang
Chung Yuan Christian University, Taiwan
========================================
gretl version 1.6.5
Current session: 2007/10/05 10:28
? open djclose
Read datafile c:\progra~1\userdata\gretl-1.6.5\data\misc\djclose.gdt
periodicity: 5, maxobs: 2528,
observations range: 1980/01/02-1989/12/29
Listing 2 variables:
  0) const      1) djclose
? series y = 100*ldiff(djclose)
Generated series y (ID 2)
? scalar mu = 0.0
Generated scalar mu (ID 3) = 0
? scalar omega = 1
Generated scalar omega (ID 4) = 1
? scalar alpha = 0.4
Generated scalar alpha (ID 5) = 0.4
? scalar beta = 0.5
Generated scalar beta (ID 6) = 0.5
? mle ll = -0.5*(log(h) + (e^2)/h)
? series e = y - mu
? series h = var(y)
? series h = omega + alpha*(e(-1))^2 + beta*h(-1)
? params mu omega alpha beta
? end mle --verbose
Using numerical derivatives
Iteration 1: Log-likelihood = -1561.46147443
Parameters:      0.00000      1.0000     0.40000     0.50000
Gradients:        89.979     -296.97     -181.84     -396.60
Iteration 2: Log-likelihood = -1472.96367506 (steplength = 0.00032)
Parameters:     0.028793     0.90497     0.34181     0.37309
Gradients:        53.018     -274.09     -163.79     -366.04
Iteration 3: Log-likelihood = -1447.77404259 (steplength = 0.00032)
Parameters:    -0.064045     0.85520     0.32585     0.30662
Gradients:        231.08     -235.92     -149.60     -315.06
Iteration 4: Log-likelihood = -1444.66210893 (steplength = 0.008)
Parameters:    -0.048367     0.80389     0.50576     0.23805
Gradients:        219.45     -197.62     -190.94     -263.91
Iteration 5: Log-likelihood = -1400.66087235 (steplength = 0.04)
Parameters:    -0.020901     0.76422     0.39267     0.18469
Gradients:        177.60     -130.52     -156.37     -174.31
Iteration 6: Log-likelihood = -1378.72010975 (steplength = 0.2)
Parameters:    0.0018002     0.75301     0.26579     0.16997
Gradients:        133.04     -86.134     -62.471     -115.03
Iteration 7: Log-likelihood = -1375.43661347 (steplength = 1)
Parameters:     0.088888     0.69792     0.23498    0.095992
Gradients:       -74.989      112.68      41.097      150.49
Iteration 8: Log-likelihood = -1370.89509923 (steplength = 1)
Parameters:     0.064386     0.73628     0.22325     0.14732
Gradients:       -10.259     -30.131      17.878     -40.239
Iteration 9: Log-likelihood = -1370.84179073 (steplength = 1)
Parameters:     0.053937     0.72411     0.26523     0.13111
Gradients:        15.082     -5.7174     -39.239     -7.6353
Iteration 10: Log-likelihood = -1370.28526304 (steplength = 1)
Parameters:     0.056917     0.72388     0.23809     0.13085
Gradients:        7.8871      4.6573      2.7641      6.2198
Iteration 11: Log-likelihood = -1370.26428037 (steplength = 0.00032)
Parameters:     0.059441     0.72537     0.23898     0.13284
Gradients:        1.6561    -0.82962    -0.38197     -1.1079
Iteration 12: Log-likelihood = -1370.26343966 (steplength = 0.00032)
Parameters:     0.060046     0.72515     0.23888     0.13254
Gradients:       0.17639   -0.021055    0.026967   -0.028467
Iteration 13: Log-likelihood = -1370.26343939 (steplength = 0.00032)
Parameters:     0.060046     0.72515     0.23889     0.13253
Gradients:       0.17676   -0.018224    0.011653   -0.024374
Iteration 14: Log-likelihood = -1370.26343279 (steplength = 1)
Parameters:     0.060119     0.72501     0.23890     0.13262
Gradients:    -0.0018076  0.00079581  -0.0017053   0.0013415
Iteration 14: Log-likelihood = -1370.26343279 (steplength = 0.2)
Parameters:     0.060119     0.72499     0.23890     0.13263
Gradients:    -0.0018076  0.00079581  -0.0017053   0.0013415
--- FINAL VALUES:
Iteration 14: Log-likelihood = -1370.26343279 (steplength = 0.00032)
Parameters:     0.060118     0.72499     0.23890     0.13263
Gradients:    -0.0018076  0.00079581  -0.0017053   0.0013415
Tolerance = 1.81899e-012
Function evaluations: 53
Evaluations of gradient: 14
Model 1: ML estimates using the 2526 observations 80/01/04-89/12/29
ll = -0.5*(log(h) + (e^2)/h)
Standard errors based on Outer Products matrix
      PARAMETER       ESTIMATE          STDERROR      T STAT   P-VALUE
  mu                    0.0601181        0.0200801     2.994   0.00275 ***
  omega                 0.724990    233390             0.000   1.00000
  alpha                 0.238901         0.00564566   42.316  <0.00001 ***
  beta                  0.132635    174763             0.000   1.00000
  Log-likelihood = -1370.26
  Akaike information criterion (AIC) = 2748.53
  Schwarz Bayesian criterion (BIC) = 2771.86
  Hannan-Quinn criterion (HQC) = 2757
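
In case it helps anyone rerun this, here are the same commands from the log above collected into a single script (nothing new, just the gretl commands without the interleaved output, with the GARCH(1,1) equations noted as comments):

# GARCH(1,1) on Dow Jones returns, as in the session above.
# Model: e_t  = y_t - mu
#        h_t  = omega + alpha*e_{t-1}^2 + beta*h_{t-1}
#        per-observation log-likelihood: ll_t = -0.5*(log(h_t) + e_t^2/h_t)
open djclose
series y = 100*ldiff(djclose)

# starting values
scalar mu = 0.0
scalar omega = 1
scalar alpha = 0.4
scalar beta = 0.5

mle ll = -0.5*(log(h) + (e^2)/h)
    series e = y - mu
    series h = var(y)
    series h = omega + alpha*(e(-1))^2 + beta*h(-1)
    params mu omega alpha beta
end mle --verbose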