At last, I know where the NLS diagnostics come from.
Recall, this is a well-specified model with well-behaved polynomial roots
and no autocorrelation.
I hope the initial values are checked and adjusted so that the roots lie
outside the unit circle.
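For the record, a minimal hansl sketch of such a root check (the AR(3)
coefficients are purely hypothetical, and I assume 2018-era polroots()
output, where complex roots come back as two columns of real and
imaginary parts):
<hansl>
# hypothetical AR(3) starting values, for illustration only
matrix phi = {0.5; -0.2; 0.1}
# roots of 1 - phi1*z - phi2*z^2 - phi3*z^3 (coefficients in ascending order)
matrix r = polroots(1 | -phi)
# moduli: works whether r is n x 1 (all real) or n x 2 (real/imaginary parts)
matrix m = sqrt(sumr(r.^2))
printf "smallest root modulus = %g (stationarity needs > 1)\n", minc(m)
</hansl>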
Oleh
28 October 2018, 20:52:10, from "Riccardo (Jack) Lucchetti"
<r.lucchetti(a)univpm.it>:
On Sun, 28 Oct 2018, Riccardo (Jack) Lucchetti wrote:
On Sun, 28 Oct 2018, oleg_komashko(a)ukr.net wrote:
> The current scaling factor makes the mean ~10;
> with sample 1–196 the mean is ~10^-8;
> hence, the scaling factor is ~10^18.
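(For context, a hedged hansl sketch of the computation being quoted; the
data file and series name are taken from the scripts below, and the target
mean of ~10 is as quoted:)
<hansl>
# illustrative: what factor would bring the sample mean up to ~10?
open bad_data.gdt
smpl 1 196
scalar m = mean(diff_series)   # reportedly ~1e-8 on this sample
printf "mean = %g, implied factor = %g\n", m, 10/abs(m)
</hansl>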
I just pushed to git a one-liner that seems to solve the first issue you
raised. Now
<hansl>
open bad_data.gdt
smpl 1 194
arima 3 0 0; 1 0 0; diff_series const y_one y_two
</hansl>
produces sensible results. Allin: the fix is rather trivial, but please take
a look.
As for the second problem, I guess the solution is quite easy but I'd wait
for Allin's opinion before committing any code to git.
In some cases, we initialise ARMA via NLS; however, we use NLS as if we
were estimating a "real" model, and therefore we employ our usual (rather
strict) convergence criterion. Of course, there's no need to be picky
about convergence, since nls is just meant to provide sensible starting
values. If you modify the nls_toler setting before running the arima
command, things go back to normal; example:
<hansl>
open bad_data.gdt
smpl 1 194
series sty = diff_series / sd(diff_series)
list zli = y_one y_two
set nls_toler 1.0e-5
series y = sty + 6.48
arima 3 0 0; 1 0 0; y 0 zli
</hansl>
Now the question is: should we raise the NLS tolerance by default when it's
used for ARMA initialisation, or do we have a better strategy?
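(In the meantime, a sketch of an interim user-side pattern: relax the
tolerance just for the arima call, then tighten it again. The restored
value below is merely a typical strict setting, not necessarily gretl's
default.)
<hansl>
# assumes y and zli as defined in the script above
set nls_toler 1.0e-5      # loose is fine: NLS only supplies starting values
arima 3 0 0; 1 0 0; y 0 zli
set nls_toler 1.0e-12     # back to a strict tolerance for "real" NLS work
</hansl>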
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
r.lucchetti(a)univpm.it
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------