some 'coint' interface details (and questions)
by Sven Schreiber
Hi,
I'm looking at the 'coint' (Engle-Granger test) command and
corresponding GUI interface more closely right now and have a couple of
questions.
- In a list message in 2013 Allin wrote that the test-down option now
has "MAIC" by default, and that "MBIC" also exists. There is nothing
about this in the documentation, which points to the adf command, and
that in turn talks about "AIC" and "BIC" instead. So is that
information obsolete, or what is the current situation? (See the
sketch below.) BTW, in the GUI there is no way to influence the
test-down method.
- Related to the test-down thing: in the coint output the number of
lagged differences that ends up being included is rather hidden. While
the maximum lag order is printed, the chosen lag order AFAICS is only
given implicitly as the numerator d.o.f. of the corresponding F-test.
I suggest making this as explicit as it is in the adf output.
- adf and coint2 have a --seasonals option, but 'coint' doesn't,
judging by the documentation and the missing GUI item. Is there any
particular reason why?
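For concreteness, here is a minimal sketch of the kind of calls I have
in mind (denmark.gdt is just for illustration, and whether the
criterion spelling below is even the right one is part of the
question):
<hansl>
open denmark.gdt
# at script level a test-down criterion can be given for adf,
# but is it "BIC" or "MBIC"? And what does coint use by default?
adf 4 LRM --ct --test-down=BIC --seasonals
coint 4 LRM IBO --test-down
</hansl>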
That's it so far. Thanks!
Sven
5 years, 10 months
arima: cannot figure out how it works now
by oleg_komashko@ukr.net
Dear Allin,
From the previous discussion I came to think
that arima uses standardize(y) + 1.
But it is not so:
open greene5_1.gdt
set bfgs_verbskip 999
# standardized version of pop, shifted to have mean 1
series pops = (pop - mean(pop))/sd(pop) + 1
catch arima 0 0; 1 0; pop --verbose
arima 0 0; 1 0; pop --x-12-arima
arima 0 0; 1 0; pops --x-12-arima
arima 0 0; 1 0; pops
What is the current scaling (if any)?
Oleh
5 years, 10 months
arima: initial value problem
by oleg_komashko@ukr.net
Dear Allin,
To see the problem, use the attached data file
and run the chunk of code below
<hansl>
#####################################################
###### run the 2 chunks separately
open very_bad_data.gdt
# let us say that series y
# is stationary if the kpss p-value
# is greater than 0.01
# this means that
# arima 2 1; y
# is by no means a priori misspecified
n = $nvars - 1
scalar number_of_non_convergence = 0
loop i= 1..n -q
nami = sprintf("var%d",i)
kpss -1 @nami
catch arima 2 1; @nami
err = $error
if err
number_of_non_convergence +=1
endif
endloop
printf "\nThe total number of variable in the datafile = %d\n", n
printf "\nThe number of non-convergent arma(2,1) models = %d\n", n
##########################################################
##########################################################
test1 = randint(1,81)
testnami = sprintf("var%d",test1)
arima 2 1; @testnami --verbose
eval mean(@testnami)
arima 2 1; @testnami --x-12-arima
### what to compare:
# 1) final values of arma parameters
# with --x-12-arima estimates
# 2) compare initial constant with
# (i) final value of constant
# (ii) mean of the series
# (iii) with --x-12 constant estimate
# it seems there are serious problems
# with the initial value for the constant
</hansl>
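As a concrete starting point for comparison 2), here is a small sketch
(var1 is picked arbitrarily from the attached file, and it assumes the
constant is the first element of $coeff, as in gretl's native arima
output):
<hansl>
open very_bad_data.gdt
catch arima 2 1; var1
if $error == 0
    # native constant estimate versus the sample mean of the series
    printf "constant: %g   series mean: %g\n", $coeff[1], mean(var1)
endif
</hansl>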
Oleh
5 years, 10 months
arima ... --nc: quick-to-implement and robust proposals
by oleg_komashko@ukr.net
Dear Allin,
When the iterations finish without convergence
for arima ... --nc without exogenous variables,
arima should compute the minimal absolute root.
If it is less than 1.05 then, rather than give an
error message, it should output the estimates,
supplemented with a warning of one of the
following types (see the sketch after the list):
1)
(i) bad root(s)
(ii) bad gradient norm
2)
(i) bad root(s)
(ii) bad tolerance
3)
(i) bad root(s)
(ii) bad gradients
(iii) bad tolerance
In all the cases of --nc non-convergence
I have encountered, only cases 1) and 3) occurred.
Every time, the final parameter values agreed
with --x-12 to 4 or more digits, and the final
loglik was better.
So there is no risk: no change where it already worked well,
about 2 minutes of coding, and more robustness, together
with clear bad-roots diagnostics.
A value of 5 for the maximum gradient norm is very good in
almost all cases, but for arima --nc it is also somewhat
ad hoc.
Scaling by 10^6 or 10^-6 is not exotic:
units of $1 versus $10^6, one human being versus
1 million people, etc.
This can change the final norm values by several
dozen times; see the illustration below.
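For example (a minimal illustration, with the scale factor and the
series chosen arbitrarily):
<hansl>
open greene5_1.gdt
ldiff realcons
series big = 1.0e6 * ld_realcons   # the same data in different units
set bfgs_verbskip 10000
catch arima 1 0 1; ld_realcons --nc --verbose
catch arima 1 0 1; big --nc --verbose
# compare the final gradient norms in the two --verbose logs:
# they differ although the two series are identical up to scale
</hansl>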
Once more:
2 minutes of work,
no risk.
Oleh
5 years, 10 months
Building gretl on WSL (Windows subsystem for Linux)
by Sven Schreiber
Hi,
I've read about the support that Windows 10 (64-bit only, I think)
offers for running non-graphical Linux programs. Today I verified that
it is possible to build the Linux version of gretl from source on
Windows 10 like this:
1) Install WSL as per Microsoft's instructions
(https://docs.microsoft.com/de-de/windows/wsl/install-win10). This is
quick, but reboot required.
2) Download and install a Linux distro in the format suitable for WSL. I
used Debian (stretch) from here: https://aka.ms/wsl-debian-gnulinux, but
on the page https://docs.microsoft.com/de-de/windows/wsl/install-manual
there are also other links.
3) Inside this Debian quasi-install, edit the /etc/apt/sources.list file
to add, alongside the existing lines, the corresponding source entries
starting with deb-src.
4) Do "sudo apt build-dep gretl", which then takes almost 1GB of disk space.
5) Download the gretl source archive, for example with wget. (Remember
the whole thing is command-line only; no GUI programs are possible.)
6) Unpack it with "tar xf gretl<.....>xz"
7) Go into this new directory and do "./configure" and then "make" as usual.
This worked for me without errors.
Of course it's not really necessary to produce a Linux build on
Windows. But perhaps it would also be possible to use this Linux
subsystem to compile for the Windows target platform. Then this method
could replace the MinGW/Msys way of doing it. (Following Allin's
instructions I was also able to build on Windows with MinGW in the
past, but I believe with the WSL it might be easier.)
FWIW,
sven
5 years, 11 months
arima: non-stdx estimation and z-statistics
by oleg_komashko@ukr.net
Dear Allin,
The situation below is
far from exotic.
<hansl>
open greene5_1.gdt
series g = 100*diff(realgdp)/realgdp
series d_u = diff(unemp)
x = g
list zz = 0 x(0 to -2)  # defining the list also creates the lag series x_1, x_2
eval mean(g)
# simply Okun's law with lags
# with correction for autocorrelation
arima 0 1; d_u const x x_1 x_2 -q
zstat0 = $coeff./$stderr
arima 0 1; d_u const x x_1 x_2 -q --stdx
zstat0s = $coeff./$stderr
eval zstat0./zstat0s
# recall 100*diff(realgdp)/realgdp
x = g/100
list zz = 0 x(0 to -2)
eval mean(g)
arima 0 1; d_u const x x_1 x_2 -q
zstat0 = $coeff./$stderr
arima 0 1; d_u const x x_1 x_2 --stdx -q
zstat0s = $coeff./$stderr
eval zstat0./zstat0s-1
# this is the equivalent of rescaling something
# from dollars to thousands of dollars
x = g/1000
list zz = 0 x(0 to -2)
eval mean(g)
arima 0 1; d_u const x x_1 x_2 -q
zstat0 = $coeff./$stderr
arima 0 1; d_u const x x_1 x_2 --stdx -q
zstat0s = $coeff./$stderr
eval zstat0./zstat0s-1
# if a model is misspecified
# we can observe a small but
# non-negligible difference
# even from trivial data rescaling
x = g/100
list zz = 0 x(0 to -3)
eval mean(g)
arima 3 1; d_u zz -q
zstat0 = $coeff./$stderr
arima 3 1; d_u zz --stdx -q
zstat0s = $coeff./$stderr
eval zstat0./zstat0s-1
</hansl>
Oleh
5 years, 11 months
arima ... --nc: a short non-convergence investigation
by oleg_komashko@ukr.net
Dear Allin,
# It seems there are no problems
# with libgretl's arima ... --nc
# It seems there are problems
# with the stopping rules
#***********************************************************
scalar enormous = 2*$huge
set bfgs_maxgrad enormous
open greene5_1.gdt
logs realcons
ldiff realcons
# surprisingly small
eval mean(ld_realcons)/sd(ld_realcons)
arima 1 1 1; l_realcons --nc -q
eval $coeff
lnl0 = $lnl
eval $model.roots
eval lnl0
modtest --autocorr
arima 1 1 1; l_realcons --nc --x-12-arima -q
eval $coeff
lnl1 = $lnl
printf "\nlnl difference (libgretl - x13): %g\n", lnl0 - lnl1
arima 1 1 1; 0 0 1; l_realcons --nc -q
eval $coeff
lnl0 = $lnl
eval $model.roots
eval lnl0
arima 1 1 1; 0 0 1; l_realcons --nc --x-12-arima -q
eval $coeff
lnl1 = $lnl
printf "\nlnl difference (libgretl - x13): %g\n", lnl0 - lnl1
arima 1 1 1; 1 0 0; l_realcons --nc -q
eval $coeff
lnl0 = $lnl
eval $model.roots
eval lnl0
arima 1 1 1; 1 0 0; l_realcons --nc --x-12-arima -q
eval $coeff
lnl1 = $lnl
printf "\nlnl difference (libgretl - x13): %g\n", lnl0 - lnl1
#
# ***************************************************************
#
# the final gradient norm simply depends on the scaling
set bfgs_maxgrad 5
set bfgs_verbskip 10000
catch arima 1 0 1; ld_realcons --nc --verbose
sld = ld_realcons/sd(ld_realcons)
catch arima 1 0 1; sld --nc --verbose
#*******************************************************************
Oleh
5 years, 11 months
ADF p-value glitch?
by Sven Schreiber
Hi,
please consider this:
<hansl>
open wgmacro.gdt
adf 0 log(income) --ct # gives pos. test stat
eval urcpval($test, $nobs, 1, 3)
eval urcpval(abs($test), $nobs, 1, 3) # same
# just memo, irrelevant:
eval urcpval($test, 0, 1, 3) # asy
</hansl>
The issue is that the AR coeff is slightly explosive (1.003), which is
not in the rejection region of the one-sided DF test. So the test stat
is positive instead of negative, but the p-value given by Gretl is
strictly <1 (0.9971). I know that philosophically a one-sided test is
actually trickier than we tell the students, so I don't mind whether the
"correct" result is 1 or NA, but I'm pretty sure it shouldn't be <1.
It seems that Gretl just uses the absolute value of the test stat and
thus doesn't differentiate between the left and the right side.
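For instance (the numbers are arbitrary, just to probe the sign
handling):
<hansl>
eval urcpval(2.5, 200, 1, 3)
eval urcpval(-2.5, 200, 1, 3)
# if these two coincide, the sign of the statistic is indeed ignored
</hansl>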
cheers,
sven
5 years, 11 months
(Re:) Bundle within bundle - icon view save ...
by Sven Schreiber
[Ioannis, I'm starting a new thread for this new issue. You apparently
hit reply to the ADF glitch thread, which messes up the thread ordering]
<Ioannis>:
It seems that there is a bug when you try to save a bundle that is
already contained in a bundle using icon view.
Although a window opens (with a proposed name for the bundle to be
created) and returns no error when you click OK, the bundle is not created.
</Ioannis>
If I understand correctly, you mean that you have a bundle in the icon
view, you double-click on it (or right-click and then "view"), a window
opens which shows the contents of this bundle, including another nested
bundle ('fooinside' in your example).
Then you click the button 'save bundle content' and select the nested
bundle 'fooinside'. In the new dialog you enter some new name for the
to-be-saved bundle. After 'OK' this new bundle does not appear in the
session icon view.
Yes, this happens here, too.
And BTW, when I do "eval foo.fooinside" it works, but Gretl talks about
"bundle anonymous". Not sure if this is correct, since this bundle was
given a name alright.
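For reference, here is a minimal script-level reproduction of the setup
(the member names follow your example, the contents are arbitrary; the
saving step itself of course still has to be done via the icon view):
<hansl>
bundle fooinside
fooinside.x = 1
bundle foo
foo.fooinside = fooinside
eval foo.fooinside   # works, but is reported as "bundle anonymous"
</hansl>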
thanks,
sven
5 years, 11 months
a small issue with $model.ainfo for arima
by oleg_komashko@ukr.net
Dear all,
On viewing the $model bundle content,
ainfo is indicated as a list, while it
looks like a 1x9 matrix, something like
{p, q, P, Q, x5, x6, d, D, $pd}.
I have always obtained x5 = p and x6 = q.
It seems it should be indicated as a matrix.
Also, what are x5 and x6?
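For reference, this is how I look at it at script level (the dataset
and the specification are arbitrary; typeof/typestr on the bundle
member is just one way to see how gretl classifies it):
<hansl>
open greene5_1.gdt
ldiff realcons
arima 1 0 0; ld_realcons -q
bundle mb = $model
eval mb.ainfo
# report the type gretl assigns to the ainfo member
eval typestr(typeof(mb.ainfo))
</hansl>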
Oleh
5 years, 11 months