Dear Allin,
I am sorry that the two lines I sent in my original post do not replicate
the issue in the current snapshot: they were run in an older gretl where
MAIC was the test-down criterion, which selected a smaller lag for LRM in
the test-down.
Sven explained my point perfectly. If you run the ADF test for LRY (not
LRM) as follows, you'll notice that the AIC below the regression output
differs from the AIC for k = 0 in the test-down procedure. This is because
the final regression is run on the full sample, whereas the test-down is
done on the lag-5-compatible sample.
adf 5 LRY --c --test-down --verbose
k = 5: AIC = -217.928
k = 4: AIC = -219.841
k = 3: AIC = -220.730
k = 2: AIC = -222.395
k = 1: AIC = -224.395
k = 0: AIC = -225.512
Augmented Dickey-Fuller test for LRY
including 0 lags of (1-L)LRY
(max was 5, criterion AIC)
sample size 54
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + e
1st-order autocorrelation coeff. for e: 0.195
estimated value of (a - 1): -0.0484706
test statistic: tau_c(1) = -1.00236
p-value 0.7463
Dickey-Fuller regression
OLS, using observations 1974:2-1987:3 (T = 54)
Dependent variable: d_LRY
             coefficient   std. error   t-ratio   p-value
  --------------------------------------------------------
  const       0.291153     0.287768      1.012    0.3163
  LRY_1      −0.0484706    0.0483563    −1.002    0.7463
AIC: -241.464 BIC: -237.486 HQC: -239.93
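To make the comparison concrete, here is the check I have in mind (the
sample dates are only my guess for the maxlag-5-compatible range of the
denmark data used in your script, so please treat them as an assumption):

# restrict the data so that the k = 0 regression is run on the same 49
# observations the test-down comparison uses (maxlag 5 loses six
# observations at the start of 1974:1-1987:3)
open denmark
smpl 1975:3 1987:3
adf 0 LRY --c --verbose
# the AIC reported here should match the k = 0 value in the test-down
# trace above (-225.512) rather than the full-sample -241.464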
Now if you look at step 2 of the coint run above, you will notice that the
regression output there is different: it corresponds to the test-down
sample.
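For reference, the coint step I am comparing against would be something
like the following (this is only my reconstruction of the relevant call in
your script, so the variable list is an assumption):

coint 5 LRM LRY --test-down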
I might be wrong, but I think adf was changed at some point to behave like
this. Perhaps coint was forgotten along the way :)
Best,
Koray
On Wed, Jun 3, 2015 at 6:43 PM, Sven Schreiber <svetosch(a)gmx.net> wrote:
On 03.06.2015 14:51, Allin Cottrell wrote:
> On Wed, 3 Jun 2015, Koray Simsek wrote:
>
>> adf runs the test-down on the common data set (N-1-maxlag) for lag
>> selection, but reports the "optimal" lag ADF results on the full
>> data set (N-1-bestlag).
>>
>> coint reports the ADF results directly from the test-down run for the
>> "optimal" lag (N-1-maxlag).
>>
>
> I'm not sure I understand this. For reference I'm appending a script and
> its output. I'm seeing identical single-variable ADF tests on the
> variable LRM in the context of "adf" and "coint". In both cases we start
> with the specified max of 5 lags and find that AIC is minimized at 5
> lags, so we lose 6 observations.
>
I think Koray's point applies to the case where the maxlag and the best
lag differ, whereas in your example they're equal. When they're different
there are two different possible samples, one that starts at t0+maxlag and
one that starts at t0+bestlag. His point was that adf and coint choose
different samples then. (If I understood him correctly.)
cheers,
sven