Dear Allin,
I am sorry that the two lines I sent in my original post do not replicate the issue in the current snapshot: they were run in an older gretl where MAIC was the test-down criterion, and it selected a smaller lag value in the test-down for LRM.
Sven explained my point perfectly. If you run the ADF test for LRY (not LRM) as follows, you'll notice that the AIC below the regression output differs from the AIC for k = 0 in the test-down procedure. This is because the regression is run on the full sample, whereas the test-down is done on the lag-5-compatible sample.
adf 5 LRY --c --test-down --verbose
k = 5: AIC = -217.928
k = 4: AIC = -219.841
k = 3: AIC = -220.730
k = 2: AIC = -222.395
k = 1: AIC = -224.395
k = 0: AIC = -225.512
Augmented Dickey-Fuller test for LRY
including 0 lags of (1-L)LRY
(max was 5, criterion AIC)
sample size 54
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + e
1st-order autocorrelation coeff. for e: 0.195
estimated value of (a - 1): -0.0484706
test statistic: tau_c(1) = -1.00236
p-value 0.7463
Dickey-Fuller regression
OLS, using observations 1974:2-1987:3 (T = 54)
Dependent variable: d_LRY
             coefficient   std. error   t-ratio   p-value
  -------------------------------------------------------
  const       0.291153     0.287768      1.012    0.3163
  LRY_1      −0.0484706    0.0483563    −1.002    0.7463

AIC: -241.464   BIC: -237.486   HQC: -239.93
Now if you look at step 2 of the coint output you ran above, you will notice that the regression output is different: it is the one for the test-down sample.
I might be wrong, but I think adf was changed at some point to behave like this, and perhaps coint was forgotten along the way :)
Best,
Koray