Dear All,

I've been trying to figure out why the --test-down option behaves differently in coint and adf. The help file says that coint follows the same approach as adf, but the output for the initial DF steps in coint (steps 1 and 2) is not consistent with that.

I would expect both commands to run the ADF regressions on a subset of the data consistent with the maximum lag (for comparability across lag orders, I presume).

Initially this appears to be the case, as the MAIC values from adf and coint are identical. But after finding the lag order with minimum MAIC, adf reports the results for the full usable sample, whereas coint reports the results for the subset of data on which the lag-selection regressions were run.
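
To make sure I understand the intended procedure, here is a rough hansl sketch of what I think --test-down is doing (this is only my reading of the documentation, not gretl's actual internals; I use plain AIC via the $aic accessor as a stand-in for the MAIC criterion, since my question is about the sample, not the criterion itself):

open denmark.gdt
scalar kmax = 5
series dy = diff(LRM)
series y1 = LRM(-1)
# common sample: one obs lost to differencing, kmax obs lost to the longest lag
smpl +6 ;
matrix crit = zeros(kmax + 1, 1)
loop k = 0..kmax
    if k == 0
        ols dy const y1 --quiet
    else
        list DL = lags(k, dy)
        ols dy const y1 DL --quiet
    endif
    crit[k+1] = $aic
endloop
print crit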

I noticed a few threads in which the logic of --test-down was discussed (whether the maximum possible sample should be used for each lag versus the current set-up), but I could not find anything about this particular difference between adf and coint, so I am not sure whether it is a bug.

If you would like to replicate this situation, please load denmark.gdt and then run

adf 5 LRM --c --test-down --verbose
coint 5 LRM LRY --test-down --verbose

Although the MAIC values are the same in both calls, adf reports the selected-lag ADF regression with T=53, while coint reports it with T=49.
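
If it helps, here is a quick check one could run. I am guessing the selected lag is 1, inferred from T=53: denmark.gdt has 55 quarterly observations, so 55 - 1 - 1 = 53 with one lag, while 55 - 1 - 5 = 49 with the maximum of five. The second adf call below restricts the sample by hand so that the lag-1 regression uses the same observations as the lag-5 one:

open denmark.gdt
adf 1 LRM --c --verbose   # should match the T=53 regression reported by adf
smpl +4 ;                 # drop 4 obs so lag 1 uses the same sample as lag 5
adf 1 LRM --c --verbose   # should match the T=49 regression reported by coint
smpl full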

Which of the two outputs corresponds to the lag with minimum MAIC? Is this a bug, or is there a specific reason behind this choice? My gretl version is 1.9.92.

Best,
Koray