On 03.06.2015 at 14:51, Allin Cottrell wrote:
> On Wed, 3 Jun 2015, Koray Simsek wrote:
>
>> adf runs the test-down on the common data set (N-1-maxlag) for lag
>> selection, but reports the "optimal" lag ADF results on the full data
>> set (N-1-bestlag).
>>
>> coint reports the ADF results directly from the test-down run for the
>> "optimal" lag (N-1-maxlag).
> I'm not sure I understand this. For reference I'm appending a script
> and its output. I'm seeing identical single-variable ADF tests on the
> variable LRM in the context of "adf" and "coint". In both cases we
> start with the specified max of 5 lags and find that AIC is minimized
> at 5 lags, so we lose 6 observations.
I think Koray's point applies to the case where the maxlag and the best
lag differ, whereas in your example they're equal. When they're
different there are two different possible samples, one that starts at
t0+maxlag and one that starts at t0+bestlag. His point was that adf and
coint choose different samples then. (If I understood him correctly.)
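
To make the bookkeeping concrete (with made-up numbers, just restating
the arithmetic you two are using): with T = 55 observations and
maxlag = 5, every regression in the test-down pass includes the lagged
level plus up to 5 lagged differences, so the common sample has
55 - 5 - 1 = 49 usable observations. If AIC happened to pick
bestlag = 2, re-estimating with 2 lags on all available data would use
55 - 2 - 1 = 52 observations. Koray's reading is that adf then reports
the 52-observation regression while coint reports the 49-observation
one; in your script bestlag equals maxlag, so the two coincide.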
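
For what it's worth, here is a minimal hansl sketch (untested; I'm
picking the denmark data shipped with gretl and a deliberately generous
maximum lag so that the AIC choice is likely, though not guaranteed, to
fall below it) that should expose the difference, if there is one, by
comparing the estimation samples printed by the two commands:

open denmark.gdt    # Johansen's Danish data: LRM, LRY, IBO, IDE
# univariate ADF with automatic lag selection (test-down, AIC)
adf 8 LRM --c --test-down --verbose
# Engle-Granger step: ADF tests on the individual series plus the
# cointegrating residuals, again with test-down
coint 8 LRM LRY --test-down --verbose
# compare the samples reported for LRM by the two commands: if the
# chosen lag ends up below 8, they should differ per Koray's report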
cheers,
sven