I have a minor question concerning how sample size affects ADF unit root test results in gretl and other econometric software.

Let me give you an example:

I have a series of T=51, for which the ADF test results are identical in gretl, R (urca) and JMulTi. Yet when I use a subsample of T=20 (the last twenty observations), the test statistics obtained with gretl differ noticeably from those of R and JMulTi (the latter two, however, still agree with each other). Of course, I use the same deterministic term option and lag length in all comparisons.
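For what it's worth, one way to pin down which package is doing what is to compute the ADF t-statistic "by hand" and compare it to each program's output on the same subsample. Below is a minimal NumPy sketch of the constant-only ADF regression, Delta y_t = a + rho*y_{t-1} + sum_i gamma_i*Delta y_{t-i} + e_t, where the reported statistic is the t-ratio on y_{t-1}. The function name and the assumption of a constant-only deterministic term are mine, purely for illustration:

```python
import numpy as np

def adf_tstat(y, lags=1):
    """ADF t-statistic (constant-only case) computed via plain OLS.

    Regression: Delta y_t = a + rho*y_{t-1} + sum_{i=1..lags} g_i*Delta y_{t-i} + e_t
    Returns the t-ratio on y_{t-1}, which is what the packages report.
    """
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                       # Delta y_t
    ylag = y[lags:-1]                     # y_{t-1} on the effective sample
    cols = [np.ones_like(ylag), ylag]
    for i in range(1, lags + 1):
        cols.append(dy[lags - i:-i])      # Delta y_{t-i}
    X = np.column_stack(cols)
    z = dy[lags:]                         # dependent variable
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    s2 = resid @ resid / (len(z) - X.shape[1])   # OLS error variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])   # t-ratio on y_{t-1}
```

Running this on the full series and on the last twenty observations (with the same lag order) should tell you which package's number the textbook regression reproduces, e.g. `adf_tstat(y[-20:], lags=1)` versus the gretl and urca output.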

Is there a particular reason for this discrepancy, or am I doing something wrong?

(I attach the data file for those who'd like to replicate the issue.)