Hi,
I suspect that the lrvar function (long-run variance estimation) is
slightly wrong in the panel case. Consider the following example:
<hansl>
function matrix lrvarFE(series x, int b "bandwidth")
    N = max($unit)            # number of cross-sectional units
    out = 0
    loop i = 1..N
        smpl i i --unit       # restrict the sample to unit i only
        errorif(abs(mean(x)) > 1e-10, "whoops, found non-zero within mean")
        out += lrvar(x, b)    # per-unit long-run variance
    endloop
    return out / N            # average over the units
end function
# -- test case
open grunfeld
# do an explicit FE demeaning first, for easier comparison
panel kstock const
series kres = $uhat
eval lrvarFE(kres, 5) # 2.0608e+005
# comparison
eval lrvar(kres, 5) # 177215.62
# the following is wrong on purpose:
eval lrvar({kres}, 5) # same result as the previous line
</hansl>
Explanation: The last calculation (wrong on purpose) converts the kres
series directly into a matrix. Given gretl's panel storage format of
stacked time series, the values for all units are simply stacked into a
single column. The Bartlett kernel in lrvar() then partly connects
observations from different panel units, which is spurious and distorts
the result.
Given that gretl's result in the penultimate line is identical to that,
I guess the same stacking happens internally when lrvar() is given a
panel series.
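For illustration, slicing the stacked column into per-unit blocks by
hand should reproduce both numbers (a sketch; I'm assuming $pd gives
the per-unit time dimension, which should be 20 for grunfeld):
<hansl>
matrix m = {kres}
T = $pd             # per-unit time dimension (assumed: 20)
N = rows(m) / T
acc = 0
loop i = 1..N
    # per-unit block, so the kernel never crosses a unit boundary
    acc += lrvar(m[(i-1)*T+1 : i*T], 5)
endloop
eval acc / N        # expected: same as lrvarFE(kres, 5)
eval lrvar(m, 5)    # expected: same as the series-based call
</hansl>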
The output of my lrvarFE function above, in contrast, is what I believe
to be the correct result: the average of the per-unit long-run
variances.
The related lrcovar() function is not directly affected, because it
takes a matrix instead of (a list of) series as input, so it's the
caller's responsibility to arrange the data correctly. (Although it
would be nice if it accepted a list, but that's a different issue.)
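For example, a per-unit version analogous to lrvarFE might look like
this (an untested sketch: the name lrcovarFE, the manual row slicing
and the one-argument lrcovar() call are my own assumptions):
<hansl>
function matrix lrcovarFE(list L)
    matrix M = {L}        # all units stacked in the rows
    T = $pd               # per-unit time dimension
    N = rows(M) / T
    matrix out = zeros(nelem(L), nelem(L))
    loop i = 1..N
        # long-run covariance of unit i's block only
        out += lrcovar(M[(i-1)*T+1 : i*T , ])
    endloop
    return out / N
end function
</hansl>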
But I could imagine that the filter functions suffer from a similar
problem (bkfilt, bwfilt, hpfilt, perhaps also pergm? kdensity?), but I
haven't checked that.
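If they do, a per-unit wrapper in the spirit of lrvarFE could serve as
a workaround (again untested, using hpfilt() as the example; whether
hpfilt() even accepts a single-unit panel sample is part of what would
need checking):
<hansl>
function series hpfiltFE(series x)
    series out = NA
    loop i = 1..max($unit)
        smpl i i --unit
        out = hpfilt(x)   # writes only the current unit's rows
    endloop
    smpl full
    return out
end function
</hansl>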
Does all that sound right?
Sven