On 14.09.2017 at 23:50, Allin Cottrell wrote:
> However, in relation to your example it appears that maybe it's designed
> to prevent abrupt changes in weight at the ends of the lag range. As you
> compute the weights they jump from and to zero at the end-points; as
> Ghysels computes them the profile is smoother.
Well, there will _always_ be an abrupt jump to zero, namely at lag N+1. If
anybody wants a smooth function, that's fine with me, but then IMHO they
shouldn't call this "last lag zero" if that's not what it does.
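To make the point concrete, here's a quick sketch (in Python, just for
illustration) of what I take to be the normalized beta weights under two
possible lag grids -- both grids are my guesses, not necessarily what gretl
or Ghysels' code actually uses. With a grid that touches x = 1, the weight
at lag N is exactly zero (for theta2 > 1) but drops there abruptly; with an
interior grid the weights only taper towards a zero that sits at the
nonexistent lag N+1:

import numpy as np

def beta_weights(N, theta1, theta2, grid="interior"):
    # Normalized beta lag weights, w_i proportional to
    # x_i^(theta1-1) * (1 - x_i)^(theta2-1), i = 1..N.
    # grid="interior": x_i = i/(N+1)      -> never touches 0 or 1, so the
    #                                        weights reach zero only at the
    #                                        hypothetical lag N+1
    # grid="endpoint": x_i = (i-1)/(N-1)  -> x_N = 1, so the last weight is
    #                                        exactly zero whenever theta2 > 1,
    #                                        but jumps to zero there
    # Both grids are assumptions for illustration only.
    i = np.arange(1, N + 1)
    x = i / (N + 1) if grid == "interior" else (i - 1) / (N - 1)
    raw = x ** (theta1 - 1) * (1.0 - x) ** (theta2 - 1)
    return raw / raw.sum()

print(beta_weights(10, 1.0, 3.0, grid="interior"))  # smooth taper, last weight > 0
print(beta_weights(10, 1.0, 3.0, grid="endpoint"))  # last weight exactly 0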
We have inferred (or guessed) some of the implicit sanity restrictions on
the thetas: e.g. not (th2 == 1 && th1 != 1), because 0^0 == 1, as well as
(th2 > th1 || th2 == 1). I think these should then be imposed at least on
the starting values (easy) and perhaps also during the optimization (maybe
not so easy); a sketch of such a check follows below.
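Just as a sketch of what such a check on the starting values could look
like (it encodes only the restrictions we inferred above, not anything
gretl currently enforces):

def thetas_admissible(theta1, theta2):
    # Hypothetical sanity check for the two beta parameters, following the
    # restrictions inferred in this thread:
    #   - reject th2 == 1 together with th1 != 1 (the 0^0 == 1 corner case)
    #   - otherwise require th2 > th1, or th2 == 1
    if theta2 == 1 and theta1 != 1:
        return False
    return theta2 > theta1 or theta2 == 1

print(thetas_admissible(1.0, 1.0))  # True
print(thetas_admissible(2.0, 1.0))  # False (0^0 corner case)
print(thetas_admissible(1.0, 3.0))  # True
print(thetas_admissible(3.0, 2.0))  # False (th2 <= th1 and th2 != 1)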
Given the advertised spec of "last lag zero", frankly the original
implementation just looks buggy to me right now, and by implication
gretl's imitation as well. At the very least the user could be warned
that "last lag" sometimes refers to N but sometimes may just mean N+1.
But then again, a zero weight at N+1 is of course trivially true.
> Actually meeting the advertised condition of "zero last lag" -- on
> Ghysels' method -- requires that the coefficients are progressively
> shrinking as we near the maximum lag. And that is achieved only if
> theta[2] is substantially greater than theta[1]. (For example, theta[1]
> = 1, theta[2] >= 2.)
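Just to put numbers on that last point (again under my assumed interior
grid x_i = i/(N+1), so a sketch rather than Ghysels' actual code): the
weight attached to the maximum lag only becomes negligible when theta[2]
is well above theta[1].

import numpy as np

def last_beta_weight(N, theta1, theta2):
    # Normalized beta weights on the assumed interior grid x_i = i/(N+1);
    # returns the weight attached to the maximum lag N.
    x = np.arange(1, N + 1) / (N + 1)
    raw = x ** (theta1 - 1) * (1.0 - x) ** (theta2 - 1)
    return (raw / raw.sum())[-1]

for t1, t2 in [(1.0, 1.0), (1.0, 1.5), (1.0, 2.0), (1.0, 5.0), (2.0, 2.0)]:
    print(f"theta1={t1}, theta2={t2}: weight at lag N = {last_beta_weight(10, t1, t2):.4f}")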
Any ideas what the R package or Eviews (>= 9.5) do? The Eviews
documentation wasn't very illuminating on that detail, but I haven't
checked the programming reference yet; maybe there's something there.
thanks,
sven