On Fri, 15 Sep 2017, Sven Schreiber wrote:
On 15.09.2017 at 14:19, Allin Cottrell wrote:
> On Fri, 15 Sep 2017, Sven Schreiber wrote:
>
>> On 14.09.2017 at 23:50, Allin Cottrell wrote:
>>
>>> However, in relation to your example it appears that maybe it's designed
>>> to prevent abrupt changes in weight at the ends of the lag range. As you
>>> compute the weights they jump from and to zero at the end-points; as
>>> Ghysels computes them the profile is smoother.
>>
>> Well, there will _always_ be abrupt jumps to zero, namely at lag N+1.
>
> The jump will not be very abrupt if the prior coefficients are declining,
> which seems to be taken for granted.
Well, except in other cases we have looked at, such as th1==2 and th2==1
(which has nothing to do with the eps thing, of course), where both
implementations jump from the maximum weight, 0.1, at lag N to 0 at lag N+1.
My personal takeaway is that this beta0/betann is unreliable and/or
misleading by construction. I'd rather go with the other beta or perhaps with
the Almons (not to mention umidas).
One more comment from me: by "seems to be taken for granted" above I
meant that it seems to be taken for granted that suitable beta
hyperparameters for MIDAS will have theta2 sufficiently large
relative to theta1 to ensure that the weights decline at higher lags
(possibly after an initial hump). In that case the "zero last lag"
(two-parameter) version really does give a zero last lag.
If, on the contrary, the params are such that the weights are flat or
even increasing at higher lags, then "zero last lag" does not get
imposed by Ghysels' algorithm, but I guess that means there's
misspecification: not enough lags have been included.
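For concreteness, the two grid conventions under discussion can be sketched
as below. This is a hedged illustration, not gretl's or Ghysels' actual code:
the grid definitions, the choice N=20 (which happens to reproduce the maximum
weight of 0.1 mentioned above), and the eps mechanism are my assumptions.

```python
import numpy as np

def beta_weights(theta1, theta2, N, eps=0.0):
    """Normalized beta lag weights on an N-point grid in [eps, 1-eps].

    With eps=0 the grid hits 0 and 1 exactly, so the last weight is
    exactly zero whenever theta2 > 1; a small positive eps keeps the
    grid strictly inside (0, 1), which is one plausible reading of
    what Ghysels' smoother profile does.
    """
    x = np.linspace(eps, 1.0 - eps, N)
    w = x ** (theta1 - 1) * (1.0 - x) ** (theta2 - 1)
    return w / w.sum()

# Declining weights (theta2 large relative to theta1): the endpoint
# grid really does impose a zero last lag.
declining = beta_weights(1.0, 5.0, 20)    # declining[-1] is exactly 0

# theta1 == 2, theta2 == 1: weights increase linearly, the maximum
# (0.1 for the assumed N == 20) sits at lag N, and either convention
# then jumps abruptly to zero at lag N+1.
increasing = beta_weights(2.0, 1.0, 20)   # increasing[-1] is the max, 0.1
```

The flat-or-increasing case shows the point in the last paragraph: no grid
convention can make the last weight zero when theta2 <= 1.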
Allin