Allin Cottrell wrote:
On Wed, 5 Sep 2007, Sven Schreiber wrote:
[re. the denominator used in calculating VECM standard errors]
>> I could look up what PcGive does in the
>> documentation tomorrow if you like.
> Thanks, that would be helpful, since PcGive has established itself
> as something of a standard in this area. (I'm less concerned
> about emulating what EViews does.)
OK, so it seems that PcGive 10 uses the square root of T-c in the
denominator, where c is the "average number of estimated parameters per
equation, rounded towards zero".
Let's see if we can verify that for our specific case: Comparing
standard errors between gretl and PcGive for the same beta coefficient
gives me a value of:
c=9
And we have:
4*4=16 short-run coefficients (lagged differences),
4 alpha coefficients,
5 beta coefficients, of which 1 is just a normalization, so actually 4, and
3*4=12 coefficients for seasonal dummies.
------
36 / 4 = 9 coefficients per equation.
Looks good!
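For a quick sanity check, the arithmetic above can be put in a short script. This is just a sketch in Python: the per-equation parameter counts are the ones listed above, and T is a purely illustrative sample size (the actual T of my dataset isn't relevant for the counting).

```python
# Verify PcGive's "c" for the 4-equation VECM discussed above
n_eq = 4           # number of equations in the system
shortrun = 4 * 4   # short-run coefficients (lagged differences)
alpha = 4          # adjustment (alpha) coefficients
beta = 5 - 1       # beta coefficients, minus one normalization
seasonals = 3 * 4  # coefficients for seasonal dummies

total = shortrun + alpha + beta + seasonals
# "average number of estimated parameters per equation,
#  rounded towards zero" -- int() truncates towards zero
c = int(total / n_eq)
print(total, c)  # 36 9

# Illustrative standard-error denominator with a hypothetical T
T = 100
denom = (T - c) ** 0.5
```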
> Sven, Jack and others: while you were away, I put up a page that
> contains an account of where I got to with VECM restrictions,
> along with some sample scripts, at
> http://ricardo.ecn.wfu.edu/gretl/
> Any comments/criticisms would be very welcome.
Also looks good, but since the date there is August, I guess the
problems with initial values for restricted estimation mentioned in your
earlier email are still there, right?
Browsing the PcGive documentation, I also saw that they do not impose
the normalization restrictions in the maximization stage. They actually
remove them before estimation, then estimate, and in the end reimpose
the normalization, saying that it's more robust.
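As I read it, reimposing the normalization at the end amounts to rescaling the estimated cointegration vector. A minimal sketch of that final step, assuming (hypothetically) that the first beta coefficient is the one normalized to 1:

```python
def renormalize(beta):
    """Rescale a cointegration vector so its first element equals 1.

    A hypothetical helper mimicking the final step described above:
    estimate without the normalization restriction, then reimpose
    the normalization by rescaling afterwards.
    """
    pivot = beta[0]
    if pivot == 0:
        raise ValueError("cannot normalize on a zero coefficient")
    return [b / pivot for b in beta]

# Example: a made-up unnormalized estimate of the 5-element beta
beta_hat = [2.0, -1.0, 0.5, 4.0, -3.0]
print(renormalize(beta_hat))  # [1.0, -0.5, 0.25, 2.0, -1.5]
```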
-sven