Allin Cottrell wrote:
> On Thu, 6 Sep 2007, Sven Schreiber wrote:
>> Browsing the PcGive documentation, I also saw that they do not impose
>> the normalization restrictions in the maximization stage. They actually
>> remove them before estimation, then estimate, and in the end reimpose
>> the normalization, saying that it's more robust.
> Ah, that's interesting. I was thinking myself that might be
> a sensible thing to do. I'll have to think about how the details
> would go.
To add to that: in terms of the formulation

vec(\beta)  = H * \phi + h
vec(\alpha') = G * \theta

they actually do the scale removal only if:
(1) alpha has only exclusion restrictions (I'd say that means only zero
rows or distinct unit-vector rows in G), and
(2) any non-zero entry in h corresponds to a fixed beta element (i.e. a
zero row in H in that position, meaning no further contribution from
phi; which they call scale-homogeneous).
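
To make the two conditions concrete, here's a small numpy illustration
(the toy matrices and the helper function are just my own sketch, not
anything from PcGive or gretl): p = 3, r = 1, with the first beta
element fixed to 1 through h, and alpha' left unrestricted.

import numpy as np

# toy case: p = 3, r = 1, first beta element fixed to 1 via h
H = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])
h = np.array([1., 0., 0.])
G = np.eye(3)   # alpha' unrestricted, i.e. trivially exclusion-only

# condition (2): every non-zero entry of h must sit in an all-zero row
# of H, so the corresponding beta element is truly fixed
cond2 = all(np.allclose(H[i], 0.0) for i in np.flatnonzero(h))

# condition (1): each row of G is either zero (excluded alpha element)
# or a unit vector, and the unit vectors are all distinct
def exclusion_only(G):
    units = []
    for row in G:
        nz = np.flatnonzero(row)
        if nz.size == 0:
            continue                      # zero row: element excluded
        if nz.size > 1 or row[nz[0]] != 1.0:
            return False                  # not a unit vector
        units.append(nz[0])
    return len(units) == len(set(units))  # unit vectors distinct

print(exclusion_only(G), cond2)   # -> True True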
So I guess a first shot at the algorithm would be:
1. find the smallest (or largest?) fixed element in a cointegration
relation (beta column)
2. replace this non-zero entry in h by 0
3. insert a new unit vector row/column "cross" in H, which replaces the
respective zero row but increases the column count by 1 (if you know
what I mean: the formerly fixed element becomes one more free element
of phi)
4. do the switching
5. divide each beta column by the estimate in the position of the
to-be-normalized element and multiply by the desired normalization
value (see the sketch after this list)
6. adjust the covariance matrix accordingly (details are left as an
exercise to the reader ;-)
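
And here is the rough numpy sketch of steps 1-3 and 5 promised above,
again only for the r = 1 case and with my own made-up function names;
picking the first fixed element just stands in for the smallest/largest
rule of step 1:

import numpy as np

def free_up_normalization(H, h):
    # steps 1-3 (r = 1 for simplicity): release one fixed beta element
    fixed = np.flatnonzero(h)          # positions with non-zero h entries
    pos = fixed[0]                     # step 1: placeholder selection rule
    target = h[pos]                    # the desired normalized value
    h_new = h.copy()
    h_new[pos] = 0.0                   # step 2: zero out the entry in h
    new_col = np.zeros((H.shape[0], 1))
    new_col[pos] = 1.0                 # step 3: the unit-vector "cross"
    H_new = np.hstack([H, new_col])    # column count grows by 1
    return H_new, h_new, pos, target

def renormalize(beta_col, pos, target):
    # step 5: rescale so the element at 'pos' equals the desired value
    return beta_col * (target / beta_col[pos])

# toy usage: beta = (1, ., .)' originally normalized via h
H = np.array([[0., 0.], [1., 0.], [0., 1.]])
h = np.array([1., 0., 0.])
H2, h2, pos, target = free_up_normalization(H, h)
beta_hat = np.array([0.8, -1.6, 2.4])      # pretend output of the switching
print(renormalize(beta_hat, pos, target))  # -> [ 1. -2.  3.]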
If we're lucky this would make the choice of starting values a little
less important, right?
-sven