On Thu, 5 Feb 2009, Allin Cottrell wrote:
In response to Sven's request not so long ago I enabled
specification of particular lags of the y variables in VARs (e.g.
you could have a VAR with lags 1 and 3, skipping 2).
I said I'd follow up by allowing that sort of thing for VECMs too,
but unless I'm getting mixed up (quite possible) this seems
substantially more difficult.
Consider the gappy VAR (deterministic terms and error omitted for
simplicity):
y_t = A_1 y_{t-1} + A_3 y_{t-3}
The VECM representation is, I think,
\Delta y_t = \Pi y_{t-1} + G_1 \Delta y_{t-1} + G_2 \Delta y_{t-2}
where \Pi = A_1 + A_3 - I
      G_1 = -A_3
      G_2 = -A_3
That is, not only have we "dropped a lag" (as per usual), but now
there's an implied restriction on the G_i matrices (here G_1 = G_2).
First question: have I gone off the rails here? Second question: if
not, how would we handle this, and would it be worth the trouble?
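To make the algebra concrete, here's a quick numerical check of the mapping above (just a numpy sketch with arbitrary made-up matrices, nothing to do with gretl internals):

```python
# Check the VAR -> VECM mapping for the "gappy" VAR
#   y_t = A_1 y_{t-1} + A_3 y_{t-3}
# against the claimed VECM form with
#   Pi = A_1 + A_3 - I,  G_1 = G_2 = -A_3.
import numpy as np

rng = np.random.default_rng(1)
n = 2
A1 = rng.normal(scale=0.2, size=(n, n))  # arbitrary coefficients
A3 = rng.normal(scale=0.2, size=(n, n))

Pi = A1 + A3 - np.eye(n)
G1 = -A3
G2 = -A3

# arbitrary values for y_{t-1}, y_{t-2}, y_{t-3}
y1, y2, y3 = rng.normal(size=(3, n))

y_var = A1 @ y1 + A3 @ y3                                 # VAR form
y_vecm = y1 + Pi @ y1 + G1 @ (y1 - y2) + G2 @ (y2 - y3)   # VECM form
print(np.allclose(y_var, y_vecm))  # exact algebraic identity -> True
```

The identity holds term by term: expanding the VECM form gives (A_1 - I) y_{t-1} + A_3 y_{t-3} for \Delta y_t, which is exactly the gappy VAR.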
Not easy, in general. I guess we have to separate several cases.
* Cointegration ignored
As Sven said, as long as all the equations have the same lag structure, we
can merrily keep using OLS. However, if the individual equations don't all
have the same structure, then it's like estimating a VAR with some
parameter constraints: OLS would still be consistent, but for efficiency
we'd have to go for some GLS variant (this is beautifully explained in
Lütkepohl's big book).
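For the common-lag-structure case, per-equation OLS really is all there is to it; a toy numpy sketch (coefficient matrices are made up for illustration):

```python
# Per-equation OLS on a "gappy" VAR with lags 1 and 3 only.
# Both equations share the same lag structure, so equation-by-equation
# OLS coincides with system estimation.
import numpy as np

rng = np.random.default_rng(42)
T, n = 500, 2
A1 = np.array([[0.4, 0.1], [0.0, 0.3]])   # made-up, stable coefficients
A3 = np.array([[0.2, 0.0], [0.1, 0.2]])

# simulate the gappy VAR: y_t = A1 y_{t-1} + A3 y_{t-3} + e_t
y = np.zeros((T, n))
for t in range(3, T):
    y[t] = A1 @ y[t-1] + A3 @ y[t-3] + rng.normal(scale=0.1, size=n)

# regressors: y_{t-1} and y_{t-3} only (lag 2 skipped)
X = np.hstack([y[2:T-1], y[0:T-3]])
B, *_ = np.linalg.lstsq(X, y[3:T], rcond=None)  # B stacks [A1'; A3']
A1_hat, A3_hat = B[:n].T, B[n:].T
print(np.round(A1_hat, 2))
print(np.round(A3_hat, 2))
```

With equal lag structures across equations, the stacked least-squares fit above is exactly what running OLS equation by equation would give.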
* Cointegration taken into account
Let me recall that Johansen-style VECM estimation is itself, implicitly,
just the estimation of an ordinary VAR, subject to the peculiar constraint
that the A(1) matrix should be singular (with pre-specified rank). The
beauty of Johansen's algorithm is that the VECM reparametrisation allows
for a simple and elegant solution in the just-identified case; the
over-identified case, as we know, requires numerical optimisation, which
may be far from trivial.
In the "gapless" case, once you've got \beta in your hand, you're pretty
much done. However, as Allin rightly remarked, the VAR->VECM conversion is
painless only when you consider a "gapless" VAR. In the "gappy" case, to
do a clean job, you'd have to consider the restrictions on A(1) and the
restrictions on the individual A_1, A_2 etc matrices _jointly_, which
could become a daunting optimisation task.
Now, I don't remember a single paper where this is done, and I don't know
whether the industry standard (PcGive/PcFiml) lets you tame such a beast
by point-and-click (though I doubt it). What I've seen done in several
papers is one of these alternatives; both have in common that \beta is
assumed known (perhaps, pre-estimated somehow):
a) plug \beta in the VECM and run FIML/GLS/whatever to handle the
restrictions on the G_i matrices
b) impose a "gappy" structure on the G_i matrices and don't bother
analysing what the resulting constraints on the A_i matrices look like.
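Alternative (b) can be sketched in a few lines of numpy (beta, alpha and the G matrix below are made-up illustrative values; beta is treated as known, as in the papers mentioned):

```python
# Alternative (b): take beta as known, build the error-correction term
# beta'y_{t-1}, and regress dy_t on it plus a "gappy" set of lagged
# differences by plain OLS.
import numpy as np

rng = np.random.default_rng(7)
T, n = 400, 2
beta = np.array([1.0, -1.0])             # assumed-known cointegrating vector
alpha = np.array([-0.3, 0.2])            # loadings (illustrative)
G1 = np.array([[0.2, 0.0], [0.0, 0.1]])  # only dy_{t-1} enters; lag 2 skipped

# simulate the VECM
y = np.zeros((T, n))
for t in range(2, T):
    dy = alpha * (beta @ y[t-1]) + G1 @ (y[t-1] - y[t-2]) \
         + rng.normal(scale=0.1, size=n)
    y[t] = y[t-1] + dy

# OLS with beta plugged in: regress dy_t on beta'y_{t-1} and dy_{t-1}
dY = np.diff(y, axis=0)                     # dY[i] = y[i+1] - y[i]
X = np.hstack([(y[1:T-1] @ beta)[:, None],  # error-correction term
               dY[:-1]])                    # gappy lagged differences
B, *_ = np.linalg.lstsq(X, dY[1:], rcond=None)
alpha_hat, G1_hat = B[0], B[1:].T
print(np.round(alpha_hat, 2))
print(np.round(G1_hat, 2))
```

Once beta is fixed, everything is linear in the remaining parameters, which is why plain OLS (or GLS, if the equations differ) suffices here.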
Alternative (b) may be preferable if what you're interested in is a
_statistical_ representation of your data. However, (a) is more common in
papers where the constraints on the A_i stem from some theoretical
consideration (e.g. papers on the New Keynesian Phillips Curve).
Sorry for being so verbose!
Riccardo (Jack) Lucchetti
Dipartimento di Economia
Università Politecnica delle Marche
r.lucchetti(a)univpm.it
http://www.econ.univpm.it/lucchetti