While playing with large datasets, I found that the "omit" command wouldn't
work for a particular model: I got a "Data error" message. Some investigation
led me to discover (I didn't know this before) that, in order for "omit" to
work, you need the same number of observations for both the unrestricted and
the restricted model. The same goes for "add".
The reason for this is that the test computed by these commands involves
comparing the two fitted models, which is fine in itself: such a comparison
only makes sense if both models are estimated on the same sample. However, for
large datasets this is a very serious limitation: when you have many variables
and thousands of observations, the likelihood that some of the added/omitted
variables contain missing values (so that the two samples differ) is very
high.
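Just to make the problem concrete (made-up names; suppose x3 has a few
missing values):

  ols y const x1 x2 x3    # sample excludes rows where x3 is missing
  omit x3                 # restricted model would use a larger sample

This is exactly the situation in which "omit" stops with "Data error".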
Now, this can be worked around via "restrict", which does a Wald-style test,
but if you want to exclude dozens of variables at the same time (think of
industry dummies, for instance), it's not the handiest of tools.
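For concreteness, the workaround looks like this (series names and
coefficient positions made up for the example, syntax from memory):

  ols y const x1 x2 d1 d2 d3
  restrict
    b[4] = 0    # one line per industry dummy
    b[5] = 0
    b[6] = 0
  end restrict

Fine with three dummies, but with thirty of them you end up writing thirty
restriction lines by hand.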
Therefore, my proposal (which probably involves some work, but not much): how
about adding a --wald option to "omit", by which the test is carried out by
the "restrict" machinery instead of the default?
Riccardo "Jack" Lucchetti
Dipartimento di Economia
Facoltà di Economia "G. Fuà"
Ancona