On Sun, 18 Nov 2012, Sven Schreiber wrote:
> On 11/18/2012 03:43 AM, Allin Cottrell wrote:
> > On Sat, 17 Nov 2012, Lee Adkins wrote:
> >
> > You're estimating the model subject to a restriction that is
> > violently at odds with the data (F(6, 1073) = 463) and you're
> > stressing the numerical apparatus to breaking point.
> >
> > For comparison, here's the estimation done manually using William
> > Greene's formulae:
> >
> ...
> > Here, the computed variance of the restricted estimator has a
> > negative diagonal entry, so you get a NaN among the standard errors.
>
> This is a great opportunity to repeat what I wrote in the slightly
> different context of the Hausman test
> (http://econ.schreiberlin.de/schreiberresearch.html#hausman): if you
> get weirdness such as negative estimated variances, chances are that
> the restriction should be rejected, which is exactly the opposite of
> what Greene and others have been recommending (namely, setting the
> test statistic to zero).
>
> With respect to gretl's behavior: some warning message, as used to be
> the case with the Hausman test, would be nice, I guess. What I didn't
> understand from Lee's output is why values were displayed for the
> constant term if everything else fails.

Well, it's not really that "everything else fails". The point
estimates are OK, as are the test statistic and the variance of
the intercept (as can be verified by estimating the constant-only
model). And the elements of the variance matrix other than [1,1]
are all pretty much machine zero; it's just that some of them fall
slightly to the negative side of zero.
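
For concreteness, here's a bare-bones numpy sketch of the textbook
restricted-LS computation in question (Greene's formulae as referred
to above); the function name and the degrees-of-freedom convention
are just for illustration, not a transcript of gretl's internals:

  import numpy as np

  def restricted_ols(X, y, R, q):
      # Restricted least squares per the textbook formulae:
      #   b* = b - A R' [R A R']^{-1} (R b - q),  A = (X'X)^{-1}
      # Illustrative sketch only.
      n, k = X.shape
      J = R.shape[0]                       # number of restrictions
      A = np.linalg.inv(X.T @ X)           # (X'X)^{-1}
      b = A @ (X.T @ y)                    # unrestricted OLS estimates
      M = np.linalg.inv(R @ A @ R.T)       # [R (X'X)^{-1} R']^{-1}
      b_r = b - A @ R.T @ M @ (R @ b - q)  # restricted estimates
      u = y - X @ b_r                      # restricted residuals
      s2 = (u @ u) / (n - k + J)           # error variance, df-adjusted
      V = s2 * (A - A @ R.T @ M @ R @ A)   # restricted covariance
      # In theory the diagonal entries for coefficients pinned down
      # by the restrictions are exactly zero; in floating point they
      # can come out around -1e-18, and np.sqrt then produces NaN.
      return b_r, V, np.sqrt(np.diag(V))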
I'm thinking that when we do restricted OLS, maybe we should allow a
small numerical slop factor when computing standard errors. That is,
take negative diagonal values as zero if their absolute magnitude is
below some small value. In this case a threshold of 1.0e-17 would do
the job.
If we were to do this, I'd favour restricting the "clean-up" to the
standard errors (printing 0 rather than NA) and letting the $vcv
accessor show what was actually computed, warts and all.
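
In sketch form, the sort of thing I have in mind (numpy notation
again; the helper name is made up, and the tolerance is just the
figure mentioned above):

  import numpy as np

  SLOP = 1.0e-17  # illustrative threshold, open to discussion

  def printable_stderrs(V):
      # Clean up only what gets printed: a diagonal entry that is
      # negative but smaller in magnitude than SLOP is treated as 0,
      # so the standard error prints as 0 rather than NaN; anything
      # more substantially negative still comes out NaN. V itself is
      # not modified, so a $vcv-style accessor would still show the
      # raw matrix, warts and all.
      d = np.array(np.diag(V), dtype=float)  # copy; np.diag is read-only
      d[(d < 0) & (-d < SLOP)] = 0.0
      return np.sqrt(d)
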
Allin