On Fri, 11 Sep 2009, Talha Yalta wrote:
> I have a bug report and a feature request:
> 1)- When I open a saved model (OLS) based on a sub-sample, I
> see that, when running various tests, the original subsample is
> not always remembered. For example, I have a model saved as an
> icon which uses observations 1-90. When I open the model with
> the full range restored, White's test and the Breusch-Pagan and
> Chow tests, for example, remember the subsample, while the RESET
> and normality-of-residuals tests don't. Here, the RESET test
> returns 1.8e+308, while the normality-of-residuals window
> reports using observations 1-350 instead of 1-90 (although the
> results seem to be correct).
> Moreover, if instead of the full range I use another range,
> say 91-150, White's test and the others also seem to return
> misleading output or error messages... One more thing to
> consider here is that it is possible (as it probably should be)
> to change the sample range while a model window is open; doing
> this also causes the above errors, so this situation should be
> taken into account as well.
Could you please give a step-by-step account of how to provoke
this problem? There is supposed to be a guard in place against
this sort of thing and in my experience the guard seems
effective. I guess it's breaking down in some circumstances but
I'm not sure how.
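For reference, here is a minimal hansl sketch of the check the
guard embodies, using the $sample accessor to recover a model's
estimation sample. The guard itself lives in gretl's C code, and
the model specification below is arbitrary, so this is just an
illustration of the logic:

  open data4-10
  smpl WHITE > 0.5 --restrict
  ols 1 0 2 3               # arbitrary illustrative model
  series used = $sample     # 1 for observations used in estimation
  smpl full
  # tests on the model are meaningful only if the current sample
  # coincides with the model's estimation sample
  if sum(used == 1) != $nobs   # sum() skips missing values
      print "model and dataset samples differ"
  endif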
Here's an example:
* Open Ramanathan data4-10
* Use "/Sample/Restrict based on criterion" to restrict the
sample to WHITE > 0.5
* Estimate a model via OLS and save it as an icon
* Save the session to file
* Restart gretl and open the session file
* Choose "/Sample/Restore full range"
* Double-click on the saved model in the icon window to
display the model
* Click on the Tests menu in the model window
I get the message: "model is subsampled, dataset is not", and the
Tests menu is deactivated (grayed out), along with various other
items in the model-window menus. This is as it should be (though
it might be nice if there were an option here to re-establish the
sub-sample on which the particular model was estimated).
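In script terms that re-establishment can be approximated by
saving the model's estimation sample as a dummy right after
estimation (again just a sketch, assuming the $sample accessor):

  smpl WHITE > 0.5 --restrict
  ols 1 0 2 3
  series model_smpl = $sample   # 1 for observations the model used
  smpl full
  # ... later, re-impose the sample the model was estimated on
  smpl model_smpl --dummy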
* Now do "/Sample/Restrict" again and give the criterion
MEMNEA < 0.9, producing a different subsample
* View the original model again and try the Tests menu
I now get "model and dataset subsamples not the same", and again
the Tests menu is deactivated.
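(For reference, the sample manipulations above in script form;
saving a model as an icon and reloading a session are GUI-only
steps, so this just shows the sequence of sample states, with an
arbitrary specification standing in for the saved model:)

  open data4-10
  smpl WHITE > 0.5 --restrict    # sample the model is estimated on
  ols 1 0 2 3
  smpl full                      # "model is subsampled, dataset is not"
  smpl MEMNEA < 0.9 --restrict   # "model and dataset subsamples not the same"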
So what I'm interested in is what you're doing that is getting
around this automatic guard.
Allin