I would like to express my concern regarding the recent modifications
to gretl that fix some of the diagnostic tests for a saved model based
on a different sub-sample. The current version of gretl indeed no
longer produces erroneous output in these tests because, when one
reopens a previously estimated model, gretl issues an error message
such as "dataset is subsampled, model is not" or "model and dataset
subsamples not the same" and disables the Tests menu altogether. IMHO,
disabling the functionality in this way is not a good solution.
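For illustration, here is a minimal hansl sketch of the kind of
workflow that runs into this (the dataset and variable numbers are
arbitrary examples, not part of the original report):

```
# Minimal sketch of the problematic workflow (assumed example).
open data4-1            # any sample dataset shipped with gretl
smpl 1 10               # restrict estimation to a sub-sample
Model1 <- ols 1 0 2     # estimate and save the model as a session icon
smpl full               # restore the full sample
# Reopening Model1 from the session icon view at this point reports
# one of the quoted error messages and disables the Tests menu,
# instead of restoring the sub-sample the model was estimated on.
```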
What would be really neat, and is to be expected of a high-quality
program such as gretl, is for the program to remember the sub-sample
for each model saved as an icon and to restore that sub-sample
automatically when the model is revisited.
I understand that this may not be easy to implement, but it is really
the only correct solution. I think the whole concept of sessions is
based on the expectation of such behavior: if I have a bunch of saved
models, I expect to be able to revisit any of them later, run some
tests, and compare results without difficulty. The current fix makes
this unnecessarily difficult. Moreover, the current behavior is
especially problematic because sub-samples can be drawn randomly. If I
have a model saved with a random sub-sample and want to revisit it or
compare results from a test, the program essentially tells me that
this is no longer possible, rendering the saved model useless.
Maybe this issue needs some rethinking?
Cheers
A. Talha Yalta
--
“Remember not only to say the right thing in the right place, but far
more difficult still, to leave unsaid the wrong thing at the tempting
moment.” - Benjamin Franklin (1706-1790)
--