On 19.06.2012 19:17, Riccardo (Jack) Lucchetti wrote:
> That said, I have the feeling that, in order to effectively use the
> datasets Allin is referring to, we need an extra ingredient (which,
> IMHO, is THE feature that made Stata the killer package in some quarters
> of the econometrics profession): the ability to extract data sensibly by
> performing those operations that, in database parlance, are called JOINs.
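For concreteness, a minimal sketch of what such a JOIN does, using Python's standard-library sqlite3 module (the tables, column names, and figures are invented for illustration, not taken from any actual dataset):

```python
import sqlite3

# In-memory database with two hypothetical tables: a country-level
# series and a separate table of country metadata.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE gdp (iso TEXT, year INT, gdp REAL)")
con.execute("CREATE TABLE meta (iso TEXT, region TEXT)")
con.executemany("INSERT INTO gdp VALUES (?, ?, ?)",
                [("DE", 2010, 3417.0), ("IT", 2010, 2125.0)])
con.executemany("INSERT INTO meta VALUES (?, ?)",
                [("DE", "Europe"), ("IT", "Europe")])

# The JOIN: match each observation with its metadata via the key (iso),
# producing one combined row per matched pair.
rows = con.execute(
    "SELECT g.iso, g.year, g.gdp, m.region "
    "FROM gdp g JOIN meta m ON g.iso = m.iso"
).fetchall()
for r in rows:
    print(r)
```

The point is the extraction step: rather than manually aligning files, the user states the key and gets the merged observations back.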
Hi,
it's not directly about really big data, but since you are in the
process of revamping data handling, I would like to mention the issue of
real-time or revised time-series (and sometimes panel) data, where
several vintages of a given observation become available over time. As
you know, using this type of data has become standard in some areas,
such as empirical central bank reaction functions or forecast
evaluation.
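As an illustration of what the backend would have to support, here is a minimal sketch in Python (the structure and all numbers are made up for illustration): each value is stored per (observation period, vintage date) pair, and a query retrieves the value as it was known on a given date, i.e. the latest vintage not later than that date.

```python
# Hypothetical real-time store for one quarterly series: values keyed
# by (observation period, vintage date). Numbers are invented.
vintages = {
    ("2011Q4", "2012-01-15"): 0.2,   # first release
    ("2011Q4", "2012-04-15"): 0.3,   # first revision
    ("2011Q4", "2012-07-15"): 0.1,   # second revision
}

def as_of(store, obs, date):
    """Value of `obs` as known on `date`: the latest vintage <= date."""
    candidates = [(v, val) for (o, v), val in store.items()
                  if o == obs and v <= date]
    if not candidates:
        return None          # series not yet published on that date
    return max(candidates)[1]  # max over the vintage date

print(as_of(vintages, "2011Q4", "2012-05-01"))  # -> 0.3 (April vintage)
```

A conventional dataset keeps only the final row per observation; the conceptual complexity mentioned above comes from carrying the whole vintage dimension and resolving it at query time.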
It seems to me --though I haven't systematically checked this-- that no
other package, commercial or free, directly supports this, probably
because the data backend becomes conceptually more complex. This would
be a very attractive feature for gretl, but also a fairly fundamental
one, in the sense that it would probably require low-level changes.
Actually, I have been wondering whether the implementation of this
feature could or should be placed in a larger project context (possibly
with some external funding). I had intended to write up some more
concrete thoughts first, but then this thread popped up with the news
about the data handling in gretl, so I'm raising the issue here already.
I'm not sure whether the low-level changes you are currently making are
technically related to the real-time data issue, in the sense that they
would need to be adjusted later, or whether the two are independent.
cheers,
sven