As some of you noticed, the gretl wiki had become a receptacle for all
kinds of spam known to man (and then some), to the point that in
practice it was impossible to clean up.
My decision was then to scrap it and start anew. I managed to save a few
pages from the old wiki, but I guarantee nothing: some pages may have been
deleted forever. Sorry about that.
The main change from the old wiki is that the creation of user accounts
will be subject to my approval: there will be some bureaucracy involved,
but that is the price to pay to keep spammers out.
The new wiki will be online in a few days.
Riccardo (Jack) Lucchetti
Dipartimento di Economia
Università Politecnica delle Marche
I think we still have some problems with .xlsx files: sometimes Gretl
(version 1.9.7 / build date 2012-03-01 / Windows Vista) can't see the
data structure. This is an intriguing issue, because even when the
spreadsheets are (at least visually) identical, Gretl behaves
differently.
Please take a look at the Teste.xlsx file attached. In that file we have
two sheets with the same information. With the command,
open C:\Users\f1831737\Desktop\Teste.xlsx --rowoffset=8 --sheet="IBC-Br SA"
Gretl doesn't define the data structure (time series). But with the command,
open C:\Users\f1831737\Desktop\Teste.xlsx --rowoffset=8 --sheet="IBC-Br SA
Gretl defines my data as a time series.
If I try to open it via the GUI I get the following messages/results:
'IBC-Br SA' -> worksheets/sheet1.xml
Found 6 variables and 385 observations
And Gretl cannot see the correct data structure. The same command on the
second sheet gives:
'IBC-Br SA' -> worksheets/sheet1.xml
didn't get worksheet
xmlParseFile falhou em xl\rId2
And Gretl doesn't open the data.
Allin, sorry for bugging you (and all the list members) again with this
subject, but I think ease of opening data is one of gretl's most
important features and, because of that, it's very important to keep it
working correctly.
I've now added the following in CVS:
(a) If you ctrl-click, shift-click or swipe in the main window to
select more than one series, then right-click, one of the options is
"Define list". Choose this, type in a name, and you've got a list.
(b) In the model specification dialog, named lists are shown along
with series in the left-hand "available vars" box.
This is not yet in the snapshots for Windows and OS X; I'd like to
see a little testing first.
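For comparison, the scripting equivalent of the new GUI action (the series names below are just illustrative) is simply:

```hansl
# define a named list from existing series (illustrative names)
list xlist = x1 x2 x3
# the list expands to its member series in estimation commands
ols y const xlist
```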
A few design decisions arose:
(1) How should lists be visually distinguished from series in the
selector? I could colorize list names, but right now they appear in
bold face. This seemed to me slightly more suggestive of what's
going on (list = bulkier object than series).
(2) Where should lists be placed in the selection box? I've placed
them after the constant (which by gretl convention always comes
first, if applicable) but before the series. My thinking is that
lists in the GUI are likely to be most useful when the dataset
contains many series, in which case you wouldn't want to have to
scroll down past all the series to find your lists.
(3) What should happen with the business of "pre-select the first
variable on the left" (skipping the constant) in the selection box?
(I mean, when you open the dialog, one series is already selected
for you.) Well, the rationale for this is that the most efficient
way of using the model spec dialog from scratch is: first
double-click on your dependent variable, then swipe and right-click
to define a set of independent variables -- and in many datasets the
first series is the one that one is likely to wish to take as
dependent. Since a list can't serve as dependent variable I've made
it so that the first plain series is pre-selected.
(4) This is larger: What should happen on the right when one clicks
a list from the left into a right-hand side box? The most basic
question is, should the list be cashed out into its member series,
or should it appear on the right as a single element? Right now,
it's the former: you see the member series names on the right. There
are things to be said on both sides of this; let me elaborate.
First, cashing out the list right away is least disruptive of the
current functionality in the C code module that supports the model
spec dialog. We check that the list truly exists and contains valid
series ID numbers, then from that point on it's just as if the user
had put in the list-member series individually.
One might think that keeping the list as a list on the right would
make things cleaner for the user -- a shorter, easier-to-read
display on the right; and one could select a list and pull it out
again if one wanted. But this would raise several other design
problems; here are two examples:
* What if the box on the right already contains one or more of the
series in a list that is added subsequently? Right now it's easy: in
adding a list, we in fact add only those series that are not already
present. If we were to represent the list as a list on the right,
we'd have to kick out any duplicated series. Then what if you remove
the list? Should we put the duplicated series back in?
* The lag selection mechanism: how should lists be handled in that
context? (Hint: it's already very complicated.)
Department of Economics
Wake Forest University
[ok now *really* to gretl-devel]
On 06.03.2012 10:59, Sven Schreiber wrote:
> I'm wondering whether there may be a case for extending the princomp()
> function somewhat to get direct access to the eigenvectors (via an
> optional third matrix parameter in pointer form, for example). Sure,
> it would be sort of redundant from a syntactic point of view, but
> since those eigenvectors have to be computed anyway it could be an
> efficient way to get them.
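In the meantime, one workaround under current hansl is to go through eigensym() on the correlation matrix. A sketch (mnormal() stands in for the actual data matrix):

```hansl
matrix X = mnormal(100, 5)          # stand-in for the data matrix
matrix V                            # will receive the eigenvectors
matrix lam = eigensym(mcorr(X), &V)
# eigensym returns eigenvalues in ascending order, so the loadings
# of the first principal component sit in the last column of V
scalar k = cols(V)
matrix v1 = V[,k]
```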
[p.s. to gretl-devel only:]
A general syntactic question about those pointer-style arguments: given
that hansl is usually very concise, is it really always necessary to
define/initialize the object on a separate line?
What I mean is: instead of first declaring/initializing the object on
its own line and then passing a pointer to it, would it be possible to
declare it inline in the function call, with something like
'&[matrix m]'? The added '[matrix ...]' is the point -- I am not
proposing to use the old syntax and simply create the argument on the
fly, guessing its type, which would not catch typos, for example (when
previously defined objects are really meant).
This is really just about saving the extra line and hopefully making
hansl even more readable. I know tastes will differ here (especially
those of the C people...).
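To make the contrast concrete, here is a sketch using eigensym() as a convenient example of a function with a pointer argument (A stands for some symmetric matrix; the second form is the proposed syntax, not valid hansl today):

```hansl
# current style: the target object needs its own declaration line
matrix V
matrix lam = eigensym(A, &V)

# proposed style: declare the object inline in the call
# (not valid hansl today)
matrix lam = eigensym(A, &[matrix V])
```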
Second instalment on design suggestions.
Lee Adkins suggested giving named lists of series a more integral
place in the gretl GUI (e.g. in the model specification dialog); and
Jack suggested an easy way of defining a list in the GUI, namely
selecting a bunch of series in the main window, right-clicking, and
choosing "Make this a list" (or some such).
Jack's suggestion would be easy to implement, but before I do that
we need to think through exactly how we might use lists in the GUI.
At present lists are mostly a scripting thing: the main (only?)
exception is the use of lists as arguments to user-defined functions.
If a function package with a GUI interface wants a list of regressors,
for example, the function dialog provides a means of defining a list
(or selecting from among any lists that are already defined).
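A minimal example of a list as a function argument (series names illustrative, assuming a dataset is loaded):

```hansl
# a user-defined function taking a list argument
function scalar nregs (list L)
    return nelem(L)
end function

list xlist = x1 x2 x3    # assumes these series exist
scalar k = nregs(xlist)  # number of members in the list
print k
```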
How might named lists be brought into the existing model
specification dialog (for built-in estimators)? My first thought is
that the current model specification dialog is nicely simple in some
cases (OLS) and about as complicated as is reasonable in others
(ARMA). We would not want (I think) to introduce an additional
selection box showing lists alongside that showing series. Could we
include lists in the "available series" boxes but colored
differently (series in black, lists in blue or red or something)?
I'm not sure, but I don't want to invest effort in this unless and
until we have a plausible design concept that doesn't make the model
spec dialog fall over from excessive complexity.
This might be more of a user question than a development one, but ....
When I use a list in the mle function I'm getting an error. Here is a
simple script to estimate Harvey's multiplicative heteroskedasticity
model:
logs C Q PF
series l_Q_sq = l_Q^2
list z = const LF
list x = const l_Q l_Q_sq l_PF
list y = l_C # or series y = l_C
# start values
ols y x --robust
matrix beta = $coeff
scalar n = $nobs
series lehat = ln($uhat^2)  # log of squared OLS residuals
ols lehat z
matrix gam = $coeff
mle loglik = -n/2 * ln(pi) - (.5)*zg - (.5)*(e^2/exp(zg))
series zg = lincomb(z, gam)
series e = y - lincomb(x, beta)
params beta gam
end mle
This routine works, but only if y is defined as a series rather than a
list (the 'list y = l_C' line). Why?
I'm using gretl 1.9.7 on Windows, btw.
Professor of Economics
> These people were from State Planning Agency. They told me that they
> have about 60 series (which have different levels of collinearity) and
> they use SPSS to do principal components analysis to create regional
> development indices (Turkey has 81 provinces). I am not very familiar
> with pca but I can call them and learn more.
I think you just need the eigenvectors. Put all the series into a
matrix, get the eigenvectors, peel off the first few, and put these
into series; these are the principal components. Estimate the model,
then perform the inverse transformation on the coefficients to get back
to the coefficients in the original data rotation, if desired.
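In hansl the recipe above might look like this (a sketch: xlist and y are illustrative names, and princomp() is used in its existing two-argument form):

```hansl
matrix X = { xlist }          # put the series into a matrix
matrix P = princomp(X, 2)     # scores on the first two components
series pc1 = P[,1]            # peel off the components as series
series pc2 = P[,2]
ols y const pc1 pc2           # estimate using the components
```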
> >> 2)- Some students suggested (and all others agreed) that it would be
> >> very useful to have a predict command, which will provide predicted
> >> values as well as slopes (given Xs) for various nonlinear models such
> >> as polynomial regressions, logit, probit etc. I think this could be
> >> nice to have as a command as well as a GUI entry next to the forecast
> >> item. Maybe a small goodie to consider for the 2.0 release? They said
> >> Stata has this.
> > I don't see what the difference is between "predicted values"
> > and what we offer already (in sample fitted values and
> > out-of-sample forecasts). Can you expand on what you mean?
> Now, this may just be me not knowing how to fully use gretl in this
> context. The issue arose on two occasions:
> (1) I had a polynomial regression and I was showing them how to
> enter, via the GUI, something like:
> prediction = $coeff + $coeff*x + $coeff*x^2
> (2) I was showing an ordered logit example and I had long commands like:
> pcut0 = 1 / (1+exp(-$coeff-x*$coeff)+exp(-$coeff-x*$coeff))
> pcut1= exp(-$coeff-x*$coeff) /
> pcut2= exp(-$coeff-x*$coeff) /
> ...and they said Stata (supposedly) has a command where you enter x
> and get the prediction and slope for different models :-P
Indeed Stata does, and it is very, very slick. I think it has taken the
Stata team a long time to get it to the point where it is now. That
said, it's not very hard to do in gretl on a model-by-model basis. See
chapter 16 of http://learneconometrics.com/gretl/using_gretl_for_POE4.pdf
for some simple examples. The ability to jump seamlessly between
matrices and gretl results makes it amazingly easy to use. In it there
are a few examples of computing marginal effects, and even standard
errors for average marginal effects. A more competent programmer could
do something more elegant and general, but it's not very hard to do on
a case-by-case basis.
Although, as Talha says, a 'button' or function is nice since most users
don't appreciate the challenge of getting these like some of us do...:)
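For example, average marginal effects for a probit can be hand-rolled in a few lines of hansl (variable names illustrative; assumes a binary series y and a regressor x):

```hansl
probit y const x
matrix b = $coeff
series xb = b[1] + b[2]*x     # linear index
series me = dnorm(xb) * b[2]  # per-observation marginal effect of x
scalar ame = mean(me)         # average marginal effect
printf "AME of x: %g\n", ame
```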
One of the things that I've noticed about Stata's development is that they
have expanded many of the commands to encompass almost every model and they
continue to add functionality to them with each new version. For instance,
predict is used to get in-sample model predictions, residuals, variances
(e.g., from arch and garch), and to generate both static and dynamic
forecasts. margins computes different types of marginal effects (discrete,
continuous, average, at specific points, at the averages, and so on
depending on the options used). In fact, these commands do so much it
makes them a little hard to use correctly, IMO--for instance, it took some
time to figure out exactly how and which marginal effects were being
computed for the qualitative choice models.
So, that presents something of a design choice. Should one use commands
that apply to all circumstances (even though they may work differently
under the hood, depending on the model being estimated), or should
estimation routines have their own sets of post-estimation commands for
tests, marginal effects, and so on? I don't know the answer to this.
Stata flirts with both models: ubiquitous (margins, predict) and
model-specific (estat, etc.).
Open source suggests perhaps more modularity than one gets with
proprietary software like Stata (though nearly everything it does is
executed in .do or .ado add-ons). What we discussed in Torun was the
idea that Allin and Jack would work on the backbone and that others
would try to develop the expertise with the bundle concept to add
specific functionality. So, the question is: is enhancing prediction or
marginal effects a backbone issue or an add-on? (I'm not sure.)
My idea of a backbone issue would be the introduction of factor
variables (for the 2.0 release). Under that scheme, variables are
defined as being continuous or discrete, and they can be interacted in
various combinations (continuous-discrete, continuous-continuous and
discrete-discrete) very easily, as within Stata's .do files. For us, it
may solve an issue with the handling of missing values (0 or NaN) and
would permit a very straightforward way to interact variables and
compute marginal effects in interaction models. Just a thought....
> “An expert is a person who has made all the mistakes that can be made
> in a very narrow field.” - Niels Bohr (1885-1962)
Professor of Economics
Thanks to Lee Adkins and others for the design ideas that have been
floated recently. I'll try to respond to these one by one.
Lee says (and I agree) that in an extended gretl GUI session it's
easy to get lost in a big stack of gretl windows. To counteract this
he suggests that the script editor be "tabbed" (so it can hold
several script files, one per tab), and maybe the model output
window should have the same treatment.
Personally I don't much like tabbed windows. With scripts I generally
like to have the windows side by side, or partially overlapped, to
facilitate copy/paste and comparison, and for model output I like to be
able to compare side-by-side or above-and-below. But I realize that
some people do like tabs, and I've started to experiment with a tabbed
script editor (not ready for public testing yet).
In the meantime I've been working on another way of overcoming the
"lost in a stack of windows" effect; the results are now in CVS.
For some time (I don't know how many people have noticed) some gretl
windows have had a little "Windows" icon (it shows two overlapped
windows) in the toolbar. If you click on the icon it pops up a list
of your open gretl windows -- and if you select one it comes to the
top of the stack.
Up till now that hasn't been very useful: many windows didn't have
that icon so you couldn't rely on it as a means of navigation. What
I've done recently is (a) put that icon into almost all windows and
(b) spruce up the appearance of the pop-up list a bit.
To expand on "almost all windows": this icon should now appear in
all windows that have an icon-based toolbar (if it doesn't show in
any of those, that's a bug). Graphs haven't had a toolbar up till
now, but I've added a discreet one in the bottom right corner. The
main gretl window had window-list functionality, but uselessly
buried under the "View" menu; I've now moved it to the main toolbar.
And in the model window, which has a menu bar rather than an icon-based
toolbar, I've added the same functionality via a top-level "Windows"
menu item.
Please try it and tell me what you think. The idea is that from any
one gretl window (with a few relatively unimportant exceptions) you
should have a quick means of navigating to any other gretl window.