I'm a medical student from Italy, not an expert in statistics, so I
apologize in advance for any linguistic or technical errors.
I would like to ask whether there is a way, using Gretl, to perform a
series of separate logistic regressions with a single command, keeping the
same nominal variable and analysing its relationship with a series of
measurement variables imported from an Excel database. Let me explain: I
have a single nominal variable, A, and a series of measurement variables
B, C, D, ... (100 items). Usually I open Gretl, load the database, click on
"logit", "binary", select the nominal variable and the first regressor,
get the figures I need (OR, p-value, 95% CI), copy them, and paste them
into Word or Excel. Then I repeat it all with the second regressor, then
the third, and so on... a huge waste of time and energy. Is there a way to
avoid this exhausting task by automating the process, performing the whole
calculation once, and then getting a table with the results in columns? Do
you think it's possible? Could you help me with this task? Thank you in
advance.
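A loop along these lines should do the job. This is only a minimal hansl
sketch: the file name mydata.gdt, the variable names A, B, C, D and the
list REGS are placeholders to be adapted to the actual dataset.

```hansl
# sketch only: file and variable names are placeholders
open mydata.gdt              # dataset containing A (binary) and the regressors
list REGS = B C D            # extend this list to all 100 measurement variables
matrix out = {}
loop foreach i REGS
    logit A 0 $i --quiet
    scalar b  = $coeff[2]
    scalar se = $stderr[2]
    scalar zstat = b/se
    # odds ratio, p-value and 95% CI, all on the OR scale
    out = out | { exp(b), 2*pvalue(z, abs(zstat)), exp(b - 1.96*se), exp(b + 1.96*se) }
endloop
cnameset(out, "OR p-value lo95 hi95")
rnameset(out, varnames(REGS))
print out
```

The resulting matrix has one row per regressor and can be copied into
Excel in one go (cnameset/rnameset require a reasonably recent gretl).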
The EABCN guys have just made this year's version of the AWM database
available. I packaged it for gretl and you can find it at
I guess we could include this in the next release in place of AWM17,
couldn't we, Allin?
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
Please, I need you to enlighten me on panel cointegration using panel
ARDL. My questions are as follows:
1. Can we use stationary data for PMG and/or MG?
2. What are the ideal dimensions of T and N for using PMG and/or MG?
I will be waiting for your response.
Nice idea, Allin! I was thinking that, since numeric codes may prove
easier to handle in some circumstances, the ISO 3166-1 numeric codes
could also be included if it isn't much trouble (although I checked and
apparently that list is also flawed: Wurzistan is missing).
Professore Ordinario di Statistica Economica
Dip. di Scienze Statistiche
"Sapienza" Università di Roma
P.le A. Moro 5 - 00185 Roma - Italia
I've been working lately with some students running cross-country
regressions, and it struck me it would be nice to have a convenient
way to map between country names and the two- or three-letter codes
for countries -- the former are often found in cross-country data
files but the latter are generally more usable in plots.
So we now (in git and snapshots) have a function, isocountry(), to
do that job using the ISO 3166 country information. We handle three
elements from ISO 3166:
1 Country name
2 2-letter code
3 3-letter code
The basic idea is that we take one of these as input and return one
of the others (or the same one, see below) as output.
The function (not yet formally documented) takes two arguments:
* A string, or array of strings (required)
* An integer (optional, may be omitted)
It returns either a string or an array of strings, matching the type
of the first argument.
The second argument (1, 2, or 3), if supplied, says which of the
three forms above you want as output. If the optional argument is
omitted, the default is to convert from 1 to 2 (if form 1 is given on
input), from 2 to 1 (if 2 is given), or from 3 to 1. If the input is
neither two nor three upper-case letters, we try to interpret it as
part of a full country name. In that case we try to return the
country's 2-letter code, but a second argument of 1 can be given to
return the full name instead (if a partial match is found).
Here are some examples:
? eval isocountry("Bolivia")
? eval isocountry("Bolivia", 3)
? eval isocountry("GB")
United Kingdom of Great Britain and Northern Ireland
? eval isocountry("GB", 3)
# try some abbreviations
? eval isocountry("Vanua")
? eval isocountry("Zimb")
? eval isocountry("Zim", 1)
? eval isocountry("Wurzistan")
isocountry: 'Wurzistan' was not matched
? strings S = defarray("ES", "DE", "SD")
? strings C = isocountry(S)
? print C
Array of strings, length 3
? C = isocountry(S, 3)
? print C
Array of strings, length 3
This is implemented as a "plugin", so the ISO 3166 table does not
bloat gretl's memory footprint unless you have reason to use it.
Comments welcome, if anyone has ideas for making this more useful.
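As an illustration of the cross-country workflow mentioned at the top,
here is a small sketch (the country names are purely illustrative):

```hansl
# illustrative only: map full names to 3-letter codes for plot labels
strings names = defarray("Italy", "France", "Germany")
strings iso3 = isocountry(names, 3)
print iso3
```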
Please, I need you to put me through on two things.
First, how can I save a correlation matrix using hansl syntax?
Secondly, I want to perform forecasting after estimating via maximum
likelihood. For example, as shown below,
mle ll = "likelihood function"
series e = GDPrate - constant
If I want to perform a dynamic forecast or a one-step-ahead forecast, what
additional syntax is needed?
I hope to receive your response soon.
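On the first point, a minimal hansl sketch (assuming two existing series
y and x; mcorr() and mwrite() are standard hansl functions):

```hansl
# sketch: correlation matrix of two series, saved to a text file
matrix C = mcorr({y, x})
cnameset(C, "y x")
mwrite(C, "corrmat.txt")   # written to the current workdir
```

On the second point, as far as I know fcast is not available after mle,
so forecasts are generally built by hand from the estimated parameters
in $coeff.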
I am running a system GMM estimator for dynamic panel data models.
I need to perform some computations after the second-step estimation that involve the matrix of instruments, the matrix of regressors (both differenced and in levels), and the weighting matrix used in the computation of system GMM (obtained from the first-step estimates).
Is it possible to recover this information after the dpanel command?
Would it also be possible to compute the Ahn & Schmidt nonlinear panel data estimator in gretl? Should I program it myself using gmm?
Dear Allin and Riccardo,
I would like to report some problems I encountered using the DBnomics database from Gretl. They are the following:
a) If the data series does not exist, it crashes Gretl. Try to download: Eurostat/ei_bssi_m_r2/M.BS-ESI-I.SA.IE (Ireland does not report the Economic Sentiment Indicator)
b) If you try to download data with some missing values it stops with an error; try:
data Eurostat/namq_10_gdp/Q.CLV10_MEUR.NSA.B1GQ.DE --name="DE_y_nsa"
data Eurostat/bop_iip6_q/Q.MIO_EUR.FA.S1.S1.N_LE.WRL_REST.DE --name="DE_niip"
Here the problem is that Q.MIO_EUR.FA.S1.S1.N_LE.WRL_REST.DE reports only Q4 values for the years 1999 - 2003, before it starts reporting all quarterly values from 2004 onwards.
I guess the crash should be considered a bug. It would be very helpful if you could fix at least the first problem. My system is Win 10 with Gretl 2018b.
Many thanks for the good work.
I am glad to inform you that Gretl is gaining more ground in my country,
Nigeria, nowadays. However, I would like to make a few suggestions.
I think there is a need to develop a special GUI for SVECM, supporting
both long-run and short-run restrictions. If possible, it could be linked
with the VECM output in order to fetch its estimates from the beta and
alpha matrices.
Moreover, I suggest that Gretl should be developed to handle DSGE modeling.
I think my opinion makes sense and I wish to hear from the developers
(Allin and others) soon.
I will surely let you know shortly
Date: Mon, 26 Nov 2018 18:06:06 -0500 (EST)
From: Allin Cottrell <cottrell(a)wfu.edu>
To: r.lucchetti(a)univpm.it, Gretl list <gretl-users(a)lists.wfu.edu>
Subject: Re: [Gretl-users] Saved results from dpanel
Content-Type: text/plain; charset=US-ASCII; format=flowed
On Mon, 26 Nov 2018, Riccardo (Jack) Lucchetti wrote:
> On Sat, 24 Nov 2018, Laura Magazzini wrote:
>> I am running a system GMM estimator for dynamic panel data models.
>> I need to perform some computations after the second step estimation that
>> involve the matrix of instruments, the matrix of regressors (both
>> differenced and level), and the weighting matrix used in computation of the
>> system GMM (obtained from first step estimates).
>> Is it possible to recover these information after the dpanel command?
> In normal circumstances, the info you need is not useful to the user and I'd
> avoid storing it into the $model bundle because typically these are very
> large matrices.
> On the other hand, it would be very time consuming for the user who needs
> those matrices to reconstruct them in hansl, so Laura's request is perfectly
> legitimate IMO. So, I'm attaching a patch which introduces an option to the
> dpanel command named --keep-extra. After applying the patch, if you run the
> dpanel command with the --keep-extra option, you get two new elements in the
> $model bundle, named "Z" (the instrument matrix, transposed) and "A" (the
> weighting matrix).
> I haven't pushed this to git yet because I think this is a rather sensitive
> change and I'd like Allin to approve it (especially because there's a fair
> chance I've f****d up memory allocation, as I regularly do when I write C
> code).
The patch looks good to me, and I've now committed the change.
Perhaps Laura could tell us if the new "A" and "Z" matrices in the
$model bundle -- after estimation with "dpanel" and the --keep-extra
option -- do the job for her.
(But note that this update is not yet in the gretl snapshots, only
the git source code. Snapshots should follow tomorrow.)
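For reference, a minimal sketch of the new option in use (abdata.gdt is
the Arellano-Bond sample file shipped with gretl; the exact dpanel
specification here is illustrative, not a recommendation):

```hansl
open abdata.gdt
dpanel 1 ; n w k --system --keep-extra
bundle mb = $model
matrix Z = mb.Z    # transposed instrument matrix
matrix A = mb.A    # first-step weighting matrix
```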
[Geek note on memory management: the new code sticks the extra
matrices onto the model using gretl_model_set_matrix_as_data().
The first thing we need to know is: does passing a matrix via this
function amount to "donating" the matrix to the model, or will it
take a copy of the matrix passed? The fact that the matrix parameter
to the function in question is _not_ marked as "const" suggests
strongly that it's a case of donation, and that's correct. Next
question: would it be OK to donate the source matrix? In this case
the answer is No, since both matrices are members of a "matrix
block" and so do not have independently allocated memory. Therefore
we must copy the relevant matrices before passing them. And that's
exactly what Jack's code does, so it's perfectly correct!]