Hello, gretl developers.
I'm trying to start a translation of the help files into Russian on
launchpad.net, as it seems to be the most suitable tool for everyone
familiar with econometrics but not with gettext, Linux, CVS, etc.
The problem is that there is already a project for gretl on Launchpad,
and starting more than one project for a single program is strictly
prohibited. I have been unable to contact Constantine Tsardounis for
about a month, so I think it is time to re-assign that project to
someone else. On the IRC channel of Launchpad I was told:
"Our admins can re-assign the project to new owners but we'd prefer to
hear from the upstream owners. Can you get one of them to submit a
But if nobody among the main developers wants to register and do
something at Launchpad, it is possible to assign this function to me,
and in that case a letter to this list will probably be enough.
I have prepared a .po file for genr_funcs.xml and gretl_commands.xml
with the help of the po4a utility and got 1511 strings for translation
(the strings are rather big).
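For anyone who wants to reproduce this, a po4a run along these lines
should extract the translatable strings. The `-f docbook` format choice
is an assumption about how the gretl help XML parses, and the output
file name is just illustrative:

```shell
# Extract translatable strings from the XML help files into a PO file.
# The docbook module is an assumption; gretl's help XML may need its
# own po4a format module or configuration.
po4a-gettextize -f docbook \
    -m genr_funcs.xml \
    -m gretl_commands.xml \
    -p gretl-help.ru.po

# Check how many strings were extracted (should be around 1511).
msgfmt --statistics -o /dev/null gretl-help.ru.po
```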
Good luck, Ivan Sopov.
P.S. My previous letter about using launchpad for translation is
As some of you know, we're currently experimenting with openmp in
gretl. When building from CVS, use of openmp is the default (if
openmp is supported on the host) unless you pass the option
--disable-openmp to the configure script. In addition the current
snapshots for Windows and OS X are built with openmp support
(using gcc 4.4.3 and gcc 4.2.4 respectively).
This note is just to inform you about the state of play, and to
invite submission of test results if people would like to do that.
Right now, we use openmp only for gretl's native matrix
multiplication. So it'll get used (assuming you have at least two
cores) if you do matrix multiplication in a script, or call a
function that does matrix multiplication (such as qform), or use a
built-in command that happens to call matrix multiplication. If we
decide it's a good idea, we could use openmp directives in other
gretl code (but as long as we rely on lapack for much of our
number-crunching, and as long as lapack is not available in a
parallelized form, the scope for threading will remain somewhat
limited).
In a typical current use situation, with gretl running on a
dual-core machine where there's little other demand being placed
on the processors, the asymptotic speed-up from openmp should be
close to a factor of two. However, it takes a big calculation to
get close to the asymptote, and we've found that with small to
moderate sized matrices the overhead from starting and stopping
threads dominates, producing a slowdown relative to serial code.
This is similar to what we found with regard to the ATLAS
optimized blas; see
Anyway, in case anyone would like to test I'm attaching a matrix
multiplication script that Jack wrote. Right now this is mostly
useful for people building gretl from source, since you want to
run timings both with and without MP, which requires rebuilding.
But if you're currently using a snapshot from before yesterday
(build date 2010-03-21 or earlier) you could run the script, then
download a current snapshot and run it again.
I'm thinking that it's probably time to put out gretl 1.9.0,
which, in light of discussions here a while back, will be the
first (and possibly the last, but we'll see) in a series leading
towards gretl 2.0. (And as such it will spit out warnings for
script constructions that are deprecated and will be removed in 2.0.)
Looking at the change log, we seem to have accumulated enough
fixes and features to justify a release:
If anyone knows of any show-stopper bugs that really should be
fixed before releasing 1.9.0, please speak up. And I'd be grateful
if people could find the time to beat on the current CVS/snapshot,
in case any recent changes have broken things.
On Fri, 30 Apr 2010, Talha Yalta wrote:
>>>>> Also, I know that gretl has a tendency to have plot titles in small
>>>>> letters (although there are exceptions such as the estimated density
>>>>> plot which has the E capitalized). Is there a reason for this? I don't
>>>>> know what most people would say but my personal preference is to have
>>>>> such titles in title format (first letter capitalized).
>>>> Right-click -> Edit
>>> I don't even have to do this because such titles are all in title
>>> format in the Turkish translation ;-)
>>> But seriously, is this something undebatable? How about an option in
>>> the preferences, if not too difficult to implement that is.
>> Nothing is undebatable, in the proper context. But do you really think gretl
>> should have a capitalisation *policy*?
> Why not? I think most people would agree that accurate and
> professional looking visualization is extremely important for a
> scientific package (not that I wish to claim the current plots look
I beg to differ. In my humble opinion, "accurate and professional looking
visualisation" (which is difficult to define anyway: I am a professional,
so are you; I like small letters, you like capital letters. Who's right?
Po-tay-to, po-tah-to) has its place, but is dwarfed in importance by
computational speed, accuracy, breadth of methods, quality of
documentation and so on. If capital letters in plots are so "extremely
important" to deserve a formalised policy, my imagination struggles to
find something that isn't.
Riccardo (Jack) Lucchetti
Dipartimento di Economia
Università Politecnica delle Marche
In the "Graph specified vars" menu, all plots except Q-Q plots and X-Y
with control are missing automatic plot titles. I think it would be
worthwhile for all gretl plots to have a suitable title by default.
Also, I know that gretl has a tendency to have plot titles in small
letters (although there are exceptions such as the estimated density
plot which has the E capitalized). Is there a reason for this? I don't
know what most people would say but my personal preference is to have
such titles in title format (first letter capitalized).
“Remember not only to say the right thing in the right place, but far
more difficult still, to leave unsaid the wrong thing at the tempting
moment.” - Benjamin Franklin (1706-1790)
Date: Fri, 16 Apr 2010 18:14:08 -0400 (EDT)
From: Allin Cottrell <cottrell(a)wfu.edu>
Subject: Re: [Gretl-devel] NA and nan: next steps
To: Gretl development <gretl-devel(a)lists.wfu.edu>
Content-Type: TEXT/PLAIN; charset=US-ASCII
On Sat, 17 Apr 2010, Sven Schreiber wrote:
> Allin Cottrell schrieb:
> > On Wed, 14 Apr 2010, Gordon Hughes wrote:
> >> Can I raise a dissenting voice? Do you REALLY want to expend the
>>>> effort to distinguish between NA and NaN in every single procedure
> >> and (presumably) every function, etc? It would be even worse if you
> >> added +/-Inf. My reaction is that there are better ways to spend
> >> time in developing the program.
> > I have no desire to spend a lot of time in this area. I suspect
> > there's an intractable problem here, which has to be resolved by
> > fiat. In principle, NA and NaN are different things, which is
> > particularly apparent in the case of evaluating 0*NA versus 0*NaN.
> > The statistical programs that we've had reports on to date on this
> > list resolve the issue by treating NAs as if they were NaNs;
> I don't mean to suggest any implication for gretl's development here,
> but it seems to me that this statement is not correct as regards Octave
> and R; at least from what quick googling revealed to me, since I'm not
> an expert in either of those packages. Both Octave (/Matlab) and R seem
> to distinguish NA and NaN (and I guess even +-Inf) AFAICS.
> OK, they may make _some_ distinction, but if they evaluate 0*NA as
> NA (as we've heard) then they are not doing it right.
Except if 0 is a dummy variable. In that case 0*NA = NA: 0 is not really
zero in ordinal data--it's arbitrary and just indicates a category.
Otherwise recoding the dummy variables changes the effective data set and
leads to unexpected results. If 0 is really zero in the cardinal sense then
gretl handles this correctly and R/Octave do not. If 0 is a categorical
variable gretl gets it wrong and Octave/R get it right. One solution would
be to declare dummy variables as factors (like Jeff Racine does in his
semiparametric models) so that the handling of the interactions could be
right in both instances.
I like what gretl does for cardinal data. But I'm not sure this is what I
would want for ordinal/categorical data.
I've looked further into how R handles NA/Nan/Inf (for
floating-point data). It has its own logic but I'm not sure it's
very intuitive, or something we'd want to emulate.
Yes, R does have distinct internal representations for NA and NaN.
But in some ways NaNs are treated as a subset of NAs, while in
other ways NAs are treated as if they were NaNs. If you do
x <- 0/0 # or
x <- log(-1)
you get a value that gives TRUE for both is.nan(x) and is.na(x).
If you do
x <- NA
you get a value that answers TRUE to is.na(x) but FALSE to is.nan(x).
NaNs are treated like NAs in that they are automatically skipped
by default when running a linear regression via lm(). Infinities
are not treated the same way. That is, if you have data vectors x
and y and you define one of the x-values to be log(-1) you get a
warning:
In log(-1) : NaNs produced
but the relevant observation is skipped by lm(), as in gretl. If
you define an x-value to log(0) (producing -Inf) you get no
warning but the regression fails with
Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok,
NA/NaN/Inf in foreign function call (arg 1)
And (as we already knew) NAs are treated as NaNs in that 0*NA = NA.
Can I raise a dissenting voice? Do you REALLY want to expend the
effort to distinguish between NA and NaN in every single procedure
and (presumably) every function, etc? It would be even worse if you
added +/-Inf. My reaction is that there are better ways to spend
time in developing the program.
Anyone learning statistics or econometrics rapidly comes across the
need to deal with missing values of several different
kinds. Recognising that lx=log(x) for x <=0 causes a problem is a
very elementary and early lesson. Masking it by, for example,
setting lx=-Inf is likely to mislead. Since this would be very
different from what most other programs do, it is likely to generate
more problems of consistency across analyses performed using
different programs.
As far as I understand, the original argument was generated by the
question: should 0*NA be 0 or NA? My personal view is that it should
always be NA because it is always possible for the user to override
this result by explicitly recognising that NA is really NaN and using
conditional generate statements. The default of generating a missing
value when there is any doubt is easily addressed by the user if a
different result is required, but it is much safer for the unwary.
>If we want to distinguish between true NAs and nan/inf (as we
>probably should), some other design questions come up, as a
>consequence of the fact that we would be allowing non-finite
>values in series and scalar variables. (Unless, that is, we make
>it an error to put non-finite values into such variables.)
>I presume that in simple, per observation, calculations such as y
>= log(x) or y = x*z we'd want to let IEEE rules prevail, but what
>about more complex calculations?
>At present we automatically exclude observations with NAs from
>regression calculations, means and variances and so on. Should we
>do the same for nan/inf, or should we let IEEE rules prevail -- or
>should we add a "set" switch to control this?
>A practical use case is this:
>series lx = log(x)
>ols y 0 lx
>where the series x contains non-positive values. Right now the bad
>log x values are converted to NA and skipped. If we leave them as
>nan or -inf then what should we do?
Dear Gretl Team,
When I run the normality test for a VAR (model window, Tests -> Normality of
residuals) I get the results only in English. Is this the desired behaviour?
Henrique C. de Andrade
Doctoral candidate in Applied Economics
Universidade Federal do Rio Grande do Sul