Gretl project on the launchpad
by Ivan Sopov
Hello, gretl developers.
I'm trying to start a translation of the help files into Russian on
launchpad.net, as it seems to be the most suitable tool for
participation by people who are familiar with econometrics but not
with gettext, Linux, CVS, etc.
The problem is that there is already a gretl project on Launchpad,
and it is strictly prohibited to start more than one project for a
single program. I have not been able to contact Constantine Tsardounis
for about a month, so I think it is time to re-assign that project to
someone else. On the Launchpad IRC channel I was told:
Our admins can re-assign the project to new owners but we'd prefer to
hear from the upstream owners. can you get one of them to submit a
question here:
https://answers.edge.launchpad.net/launchpad
But if none of the main developers wants to register and take this on
at Launchpad, it would be possible to assign that role to me, and in
that case a message to this list will probably be enough.
I have prepared a .po file for genr_funcs.xml and gretl_commands.xml
with the help of the po4a utility and got 1511 strings for translation
(the strings are rather big).
Good luck, Ivan Sopov.
P.S. My previous letter about using launchpad for translation is
http://lists.wfu.edu/pipermail/gretl-devel/2009-November/002171.html
oxgauss
by Sven Schreiber
Hi,
I would like to ask whether it's worthwhile to enable the OxGauss
functionality in combination with gretl's Ox support. (OxGauss means
that Ox can run many existing Gauss programs.) It seems to me that basic
support would be relatively simple, since only a -g switch is needed; so
for running a Gauss program 'mygauss.prg' gretl would need to call:
<path/to/>oxl -g mygauss.prg
(instead of '<path/to/>oxl myox.ox')
I guess a further issue would be how to pass matrices to what would then
be Gauss code, but note that even without it I think it would already be
useful to be able to do:
<gretl-script>
store @dotdir/mydata.dat --jmulti
# jmulti's format should be same as Gauss (?)
foreign language=OxGauss
T = 100; # ugly hardcoding, but not the point here
k = 2;
load datamatrix[T,2] = mydata.dat; # hope OxGauss would find this
print "yeah";
end foreign
</gretl-script>
In terms of user interface I tend to think that no separate script class
for Gauss code (executed via OxGauss) should be introduced, since Gauss
in my view is a little obsolete. But maybe opinions differ on that.
BTW, the background in my case is that I want to build wrappers around
the break test codes of Qu & Perron (Econometrica 2007), Bai & Perron,
etc., which are only available in Gauss and a little lengthy; also,
I'm not sure whether their license would allow porting.
thanks,
sven
bug/requests collection
by Sven Schreiber
Hi,
the list has been busy recently with bug reports (and also some feature
requests), and it's absolutely understandable that not all bugs could be
fixed right away. (It still continues to amaze me that a sizeable
proportion of bug reports are addressed immediately by Allin.) So here's
a list of what may still be open issues. After clarification and
discussion I will transfer the remainder of this list into the
bug-tracker and feature-request databases.
thanks,
sven
--------
* icon for "code view" in the function package list window: change from
cogwheel to something more intuitive
* the icon of the function package list window (and others) in the
taskbar is only generic (non-gretl) on Linux (self-compiled CVS) --
actually, I don't get any icons for menu items on Linux (as opposed to
Windows); I suppose that's a bug in my setup?
* the help about invcdf() says P(X<x), but shouldn't it be P(X<=x)?
* the command 'include myfilenamewith.dots' fails even if
'myfilenamewith.dots.inp' exists (and is in the right place/dir),
apparently because gretl interprets .dots as a filename extension
* the "variable is being treated as discrete" behaviour should be made
optional rather than automatic (I thought that was already the case
after a discussion some months/years ago)
* function namespace bug; see
http://lists.wfu.edu/pipermail/gretl-devel/2009-December/002286.html
* Estimating one equation with 7 variables and 3 lags with OLS produces
a glitch with one of the variable names: Instead of 'Yield_10yr_1' the
name 'd_Yield_10yr' is printed. (Maybe it has to do with the
underscores in the name?)
* The command 'rmplot' is defined only for the GUI. I (=Ignacio) think
it would be very good if we could use it also in scripts.
* script accessors ($test etc.) for bootstrapped test results
* the exogenous variables in a Johansen test setting aren't reported;
and a warning should be printed that the critical values and p-values
are in general only appropriate in the case without exogenous variables
Subject: Re: simulation speed
by Gordon Hughes
A word of warning about running the BigCrush test. Looking through
the results of the first run I noticed that some of the tests
generate test statistics that would reject the relevant hypothesis at
the 1% significance level, even though all of the tests were reported
as having passed (since the criterion is p in the range
[0.001,0.999]). Since we are dealing with a random number generator
it is possible that one run may lead to no failures but another may
generate a number of failures.
Hence I ran the same test a second time. The execution times were
very similar (29h 40m vs 29h 41m). The second run reported a single
failure - Test 89 PeriodsInStrings, r = 20, with a p-value of
6.4e-4. So one should really run these tests several times to
get a proper assessment of the frequency of failures - a bit tedious
given the amount of time required but essential. Further,
comparisons across operating systems or hardware shouldn't be based
on a single run only.
I am now running the revised version of glibtest with the Dec 29th
version of glib.c and will report the results when the run finishes.
However, I have one initial observation. Sven reported that
the updated version of the ziggurat executes substantially faster
than the earlier version. The early tests in the BigCrush suite give
a different picture - the execution times are all slightly longer
using the new gretl_one_snormal than using the previous
ran_normal_ziggurat. Is this a consequence of the change needed to
use one and a quarter random ints per normal draw, since the previous
code in ran_normal_ziggurat seems to correspond to the Voss
procedure? As a rough guess the increase in execution time is of the
order of 5-10%.
Gordon
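
To put numbers on the point above: TestU01 counts a statistic as
passed whenever its p-value lies inside [0.001, 0.999], so even a
perfect generator will occasionally throw a "failure". The little C
sketch below is only a back-of-the-envelope illustration (the count of
reported p-values is a made-up placeholder, not the true number for
BigCrush, and the p-values are treated as independent); it shows the
expected number of flags in a clean run and the chance of seeing at
least one, which is why a single flagged test in one of two runs is
not alarming.

<C-code>
#include <math.h>
#include <stdio.h>

/* TestU01's pass criterion as described above: a p-value is
   unremarkable as long as it stays inside [0.001, 0.999]. */
static int testu01_pass (double p)
{
    return p >= 0.001 && p <= 0.999;
}

int main (void)
{
    int n = 250;          /* placeholder count of reported p-values */
    double tail = 0.002;  /* mass outside [0.001, 0.999] under the null */

    printf ("pass(0.5)    = %d\n", testu01_pass (0.5));
    printf ("pass(6.4e-4) = %d\n", testu01_pass (6.4e-4));
    printf ("expected flags in one clean run: %.2f\n", n * tail);
    printf ("P(at least one flag)           : %.3f\n",
            1.0 - pow (1.0 - tail, n));
    return 0;
}
</C-code>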
Subject: Re: simulation speed
by Gordon Hughes
I did not receive this message until the Big Crush test was well
advanced, so I let it finish.
It took almost 30 hours to run on the system that I described, and it
reported that all of the tests were passed. I will rerun the test on
the new CVS version when you are satisfied with it.
Gordon
>However, people might hold off for a bit with the crush tests. I
>think I now see what's going on. I'm no expert on RNGs, but here
>goes...
>
>1) The main speed advantage from ziggurat is that (in the original
>Marsaglia/Tsang version) it requires the generation of only one
>random 32-bit integer per normal sample, where Box-Muller requires
>two.
>
>2) However, Doornik points out that there's a problem with this,
>which emerges if you want to generate normally distributed
>"doubles" (64-bit floating point values -- the original ziggurat
>generated 32-bit "floats"). The trouble is that if you use just
>one set of 32 bits to select the ziggurat box or level and for the
>uniform value to be tested for acceptance at that level, you get a
>certain sort of subtle dependence, and the symptom is that the
>normal RNG fails Knuth's collisions test (on the regular crush
>suite as well as big crush).
>
>3) I took Jochen Voss's ziggurat code and adapted it for gretl,
>and it passed all the tests. So at first I thought we'd somehow
>got around Doornik's problem (maybe by using a better RNG for the
>initial uniform input).
>
>4) But then I took a closer look at the Voss code, and I see that
>it dodges the dependence problem by a means that Doornik mentions
>but does not recommend. Namely, from the initial 32-bit input it
>uses:
>
>7 bits for the ziggurat level
>1 bit for the sign
>24 bits for the value to be tested
>
>There's no dependence problem because Voss uses non-overlapping
>bits for the box selector and the test value (and so one can quite
>confidently predict that this generator will pass the collisions
>test), but the drawback is that one can then generate "only" about
>30 million distinct normal values.
>
>5) To get better coverage of the real line one can use one and a
>quarter random ints per normal draw. For example, one can use 7
>bits for selecting a box, one for the sign, and 30 for the test
>value, to get about 10^9 distinct normal values (with 2 bits of
>"wastage").
>
>I've now implemented this in gretl and I'm running the crush suite
>right now. If it passes (in principle it should, but I might have
>screwed something up in regard to the quarter-ints), I'll commit
>it to CVS and then people can try attacking it with big crush.
>
>Allin.
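
To make the bit accounting in points 4 and 5 concrete, here is a rough
C sketch of the two splitting schemes. It is only an illustration of
the arithmetic, not gretl's actual code: next32() is a stand-in for
whatever 32-bit uniform source sits underneath (rand() here, GLib's
Mersenne Twister in gretl), and which particular bits the real
implementation assigns to the box, sign and test value may well differ.

<C-code>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the underlying 32-bit uniform source. */
static uint32_t next32 (void)
{
    return ((uint32_t) (rand () & 0xFFFF) << 16) |
            (uint32_t) (rand () & 0xFFFF);
}

/* Voss-style split (point 4): one 32-bit word gives 7 bits for the
   ziggurat level, 1 bit for the sign and 24 bits for the test value. */
static void split_voss (int *box, int *sign, uint32_t *test)
{
    uint32_t u = next32 ();

    *box  = u & 0x7F;
    *sign = (u >> 7) & 1;
    *test = u >> 8;                 /* 24-bit test value */
}

/* "One and a quarter ints" split (point 5): each draw uses a fresh
   32-bit word plus 8 bits from a spare word refilled every fourth
   call, so 5 words serve 4 draws. Of the 40 bits, 7 go to the box,
   1 to the sign and 30 to the test value; 2 bits are wasted. */
static void split_quarter (int *box, int *sign, uint32_t *test)
{
    static uint32_t spare;
    static int spare_bits = 0;
    uint32_t u, extra;

    u = next32 ();
    if (spare_bits < 8) {
        spare = next32 ();
        spare_bits = 32;
    }
    extra = spare & 0xFF;
    spare >>= 8;
    spare_bits -= 8;

    *box  = u & 0x7F;
    *sign = (u >> 7) & 1;
    *test = (u >> 8) | ((extra & 0x3F) << 24);  /* 24 + 6 = 30 bits */
}

int main (void)
{
    int box, sign, i;
    uint32_t test;

    for (i = 0; i < 4; i++) {
        split_voss (&box, &sign, &test);
        printf ("voss   : box=%3d sign=%d test=%10u\n",
                box, sign, (unsigned) test);
        split_quarter (&box, &sign, &test);
        printf ("quarter: box=%3d sign=%d test=%10u\n",
                box, sign, (unsigned) test);
    }
    return 0;
}
</C-code>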
Re: [Gretl-devel] simulation speed
by Gordon Hughes
I have a spare Ubuntu machine (not very fast - dual-core 1.66 GHz)
that I can leave running on its own for 24 hours or longer without
any problem - I use it for Monte Carlo runs under Stata. However, I
simply don't have enough familiarity with either shell scripts or C
compilation to convert your instructions into a functioning
program. If you could give somewhat more detailed instructions, I
would be happy to run the test program to the end.
Gordon
>As I mentioned, I ran the Crush suite on gretl without any
>failures. I've now run most of Big Crush. I had to unplug my
>laptop after about 14 hours, and got through 80 out of 106 tests,
>again with no failures. The completed tests include all of the
>"collisions" variants, on which Doornik said that standard
>ziggurat failed. Obviously, though, it would be nice to run the
>whole thing, which would require about 16 hours on my machine.
>
>I'm attaching the source for the test program I used. I built the
>program with:
>
>CC = gcc -Wall -O2
>CFLAGS = `pkg-config --cflags glib-2.0 gretl`
>LIBS = `pkg-config --libs glib-2.0 gretl`
>
>glibtest: glib.c
> $(CC) $(CFLAGS) -o $@ $< -ltestu01 $(LIBS)
>
>Allin.
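
For anyone who wants to roll their own harness rather than wait for
Allin's glib.c (which is not reproduced here, and which also exercises
the normal ziggurat), the sketch below shows the typical minimal
pattern for feeding a 32-bit generator -- here GLib's g_rand_int(),
which underlies gretl's uniform RNG -- into Big Crush via TestU01's
extern-generator interface. It should build with essentially the same
Makefile as quoted above; note this is only a sketch of the idea, not
Allin's actual test program.

<C-code>
#include <glib.h>
#include "unif01.h"
#include "bbattery.h"

/* TestU01's extern-generator hook takes a plain function pointer,
   so the GRand instance has to live at file scope. */
static GRand *grand;

static unsigned int glib_bits (void)
{
    /* 32 random bits from GLib's Mersenne Twister */
    return g_rand_int (grand);
}

int main (void)
{
    unif01_Gen *gen;

    grand = g_rand_new ();
    gen = unif01_CreateExternGenBits ("glib g_rand_int", glib_bits);

    /* swap in bbattery_Crush() or bbattery_SmallCrush() for shorter runs */
    bbattery_BigCrush (gen);

    unif01_DeleteExternGenBits (gen);
    g_rand_free (grand);

    return 0;
}
</C-code>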
Re: [Gretl-devel] simulation speed (Sven Schreiber)
by Gordon Hughes
OK, thank you for the help. Following your instructions, I have
generated a working version of glibtest which is able to resolve the
libraries, so I am not sure why your version failed to find them.
I will leave glibtest running for as long as necessary to complete
and report the results in due course.
On your last point, the testu01 instructions say that the program
will run under Windows or Linux but make no reference to OS X. Even
under Windows it relies upon an emulation layer (Cygwin or
equivalent), so I am not sure how much one would learn about the
Windows behaviour of the programs rather than about the Linux
emulation. I will leave that to someone else. What I can do is run
testu01 in a Linux virtual machine on a Mac if anyone thinks that
this would be useful.
Gordon
>*) build and install the test suite (./configure, make, install)
>http://www.iro.umontreal.ca/~simardr/testu01/install.html
>
>*) set paths (<install directory> likely is /usr/local, and maybe this
>isn't necessary...?)
>
>export LD_LIBRARY_PATH=<install directory>/lib:${LD_LIBRARY_PATH}
>export LIBRARY_PATH=<install directory>/lib:${LIBRARY_PATH}
>export C_INCLUDE_PATH=<install directory>/include:${C_INCLUDE_PATH}
>
>*) copy Allin's instructions into a text file called "Makefile"
>
>*) copy Allin's .c file side by side with the Makefile
>
>*) in this directory, type 'make' in a shell -- you should get an
>executable file 'glibtest'
>
>However, when I run './glibtest' (glibtest built without error) I get
>the error:
>./glibtest: error while loading shared libraries: libtestu01.so.0:
>cannot open shared object file: No such file or directory
>
>...even though I had previously installed testu01??
>
>BTW, I'm not an expert but it seems to me that it would be a waste if we
>all run this in parallel on the same platform (Linux on Core2Duo). If
>any parallel effort is done, it may be more useful to do it on various
>platforms and/or hardware, like Mac, or AMD, or whatever.
>
>thanks,
>sven
code fragments
by Kurt Annen
hi,
I wrote an EViews-to-XLS importer in ANSI C a year ago. Since I have
no time to add the code to gretl myself, I would gladly give this code
to someone who could integrate it into gretl.
--
Kurt Annen
Diplom Volkswirt (Uni)
Im Haspelfelde 3
30173 Hannover
Phone: +49 (0) 511 - 37066819
Mobile: +49 (0) 179 - 1149369
E-Mail: annen(a)web-reg.de
Internet: http://www.web-reg.de
simulation speed
by Sven Schreiber
Hi,
yet another useless benchmarking exercise: I was playing around a
little, comparing the speed of various matrix languages at doing
stochastic simulations. The code listings below draw repeated random
samples, compute their means, and find the 95% quantile of the
empirical distribution of the means. I distinguish between a "naive"
loop programming variant and one working only with matrix functions.
Here are the results on Ubuntu Linux 9.10 (I ran the scripts/programs
several times and the timings are pretty robust):
Octave 3.0.5:
loop ca. 3.3 seconds
vectorized ca. 0.8 seconds
Python 2.6.4/Numpy 1.3.0/matplotlib 0.99(?):
loop ca. 2.1 seconds
vectorized ca. 1.5 seconds
gretl 1.8.6cvs:
loop ca. 2.6 seconds
vectorized ca. 2.0 seconds
Ox 5.1:
loop ca. 1.1 seconds
vectorized ca. 0.7 seconds
Ox confirms its reputation for speed, but for me it was surprising
that Octave is almost as fast with the vectorized code. (But Octave
has the slowest loops.) Gretl seems to have the smallest relative gain
going from loops to vectorized code, but considering that gretl is
quite new to the field of matrix programming I think it is doing fine.
cheers,
sven
<Octave-code>
length = 1000;
iterations = 10000;
whichp = 0.95;

tic;
# looping variant
averages = [];
for i=1:iterations;
  mrandom = randn(length,1);
  averages = [averages ; mean(mrandom)];
endfor;
myquant = quantile(averages,whichp);
toc;
printf ("Result: %f\n", myquant);

tic;
# vectorized variant
mrandom2 = randn(length,iterations);
averages2 = mean(mrandom2); # row vector
myquant2 = quantile(averages2',whichp);
toc;
printf ("Result: %f\n", myquant2);
</Octave-code>
<Python-code>
from numpy import matlib as nm
from matplotlib import mlab as ml
import time

length = 1000
iterations = 10000
whichp = 0.95

starttime = time.time()
# looping variant
averages = nm.empty((0,1))
for i in range(iterations):
    mrandom = nm.randn(length,1)
    averages = nm.vstack((averages, nm.mean(mrandom)))
myquant = ml.prctile(averages,whichp*100) # per cents
endtime1 = time.time()
print "Result: " + str(myquant)
print "Execution time loop variant, length " + str(length) + \
    ", iterations " + str(iterations) + ": " + str(endtime1-starttime)

# vectorized variant
starttime2 = time.time()
mrandom2 = nm.randn(length,iterations)
averages2 = nm.mean(mrandom2, axis=0)
myquant2 = ml.prctile(averages2,whichp*100)
endtime2 = time.time()
print "Result: " + str(myquant2)
print "Execution time vectorized variant, length " + str(length) + \
    ", iterations " + str(iterations) + ": " + str(endtime2-starttime2)
</Python-code>
<gretl-code>
length = 1000
iterations = 10000
whichp = 0.95

set echo off
set messages off
set stopwatch

# looping variant (non-vectorized)
matrix averages = {} # will be col vector
loop iterations
    matrix mrandom = mnormal(length,1)
    matrix averages = averages | meanc(mrandom)
end loop
matrix myquant = quantile(averages, whichp)
time = $stopwatch
printf "Result: %f\n", myquant
printf "Execution time loop variant, length %d, iterations %d: %f\n", length, iterations, time

time = $stopwatch # don't count printing overhead
# vectorized variant
matrix mrandom2 = mnormal(length,iterations)
matrix averages2 = transp(meanc(mrandom2))
matrix myquant2 = quantile(averages2, whichp)
time = $stopwatch
printf "Result: %f\n", myquant2
printf "Execution time vectorized variant, length %d, iterations %d: %f\n", length, iterations, time
</gretl-code>
<Ox-code>
#include <oxstd.h>

const decl length = 1000;
const decl iterations = 10000;
const decl whichp = 0.95;

main()
{
    decl averages, averages2;
    decl myquant, myquant2;
    decl mrandom, mrandom2;
    decl starttime, endtime;
    decl i;

    starttime = timer();
    // looping variant
    averages = <>;
    for (i=0; i<iterations; ++i)
    {
        mrandom = rann(length,1);
        averages = averages | meanc(mrandom);
    }
    myquant = quantilec(averages,whichp);
    endtime = timer();
    print("Result: ", myquant, "\n");
    print("elapsed time: ", timespan(starttime,endtime));

    starttime = timer();
    // vectorized variant
    mrandom2 = rann(length,iterations);
    averages2 = meanc(mrandom2); // row vector
    myquant2 = quantiler(averages2,whichp);
    endtime = timer();
    print("\nResult: ", myquant2,"\n");
    print("elapsed time: ", timespan(starttime,endtime),"\n");
}
</Ox-code>
bug with variable label in estimation table
by Sven Schreiber
Here comes another bug report, I guess the last one for today :-)
Estimating one equation of an underlying VAR with 7 variables and 3
lags by hand with OLS (don't ask why right now) produces a glitch with
one of the variable names: instead of 'Yield_10yr_1' the name
'd_Yield_10yr' is printed. Strangely, 'Yield_10yr_2' and so forth
print fine, and the other variable names print correctly as well.
Fortunately, I verified that it's only the label and not a different
variable, i.e. the first lag of Yield_10yr is indeed used, not the
difference as it appears at first glance.
This applies to GUI as well as script input, but I couldn't create a
test case with the built-in datasets, sorry. Maybe it has to do with the
underscores in the name?
cheers,
sven