On Thu, 7 Apr 2016, Hélio Guilherme wrote:
Dear Allin and Jack, and all other Gretl contributors,
As you may know, I am a software tester (one of my many skills :) ).
I have noticed that regressions sometimes occur in Gretl (errors that
were fixed and later reappeared), as well as new bugs, like today's
exponent issue.
This must be the best open-source project: bugs are detected,
reported, and fixed within the hour :) (and sometimes code
improvements land, too!)
Well, I am offering my contribution to:
- provide a server for regression testing (which I have already set
up at home)
- create a unit-test structure, to be run with every code commit
- create an acceptance/regression test structure, to be run
periodically, especially before releasing new versions.
So, I would need to know whether any unit tests already exist, and to
collect as many Gretl script files as possible that use public
datasets (like the ones shipped with Gretl) or datasets prepared for
testing.
For the unit tests I have not yet settled on the best framework
(perhaps plain C); for the other cases I plan to use Robot Framework,
which is my favorite. (You folks could also benefit from Robot
Framework for gathering data, since it can drive a browser to
simulate a user clicking through web pages and downloading files.)
What do you think? Is this a good discussion starting point?
More testing, and more systematic testing, would certainly be a good
thing, and it would be great to have someone else doing some of it.
Right now I'm very pressed for time, but I'm putting my test rig at
http://ricardo.ecn.wfu.edu/~cottrell/testing/test-gretl.tar.xz
so anyone can take a look. It exercises almost 20,000 scripts; the
testing mechanism uses "make" and shell scripts. Each directory
contains "output" and "newout" subdirectories. The basic idea is that
running "make" in a given directory populates "newout" with fresh
output files, then diffs "newout" against "output" (besides reporting
any failures). So "output" is supposed to contain the "known good"
results.
You can run everything by typing "make test-all" in the top-level
directory. This may take a while; it will produce a composite diff of
all new output against all previous output.
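For anyone who wants the gist before downloading the tarball, the per-directory check described above can be sketched roughly as follows. This is a self-contained illustration, not the actual Makefile: a sandbox is built on the spot, `cat` stands in for a real gretlcli batch run, and the file name a.inp is invented; only the output/newout/diff flow follows the description.

```shell
#!/bin/sh
# Sketch of the per-directory regression check: regenerate output for
# every script into newout/, then diff against the known-good output/.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir output newout
printf 'ols y 0 x\n' > a.inp          # stand-in for a gretl script
printf 'ols y 0 x\n' > output/a.out   # its "known good" output
# re-run every script, capturing fresh output in newout/
# (the real rig runs gretl here; cat is a placeholder)
for script in *.inp; do
    cat "$script" > "newout/${script%.inp}.out"
done
# diff exits nonzero if anything diverged from the known-good results
diff -ru output newout && echo "all outputs match"
```

Running "make test-all" at the top level presumably just repeats this comparison across every script directory and concatenates the diffs.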
Anyone running this on their own system should first go into the "bin"
directory (under "test-gretl") and edit the file named "sitevars".
Allin