This is a short version of a message awaiting moderator clearance.
(was "Proposal for creation of Unit and Acceptance Tests")

(...)
Hi All,

This is my first adaptation of Allin's test package.
 
Right now I'm very pressed for time, but I'm putting my test rig at

(deleted)

so anyone can take a look. It exercises almost 20000 scripts; the testing mechanism uses "make" and shell scripts. Each directory contains "output" and "newout" subdirectories. The basic idea is that running "make" in a given directory populates the "newout" subdirectory with fresh output files and then runs a "diff" of "newout" against "output", besides reporting any failures. So "output" is supposed to contain the "known good" results.
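To make that concrete, here is a rough sketch of what I believe one per-directory run boils down to (the gretlcli invocation and the file names are my assumptions, not the actual Makefile rules):

    # sketch of one per-directory test run (names are illustrative)
    for scr in *.inp; do
        gretlcli -b "$scr" > "newout/${scr%.inp}.out" 2>&1
    done
    diff -ru output newout || echo "differences found in $(pwd)"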

You can run everything by typing "make test-all" in the top-level directory. This may take a while; it will produce a composite diff of all new output against all previous output.
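For example, to keep the results around for later inspection (where exactly the composite diff lands is something to check in the top-level Makefile):

    cd test-gretl
    make test-all 2>&1 | tee test-all.log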

Anyone running this on their own system should first go into the "bin" directory (under "test-gretl") and edit the file named "sitevars".

That file still has to be set up and checked: it holds the paths to gretl and libgretl. Once that is done, the rig should be fine for any (Linux) user.
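Assuming sitevars is a small shell fragment that gets sourced, it would contain something along these lines (the variable names here are hypothetical; check the file itself for the real ones):

    # bin/sitevars -- site-specific settings (hypothetical variable names)
    GRETLCLI=/usr/local/bin/gretlcli        # path to the gretl command-line binary
    export LD_LIBRARY_PATH=/usr/local/lib   # directory containing libgretl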

These are the changes I have made (a rough sketch of both follows below):
1. Force gretl to run in English.
2. After gretl has populated "newout", those files are edited so that all user-specific paths ($HOME and the test-gretl location) are replaced, to match the paths in the original "output" files.
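A sketch of both changes, assuming they are applied from a wrapper shell script (the exact sed expressions and the <...> targets are placeholders, not the real values):

    # 1) force gretl to run in English regardless of the user's locale
    export LC_ALL=C LANGUAGE=C

    # 2) normalize user-specific paths in the fresh output so it can be
    #    compared with the reference files; the <...> targets stand for
    #    whatever paths the original "output" files contain
    sed -i -e "s|$HOME|<home-in-reference-output>|g" \
           -e "s|$PWD|<test-gretl-in-reference-output>|g" \
           newout/*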

TODO:
 - Remove the inconsistent paths when preparing Allin's original "output" files (make new).
 - Try not to compare with "output" when the required applications are not installed (Stata, Ox, ...); see the sketch after this list.
 - Obtain system info (a kind of benchmarking, or performance factoring).
 - Should we also run the NIST tests?
 - There is a test that creates two gnuplot plots; maybe they should output to PNG.
 - When I tested in Portuguese there were warnings about maximum string length (we need tests for all languages).
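For the Stata/Ox item above, a simple presence check should be enough; roughly (the test-directory name is hypothetical):

    # only run the Stata-dependent directory when Stata is actually installed
    if command -v stata >/dev/null 2>&1; then
        make -C stata_tests
    else
        echo "stata not found -- skipping its tests"
    fi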

---- EOT ----

I have now created a GitHub project to hold the test files. Everyone can clone it and use the directory gretl-tests/test-gretl (Allin's tests, adapted so they can run on your own machines). After this first round of fixes I will prepare an automation and reporting system for the test runs. But first we must make sure that no scripts or data files are missing, and that tests are not run when the required applications are not installed (like Stata in my case).
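Getting started should look roughly like this (<user> is a placeholder for the actual GitHub account hosting gretl-tests):

    git clone https://github.com/<user>/gretl-tests.git
    cd gretl-tests/test-gretl
    # edit bin/sitevars as described above, then
    make test-all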

There are 902 gretl scripts; in my setup the all_fails file lists 36 failures, 9 of which are caused by missing gretl scripts (the others by a missing Stata installation, or by numeric or spacing differences). A better analysis will follow once the easy failures are solved.

There is also a gretl-git project, which is a clone of the SourceForge repository (it is not accessible, because it should stay independent).

Attached are some old test files, but you should look at latest_tests.zip instead.

I expect to receive pull requests adding the missing data/script files. I am willing to give collaborator access to the gretl owners and major contributors :)

Now that we have a GitHub base, we can think about using Gitter or Slack for instant messaging (I love Slack ;).

Have fun :)

I am eagerly waiting for your feedback.

Thanks,
Hélio