On Wed, 30 Apr 2008, Sven Schreiber wrote:
> Allin Cottrell wrote:
> > On Mon, 28 Apr 2008, Sven Schreiber wrote:
> >
> > Suppose we change something that breaks an existing package: I
> > guess it's OK if we're explicit about that, and put up a
> > modified version of the package that works with the new
> > release? (With judgment required, of course, on how often to
> > do that sort of thing.)
>
> In my view, yes, the bug fix may (and probably often will)
> consist of an updated package. All I'm saying is that gretl
> development should not adopt the attitude "if the package
> suffers it's not my problem". So IMHO it's not (only) the
> original author of the package who is responsible for fixing the
> bug (=getting it working again).

Agreed.
> > > 2. For each gretl release some notes (readme) are prepared
> > > which complement the changelog. They explain the
> > > backwards-incompatible changes and how to solve the
> > > associated problems.
> >
> > Yes, we should do that.
>
> If you want me to do that, I will need a couple of days' warning
> before an imminent release (maybe the same warning as for
> translators)...

Thanks, but I think it's probably easiest for me or Jack to do
that bit.
> > > 3. Another sourceforge tracker is introduced as a database
> > > for incompatible changes, including information on when such
> > > a change was introduced (date and version numbers) and how
> > > to deal with it.
> >
> > If you (or someone else) are willing to take charge of that,
> > that's fine by me.
>
> OK, I will see if I can set it up soon. It should probably be
> read-only for the general public?

Yes, I'd say so.
> > > Number 1 is two-fold: the testing would need volunteers
> > > ("package maintainers"?), but simply agreeing to the bug
> > > policy is just a decision, not work.
> >
> > I think it would be easier to roll such tests into the
> > existing gretl regression suite. I can make this available
> > via the web.
>
> What do you mean by that? Posting a big meta-script? But it
> sounds nice.

It's a large collection of scripts with a Makefile system to run
tests. The basic test is to run the scripts and diff the output
against known-good output. There's also a facility to test all
the scripts for memory usage with valgrind. There are several
subdirectories to test specific sorts of functionality (mle, gmm,
panel models and so on).
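To give a rough idea, the run-and-diff loop works along these
lines (just a self-contained sketch, not the actual suite: the
directory names are invented, and a trivial shell stand-in
replaces the real "gretlcli -b" invocation):

```shell
# Sketch of a diff-based regression loop. In the real suite each
# .inp script would be run with "gretlcli -b"; here a toy script
# and "sh" stand in so the sketch runs on its own.
mkdir -p demo/scripts demo/expected demo/output

# One toy "script" plus its known-good output.
echo 'echo hello' > demo/scripts/t1.inp
echo 'hello' > demo/expected/t1.out

for script in demo/scripts/*.inp; do
    base=$(basename "$script" .inp)
    # Real suite: gretlcli -b "$script" > "demo/output/$base.out"
    sh "$script" > "demo/output/$base.out" 2>&1
    # Compare against the stored known-good output.
    if diff -q "demo/expected/$base.out" "demo/output/$base.out" >/dev/null; then
        echo "PASS: $base"
    else
        echo "FAIL: $base"
    fi
done
```

The valgrind facility is the same idea: wrap each run in valgrind
and inspect its report instead of (or as well as) diffing the
output.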
Allin.