On Sun, 13 Apr 2014, Riccardo (Jack) Lucchetti wrote:
On Sun, 13 Apr 2014, Andreï | Андрей Викторович wrote:
> I should be grateful if you fixed the crash, though.
The problem you discovered needs to be fixed, and we thank you for spotting
it.
I'm not quite sure what the fix should be, though: the crash occurs because
you're nesting two loops, both of which have the --progressive option. This
leads to memory corruption and has presumably been there forever; I can only
say that nesting two progressive loops is something that nobody, to my
knowledge, has ever attempted before, basically because (from a semantic
point of view) it makes little sense to do so: the --progressive option acts like a
binary switch (on/off); you normally don't want to turn the light on if it's
on already, do you? ;)
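For reference, the sort of script that triggers it is roughly as follows (a
minimal sketch with made-up data, not Andrei's actual script):

   nulldata 100
   series x = normal()
   loop 20 --progressive
      # repeating --progressive on the inner loop is what provokes the crash
      loop 5 --progressive
         series y = x + normal()
         ols y const x
      endloop
   endloop
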
I can see 3 alternatives here:
1) Take action to fix the memory corruption.
2) Forbid the nesting of progressive loops (which makes no sense anyway).
3) Ignore the innermost --progressive flag if present.
I'd go for 3). Allin?
Thanks for the quick diagnosis. I'm inclined to go for option 2. Since I
can't think of anything sensible that should happen in the case of nested
progressive loops, I think it's cleanest to ban such nesting. But I'm going
to defer action on that for a short thinking break.
However, there's another issue here. The way Andrei's model is set up, it
is not of constant size across iterations of the loop: the AR and MA
orders are increasing. This causes the internal LOOP_MODEL mechanism to
blow up, since we're trying to write more coefficients and standard errors
into this structure than we allocated storage for on the first iteration.
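Schematically, the problematic usage is something like the following (a
hypothetical sketch, not Andrei's actual script; "y" stands for whatever
series is being modeled):

   # the AR and MA orders grow with the loop index, so the number of
   # coefficients differs across iterations
   loop p=1..4 --progressive
      arma $p $p ; y
   endloop
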
In CVS I'm adding a check for constancy of model size within progressive
loops. Given the semantics of the --progressive option it doesn't make
sense to have a model of non-constant size.
Furthermore, we need to add similar checks for "store" and "print" within
progressive loops: all of these special items must have a constant number
of elements across iterations.
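By contrast, the sort of usage that is meant to work (a minimal sketch, with
made-up series and filename) keeps the set of printed/stored items fixed on
every iteration:

   nulldata 50
   series x = uniform()
   loop 1000 --progressive
      series y = 2 + 3*x + normal()
      ols y const x
      # the same two scalars every time round: a constant number of elements
      scalar b0 = $coeff(const)
      scalar b1 = $coeff(x)
      print b0 b1
      store coeffs.gdt b0 b1
   endloop
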
Allin