Hi,

I think this might be a bug. When using the ARMA model, I sometimes get the message "numerical gradient: switching to Richardson". Once I get this message, nothing else happens, but the CPU stays at 100%. On closer inspection I noticed that the richardson_gradient function in lib/src/gretl_bfgs.c never returns (so there is some kind of infinite loop in there). I did some debugging and believe this is the problem:

The last part of the function contains two nested loops:

    for (m=0; m<r-1; m++) {
        for (k=0; k<r-m; k++) {
            df[k] = (df[k+1] * p4m - df[k]) / (p4m - 1.0);
        }
        p4m *= 4.0;
    }

At the beginning of the function, r is initialized to RSTEPS, which is 4, and the array df has RSTEPS elements. On the first pass of the outer loop, m is 0, so the inner loop runs while k < r-m, i.e. k < 4, giving k = 0, 1, 2 and 3. Inside the inner loop we access df[k+1]; for k=3 that is df[4]. Since df has only 4 elements (valid indices 0 to 3), this is an out-of-bounds access, which is undefined behaviour in C. What I then observe in the debugger is that the inner loop just keeps running: k increases without bound (0, 1, 2, 3, 4, 5, ...) while r-m stays at 4, yet the check k < r-m never terminates the loop. I can't fully explain why the bounds check fails, but once undefined behaviour is involved the compiler is free to generate code that doesn't match the source, and once k exceeds 3 the assignments to df[k] write past the end of the array and can corrupt adjacent stack memory, possibly the loop variables themselves.

If you remove the statement:

df[k] = (df[k+1] * p4m - df[k]) / (p4m - 1.0);

from the inner loop, everything works fine. I therefore think the df[k+1] access causes the problem.

Is this a bug or am I missing something?

Regards
Chris