I want to draw attention to gretl's results when calculating R^2 for
the noint1 and noint2 data sets. Results are very poor with both the QR and
Cholesky methods. Could there be a bug affecting the calculation of R^2
in regressions without a constant term?
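For context, a common source of confusion here is that R^2 can be defined two ways when the regression has no intercept: against the centered total sum of squares or against the uncentered one. The sketch below (my own illustration with made-up data, not gretl's actual code) shows how the two definitions diverge for a regression through the origin:

```python
import numpy as np

# Made-up data for illustration; not one of the StRD test sets.
rng = np.random.default_rng(0)
x = rng.normal(5.0, 1.0, size=50)
y = 2.0 * x + rng.normal(0.0, 0.5, size=50)

# OLS through the origin: beta = sum(x*y) / sum(x^2)
beta = (x @ y) / (x @ x)
resid = y - beta * x
ss_res = resid @ resid

# Centered R^2 compares residuals to deviations from the mean of y...
r2_centered = 1.0 - ss_res / np.sum((y - y.mean()) ** 2)
# ...while the uncentered version compares to the raw sum of squares,
# which is the more natural choice when there is no constant term.
r2_uncentered = 1.0 - ss_res / (y @ y)
```

When the mean of y is far from zero, the uncentered R^2 is much closer to 1 than the centered one, so which definition a package uses matters a great deal for the no-intercept test cases.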
IIRC, Sven brought this up some time ago. I did a little testing, and QR is
about 10-15% slower than Cholesky. That is the price you pay for
greater accuracy, so it all boils down to choosing between speed and precision. I
would go for QR myself, although in 99.99% of cases the difference in precision
is not even noticeable: the test cases you give are artificial datasets
specifically designed to be very ill-conditioned (though, to be honest, I did
once stumble into a real-life dataset where Cholesky couldn't cut it but QR
could). Besides, with the CPUs we have today, a few microseconds are nothing.
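For readers following along, the two solution paths being compared can be sketched as follows (a minimal illustration with synthetic data, not gretl's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Cholesky route: form the normal equations X'X b = X'y and factor X'X.
# Forming X'X squares the condition number of X, which is where the
# accuracy loss on ill-conditioned data comes from.
XtX = X.T @ X
L = np.linalg.cholesky(XtX)
b_chol = np.linalg.solve(L.T, np.linalg.solve(L, X.T @ y))

# QR route: factor X itself (X = QR), so the relevant conditioning is
# that of X, not X'X.
Q, R = np.linalg.qr(X)
b_qr = np.linalg.solve(R, Q.T @ y)
```

On well-conditioned data like this the two routes agree to many digits; the gap only opens up on near-collinear regressors.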
I also support using QR. Memory or microsecond-scale speed differences
are not enough reason today to favour an inferior algorithm
(http://www.ee.ucla.edu/~vandenbe/103/qr.pdf provides a simple example
of why this is so). One can easily switch back to Cholesky if speed or
memory is a concern.
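The point can be illustrated with the classic Lauchli example (my own sketch, not taken from the linked note): in double precision the normal-equations matrix rounds to an exactly singular matrix, so Cholesky fails outright while QR recovers the solution.

```python
import numpy as np

eps = 1e-8
X = np.array([[1.0, 1.0],
              [eps, 0.0],
              [0.0, eps]])
y = np.array([2.0, eps, eps])  # exact least-squares solution is (1, 1)

# Normal equations: 1 + eps**2 rounds to exactly 1 in double precision,
# so X'X becomes the singular matrix [[1, 1], [1, 1]] and the
# factorization breaks down.
try:
    np.linalg.cholesky(X.T @ X)
    chol_succeeded = True
except np.linalg.LinAlgError:
    chol_succeeded = False

# QR works on X directly and recovers the solution accurately.
Q, R = np.linalg.qr(X)
b_qr = np.linalg.solve(R, Q.T @ y)
```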
On 7/20/06, Riccardo Jack Lucchetti <r.lucchetti(a)univpm.it> wrote:
>
> On Thu, July 20, 2006 17:35, Talha Yalta wrote:
> > Professor Cottrell:
> > During my testing of gretl using the StRD linear regression test
> > suite, I found that QR decomposition performs better than Cholesky
> > decomposition, and sent you a PDF file containing a table comparing the
> > two methods. The QR method mostly produces a higher number of accurate
> > digits and is able to produce a solution for the Filip data set, where
> > Cholesky fails.
> >
> > In the light of this evidence I wrote my paper and prepared the
> > summary tables assuming the new default for linear regressions would
> > be the QR decomposition. I see that the new snapshots still have
> > Cholesky as the default. If Cholesky will stay as the default in the
> > new version, please let me know so that I can update my tables.
> >
> > I am attaching the pdf file containing the comparisons.
>
>
> What's other people's opinion?
>
> Riccardo "Jack" Lucchetti
> Dipartimento di Economia
> Facoltà di Economia "G. Fuà"
> Ancona
>
--
A positive attitude may not solve all your problems, but it will annoy
enough people to make it worth the effort. - Herm Albright (1876-1944)