Hi Allin,
I have 27 years of daily price data and I was running rolling regressions
with a window of 252 observations. For a small country that came to about
2 million OLS regressions, so I do have a large amount of data.
When I ran the OLS rolling regression, I noticed the RAM in use
increasing fast. After some time the 8 GB of physical memory was
completely used by Gretl, and some swap memory was used on top of that.
But when I try the same with LAD, memory use stops increasing, which
suggests that an error occurs at a very early stage.
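To be concrete, the loop is roughly of this form (a stripped-down sketch
with placeholder file and series names, not my actual script):

  # placeholder dataset and series names
  open mydaily.gdt               # hypothetical daily price data
  scalar size = 252              # rolling window length
  scalar T = $nobs
  matrix B = zeros(T, 2)         # one row of coefficients per window
  loop t=size..T
      smpl full
      scalar lo = t - size + 1
      scalar hi = t
      smpl lo hi
      # "ols" is replaced by "lad" in the run that fails
      ols stockret const sp500 --quiet
      B[t,] = $coeff'
  endloop
  smpl full

Switching ols to lad is the only difference between the two runs.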
I will try your suggestions on my data and let you know whether they
were successful.
--
Filipe Rodrigues da Costa
Send me an email to: filipe(a)pobox.io
Reach me through Telegram at:
https://t.me/rodriguesdacosta
On Mon, 11 Dec 2017, at 18:33, Allin Cottrell wrote:
On Mon, 11 Dec 2017, Allin Cottrell wrote:
> I tried modifying your example above, replacing
>
> open data9-13.gdt
> list returns = *ret
>
> with
>
> nulldata 1000
> list returns
> loop i=1..50
> returns += genseries(sprintf("x%dret", i), normal())
> endloop
> series sp500 = normal()
>
> but leaving the rest unchanged. So that's 1000 observations on 50
> returns and the script still ran OK. Roughly what are the actual
> dimensions of your data?
Ah, silly me. Cranking up the overall sample length and the number of
returns will each just scale the time taken linearly. What might
really matter for lad is the number of observations used in each
regression (the "size" variable in your script).
I suspect that if your real "size" value is quite large, lad may be
bogging down, and in that case quantreg may do better.
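For example (just a sketch, with placeholder names y and x for the
dependent and independent series): LAD is the same as quantile
regression at the median, so inside the rolling loop you could replace
the lad call with something like

  quantreg 0.5 y const x

and compare the timing.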
Allin