The filter() function
by Allin Cottrell
This is sort of a companion piece to
http://lists.wfu.edu/pipermail/gretl-devel/2018-June/008842.html
The filter() function is certainly nice to have, but I can't help
thinking it's kinda backwards, and hence less intuitive than it could
be -- or is it just me? This function offers "an ARMA-like filtering
of the argument x", but what's naturally interpreted as the MA part
comes first and is obligatory while the AR part is second and
optional. You can of course use it to construct a pure AR series but
that requires "pretending" that y is x, so to speak. Here's a little
example, comparing "by hand" calculation with filter() and a notional
armafilt() function that puts the AR term first.
<hansl>
# "by formula"
de_c = -1 - m * de_c(-1)
de_a = -y(-1) - m * de_a(-1)
de_m = -e(-1) - m * de_m(-1)
# via filter()
de_c = -1 + filter(de_c, -m)
de_a = filter(y, {0,-1}, -m)
de_m = filter(e, {0,-1}, -m)
# perhaps preferable?
de_c = -1 + armafilt(-m)
de_a = armafilt(-m, y, {0,-1})
de_m = armafilt(-m, e, {0,-1})
</hansl>
Note that with filter() on the first equation one has to enter the
output series itself (the series on the left of the assignment) in the
argument slot that's labeled "x" in the documentation. This works OK,
but somehow it's not so obvious that it's going to work.
To be explicit, my notional armafilt() would have one required
argument, a scalar AR coeff or vector of same. Then would come an
optional series for MA treatment and a scalar MA coeff or vector.
To get a pure MA you'd give 0 as the first argument.
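For concreteness, here is a minimal hansl sketch of what such a wrapper
might look like on top of the existing filter(x, a, b) signature; the name
armafilt and the null-handling are just assumptions, and the x-less
pure-AR usage in the first example above (which passes the left-hand
series itself) is not covered by a user function like this.
<hansl>
# hypothetical wrapper: AR coefficient(s) first, then the input series
# and optional MA coefficient(s)
function series armafilt (matrix a, series x, matrix b[null])
    if exists(b)
        return filter(x, b, a)   # MA part b applied to x, AR part a
    else
        return filter(x, 1, a)   # pure AR filtering of x
    endif
end function

# usage corresponding to the example above:
# de_a = armafilt(-m, y, {0,-1})
# de_m = armafilt(-m, e, {0,-1})
</hansl>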
Allin
autoregressive "genr" bug
by Allin Cottrell
I've just discovered something that I think is a bug, though it can
also be a convenience in some cases. I'd be interested to hear what
people think about it.
Simplest case:
<hansl>
series u = normal()
u = 0.5 * u(-1)
</hansl>
The thing is that after the second line is executed, the first value
of u is not NA; it's just the original value. More generally, in this
sort of case we skip any initial NAs resulting from the expression on
the right before starting our rewrite of the original series.
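A minimal check of the behaviour, with arbitrary data:
<hansl>
nulldata 5
series u = normal()
scalar u1 = u[1]
u = 0.5 * u(-1)
printf "u[1] = %g (original first value %g)\n", u[1], u1   # equal, not NA
</hansl>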
I noticed this when generating an AR error in a panel dataset: for
individual 1 there was no initial NA, but for all subsequent
individuals the first observation was NA, as I think it ought to be.
The convenient case: if you're generating forecast errors based on an
AR process, you may well want to initialize the series to zero and
have pre-sample values assumed to be zero. You can do that cleanly
with the filter() function, but it seems more transparent to do it "by
formula". As things stand a zero initialization carries over into the
output from an autoregressive formula, but if we decide to "fix" the
issue that will no longer be the case, and one will have to do something
like
u = rho*u(-1) + ...
u[1] = 0 # scrub the initial NA
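For reference, the filter() route mentioned above would be something like
the following (rho and eps are just illustrative names); filter() treats
pre-sample values of the output as zero by default:
<hansl>
scalar rho = 0.5
series eps = normal()
series u = filter(eps, 1, rho)   # u_t = eps_t + rho * u_(t-1), with u_0 = 0
</hansl>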
Allin
Menus not responding after script error
by Sven Schreiber
Hi,
on Ubuntu with yesterday's git version I'm noticing that "quite often"
the menus are not responding anymore (not clickable). It seems to happen
when a script terminates with an error, but I'm not 100% sure about that.
A gretl restart is necessary. (Well, the script editor window and so on
remain usable, I think.)
I tried to do "sudo ldconfig" before a gretl restart but that didn't
help. (A wild guess anyway.)
Haven't tried with the latest snapshot on Windows yet.
thanks,
sven
irf() and fevd()
by Allin Cottrell
Here's an update in relation to the thread (re-)started by Sven at
http://lists.wfu.edu/pipermail/gretl-devel/2018-May/008776.html .
We now have irf() and fevd() functions which go part-way to meeting
Sven's suggestions. We could go all the way if we reckon it's
worthwhile (see below). Anyway, at present:
irf() function:
  arg1: target [required]
  arg2: shock [required]
  arg3: alpha [optional]
  arg4: bundle [optional, obtained via $system]
  Returns: matrix containing the response of target to shock,
  optionally with a bootstrap confidence interval.

fevd() function:
  arg1: target [required]
  arg2: bundle [optional, obtained via $system]
  Returns: matrix containing the full decomposition of the forecast
  variance of target.
The final, optional, bundle argument should be obtained via the
$system accessor (which still has to be documented) after estimation
of a VAR or VECM. If this argument is omitted gretl looks to the
last estimated model to get the equivalent information; if there's
no such model or it's not a VAR/VECM an error is flagged.
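For illustration, a minimal usage sketch based on the above; the dataset,
lag order and the use of integer positions for target and shock are just
assumptions for the example:
<hansl>
open denmark.gdt --quiet
var 4 LRM LRY IBO IDE
bundle vsys = $system
matrix R = irf(1, 2, 0.1, vsys)   # response of variable 1 to shock 2,
                                  # with bootstrap interval (alpha = 0.1)
matrix D = fevd(1, vsys)          # full FEVD for variable 1
print R
print D
</hansl>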
The divergence from Sven's specific suggestions reflects some
(debatable) simplification on my part.
It seems to me that when a FEVD is wanted, most of the time you'd
want it for a single "target" variable and all sources of variation.
So right now you must choose a target and do not get to choose a
single source. You can of course pull a specific column of interest
out of the returned matrix if you want less information, or call the
function in a loop if you want more.
In the (more expensive) IRF case, however, it seems to me that the
response of a single target to a single shock (with or without a
confidence interval) would be the most "natural" unit. So you have
to specify both target and shock.
Possible modifications: We could add a "source" argument to fevd()
to narrow the output, and for the target/shock arguments to both
functions we could let 0 signify "do everything".
Allin
console syntax coloring
by Sven Schreiber
Hi,
shouldn't it be possible (leveraging existing Gtk tools, I mean) to get
syntax coloring in the gretl console as well? Not necessarily before the
next release, but in principle that's expected nowadays, I guess.
thanks,
sven
'gnuplot --input' from string buffer?
by Sven Schreiber
Hi,
the --input option to the gnuplot command takes a filename to read in
some prefabricated gnuplot plot script. I wonder if it could be arranged
to also accept a string variable?
The aim is to use 'outfile --buffer=...' and avoid temporary files
altogether.
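To make this concrete, the kind of usage I have in mind would be
something like the following (hypothetical, since --input currently
accepts only a filename):
<hansl>
string buf = sprintf("set title 'demo'\nplot sin(x) w lines\n")
# desired: pass the string variable directly, no temporary file
gnuplot --input=buf --output=display
</hansl>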
In principle I guess there could be some ambiguity whether '--input=s'
refers to a file with name 's' or to a string variable (especially on
*nix systems where file name extensions are not so common). My
suggestion would be to look for a defined variable first, despite the
minor backwards incompatibility.
thanks,
sven
catching linear dependency (rank deficiency) in var command
by Sven Schreiber
Hi,
I've noticed that in contrast to 'ols' the 'var' command is not clever
enough to see when a regressor is linearly redundant. Example:
<hansl>
nulldata 10
setobs 1 1 --time-series
series x = normal()
series y = 0
var 1 x y   # error: matrix not positive definite
</hansl>
The error message appears to refer to X'X and is coming from gretl's
internals. Maybe some pre-checks would be good, similar to what 'ols' does.
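For comparison, a sketch of the ols behaviour referred to above, using
the same kind of toy data:
<hansl>
nulldata 10
setobs 1 1 --time-series
series x = normal()
series y = 0
ols x const y   # the redundant y is handled, unlike with 'var' above
</hansl>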
thanks,
sven
matrix slicing
by Riccardo (Jack) Lucchetti
Folks,
a script that Marcin sent me privately a few days back prompted Allin and
me to do some work on optimising certain cases of matrix slicing. We
made a few changes so that if you use constructs such as
X[3:5,]
where you refer to subsets of rows of a matrix you might get a
considerable speedup; here's a little test script to exemplify the change:
<hansl>
set verbose off
ROWS = {10, 100, 1000, 10000}
H = 100000

loop i = 1 .. nelem(ROWS) --quiet
    ri = ROWS[i]
    C = zeros(ri, 5)
    limits = ceil(muniform(H, 2) * ri)
    tt0 = 0
    tt1 = 0
    loop h = 1..H --quiet
        ini = minr(limits[h,])
        fin = maxr(limits[h,])
        s = seq(ini, fin)
        set stopwatch
        matrix tmp = C[s, 3]
        tt0 += $stopwatch
        matrix tmp = C[ini:fin, 3]
        tt1 += $stopwatch
    endloop
    printf "%6d rows: seq = %7.5f, range = %7.5f\n", ri, tt0, tt1
endloop
</hansl>
On my laptop, this is what you get before and after the change:
BEFORE:
    10 rows: seq = 0.05446, range = 0.05860
   100 rows: seq = 0.09540, range = 0.09755
  1000 rows: seq = 0.27118, range = 0.25175
 10000 rows: seq = 1.82294, range = 1.65042
AFTER:
    10 rows: seq = 0.05649, range = 0.04842
   100 rows: seq = 0.09460, range = 0.06303
  1000 rows: seq = 0.25984, range = 0.11708
 10000 rows: seq = 1.78409, range = 0.70894
Now, here's the important part: we're fairly confident that the change
shouldn't break anything, but it's pretty low-level, so if you could try
current git with all the scripts you have and report anything weird, that
would be a big help.
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
r.lucchetti(a)univpm.it
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------