I have the problem that when I declare a bundle in a script and run the
script a *second* time, I get the following error:
? bundle test
Invalid declaration: maybe you need the "clear" command?
Error executing script: halting
> bundle test
The example script is literally the one-liner included above. This is a
CVS build from Dec 8th on Ubuntu.
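For what it's worth, a hedged sketch of a possible workaround, assuming the error comes from re-declaring a bundle that already exists in the session:

```
# re-running a bare declaration fails once the bundle exists;
# assigning null (re-)initializes it either way
bundle test = null
# alternatively, wipe the session state first, as the error message hints:
# clear
```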
I am having trouble putting both lines and points in a plot.
The code below creates lines & points like I want in some cases,
but in a few cases I get all lines -- see the last gnuplot example.
Is this a gretl bug, or something about gnuplot and --with-lines= that I
am missing?
## Basic plotting using simple time series set
setobs 1 1980 --time-series
series x1 = index
series x2 = 1
x2 = x2(-1) + 0.25
series y1 = 0
series y2 = sin(x1)
# Line & points OK
gnuplot y1 y2 x1 --with-lines=y2 --output=display
gnuplot y1 y2 --time-series --with-lines=y1 --output=display
gnuplot y2 y1 --time-series --with-lines=y2 --output=display
# Does not show y1 as points if --with-lines=y2 and y2 is second element
gnuplot y1 y2 --time-series --with-lines=y2 --output=display
I am using the Gretl 1.9.9 June 2012 build.
I have a few other cases using matrix= as well.
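Not a diagnosis, but comparing the cases above that do work with the one that fails, an apparent workaround is to put the series named in --with-lines first in the list:

```
# apparent workaround (untested beyond the cases above):
# list the with-lines series before the points-only series
gnuplot y2 y1 --time-series --with-lines=y2 --output=display
```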
I am considering using Gretl for a Statistics class I teach in an executive MBA program. I am thinking about this because (i) Gretl is free for students to use, and (ii) it does not require admin rights to install. Point (ii) is the most important. My concern is that while Gretl is great for econometrics, it might require too much technical skill for an MBA program. Does anyone have any thoughts and/or suggestions?
It might be a bit much. While econometrics, time series, and such can be done easily, above that level you are going to have to do scripting, which scares many MBA students. I have the same problem if I use R. However, to do useful work, your students are going to have to use some statistical program that requires scripting. Think of those who use SAS procs versus those who use SPSS.
Excel as an alternative is a problem, since its statistical inaccuracies are well documented.
I use Gretl in my MBA managerial economics course: econometrics and time series mostly. I am using R more and more though.
If you search back through the posts on this forum, you will find some that say that gretl is working toward the functionality of R.
Remember that SAS, Gretl, and R are more accurate than "easy-to-use" Excel.
On Dec 7, 2012, at 5:07 PM, Logan Kelly <logan.kelly(a)uwrf.edu> wrote:
> [...]
I bumped into a feature of Hansl that may produce puzzling results:
using the "pre-multiplication by transpose" notation X'Y with X a 1x1
matrix (that is, defined as a matrix, but with 1 row and 1 column, for
instance because it is the result of the product of a row vector by a
column vector) produces the "non-conformable data type" error, while
the full syntax X'*Y is OK.
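A minimal sketch of the behaviour described (the variable names are just for illustration):

```
matrix a = {1; 2}
matrix X = a'a            # 1x1 "matrix": the inner product {5}
matrix Y = {1, 2; 3, 4}
matrix Z1 = X'*Y          # OK: full syntax, the 1x1 acts as a scalar
# matrix Z2 = X'Y         # reportedly fails: "non-conformable data type"
```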
Professore Ordinario di Statistica Economica
Dip. di Scienze Statistiche
Università di Roma "La Sapienza"
P.le A. Moro 5 - 00185 Roma - Italia
I have tried what you suggested, and every change I could imagine, and I
keep getting the same result.
This is what I have in the script:
arima 0 1 1 ; 0 1 1 ; y --nc
matrix yhat = $fcast
matrix ci = (yhat - 1.96*se) ~ (yhat + 1.96*se)
...and this is what I get:
For 95% confidence intervals, z(0.025) = 1.96
y prediction std. error 95% interval
1997:06 132791. 33883.8 66379.9 - 199202.
1997:07 178246. 34280.3 111058. - 245434.
1997:08 130851. 34672.3 62894.7 - 198808.
1997:09 119282. 35059.9 50565.9 - 187998.
1997:10 176718. 35443.3 107251. - 246186.
1997:11 166081. 35822.5 95869.8 - 236291.
1997:12 164873. 36197.8 93926.3 - 235819.
1998:01 148782. 36569.2 77108.0 - 220457.
1998:02 140043. 36936.9 67647.5 - 212437.
1998:03 153097. 37301.0 79989.0 - 226206.
1998:04 167374. 37661.5 93558.7 - 241189.
1998:05 151507. 38018.6 76991.9 - 226022.
1998:06 133511. 38372.4 58302.9 - 208720.
Can you please tell me what I am doing wrong?
On Tue, 4 Dec 2012, Miviam wrote:
> I'm trying to write a script to save the confidence intervals after a
> forecast for an ARIMA model, but the confidence intervals all have the
> same size. I read that someone ran into the same problem some time
> ago. How can I get the correct values, which are supposed to increase over
> time?
They will increase over time only if the forecast is dynamic. The most
natural way to ensure that is to forecast out of sample.
# out-of-sample observations to reserve
scalar os = 8
smpl ; -os
arma 1 1 1 ; QNC
# generate the forecast, so that $fcast and $fcerr are filled
fcast --out-of-sample
matrix yhat = $fcast
matrix se = $fcerr
matrix ci = (yhat - 1.96*se) ~ (yhat + 1.96*se)
matrix results = yhat ~ se ~ ci
colnames(results, "yhat s.e. low high")
print results
Remaining faithful to my lazy approach :-) I add to the discussion rather
than checking in other environments.
Explaining how I discovered the point may help: I was trying to generalise a
script written by somebody else for cases when both matrices involved were
bound to be non-scalars, to include a case where the first one can collapse
to a scalar (for instance because of restrictions that reduce the parameter
space from many to one dimension). So the problem arises when you have an
object that can either be a matrix or a scalar. My view is that the code
should work in both cases. Of course, the easy way is writing X'*Y!
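In case it is useful, a hedged sketch of the defensive options when the object may collapse to 1x1 (names here are just for illustration):

```
matrix Y = {1, 2; 3, 4}
matrix X = {2}          # 1x1, e.g. the result of an inner product
matrix Z = X'*Y         # explicit operator: works, the 1x1 acts as a scalar
# if the object may instead arrive as a genuine scalar, curly braces
# coerce it into a 1x1 matrix first:
scalar s = 2
matrix Xm = {s}
matrix Z2 = Xm'*Y
```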
> To me it would seem that if somebody writes <m'>, the script author is
> treating m as a matrix. Because why would you use a transpose if the
> object is always expected to be a scalar? Therefore it would seem
> appropriate to always treat <m'y> as a matrix multiplication even in the
> special case when m is 1x1.
> Apart from that, this type of discussion has taken place on the mailing
> lists of other open-source projects. Maybe we should all study the
> arguments and results of those discussions, before trying to re-invent
> the wheel... (But you will notice that I, too, was too lazy and instead
> wrote this message straight away :-)
in working with a lot of Kronecker products, I noticed the following:
** seems to take precedence over * (standard matrix multiplication). Of
course that's just a convention and as such is fine, but I couldn't find
any mention of it in the manual -- or did I miss it? (This basically
applies to all precedence issues; I guess it's a thing that would go
into the future hansl guide.)
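To illustrate the observation, here is a small sketch in which the dimensions make only one grouping conformable, so the parse is visible:

```
matrix A = I(2)
matrix B = {0, 1; 1, 0}
matrix C = ones(4, 1)
# with ** (Kronecker) binding tighter than *, this is read as
# (A ** B) * C, a 4x4 times 4x1 product:
matrix D = A ** B * C
# the other grouping would not even be conformable here:
# matrix E = A ** (B * C)   # error: B (2x2) times C (4x1)
```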
And other unrelated things:
- The docstring for $xtxinv is quite short. I think it would be helpful
to mention that $sigma ** $xtxinv gives you the covariance matrix
of the short-run parameters of the VECM (or the entire covariance matrix
of the VAR parameters). Also, a brief link to where the ordering of
the variables inside the "X" is described would not hurt, I guess.
[AFAIK the ordering is: constant, all lags of the first endogenous
variable (in increasing lag order, and differences for a VECM), all lags
of the second variable, ..., the first unrestricted exogenous variable,
..., all (lagged) error-correction terms (for a VECM). Where exactly
seasonal dummies enter here, I don't know right now.]
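To make the suggestion concrete, a sketch only (it assumes a dataset with series y1 and y2 is loaded, and that $xtxinv is available after VAR estimation, as described above):

```
var 2 y1 y2
# suggested reading: covariance matrix of the VAR coefficients
matrix V = $sigma ** $xtxinv
```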