Data types not conformable for operation error msg
by David Hamilton

Hello, I'm new to Gretl, and while I think I've picked things up
pretty quickly, I'm stumped by what is probably a simple problem for
regular users. When I execute the muhat line in the script below I get
a "Data types not conformable for operation" error message. It
appears that the second function doesn't like the matrix-reference
arguments, but I don't see why that would be a problem. I've searched
the archives for similar problems but haven't found a solution.
(FYI, I'm running Gretl 1.8.4 on an XP machine.) Your help
would be greatly appreciated.
Script:
function matrix local_level (series y)
/* starting values */
scalar s1 = 1
scalar s2 = 1
/* set up Kalman matrices */
matrix H = {1 ; 1}
matrix F = {1, 0 ; 0, 0.97}
matrix Q = {s1, 0 ; 0, s2}
/* Kalman filter set-up */
kalman
obsy y
obsymat H
statemat F
statevar Q
end kalman --diffuse
/* ML estimation */
mle ll = ERR ? NA : $kalman_llt
Q[1,1] = s1
Q[2,2] = s2
ERR = kfilter()
params s1 s2
end mle
return s1 ~ s2
end function
function list loclev_sm (series y, scalar s1, scalar s2)
kalman
obsy y
obsymat H
statemat F
statevar Q
end kalman --diffuse
matrix ret = ksmooth()
series wt = ret[,1]
series xt = ret[,2]
list components = wt xt
return components */
end function
/* -------------------- execute -------------------- */
matrix Vars = local_level(y)
list muhat = loclev_sm(y, Vars[,1], Vars[,2])
Output:
gretl version 1.8.4
Current session: 2009-09-18 20:25
? function matrix local_level (series y)
> /* starting values */
> scalar s1 = 1
> scalar s2 = 1
> /* set up Kalman matrices */
> matrix H = {1 ; 1}
> matrix F = {1, 0 ; 0, 0.97}
> matrix Q = {s1, 0 ; 0, s2}
> /* Kalman filter set-up */
> kalman
> obsy y
> obsymat H
> statemat F
> statevar Q
> end kalman --diffuse
> /* ML estimation */
> mle ll = ERR ? NA : $kalman_llt
> Q[1,1] = s1
> Q[2,2] = s2
> ERR = kfilter()
> params s1 s2
> end mle
> return s1 ~ s2
> end function
? function list loclev_sm (series y, scalar s1, scalar s2)
> kalman
> obsy y
> obsymat H
> statemat F
> statevar Q
> end kalman --diffuse
> matrix ret = ksmooth()
> series wt = ret[,1]
> series xt = ret[,2]
> list components = wt xt
> return components */
> end function
/* -------------------- execute -------------------- */
? matrix Vars = local_level(y)
Using numerical derivatives
Tolerance = 1.81899e-012
Function evaluations: 67
Evaluations of gradient: 15
Model 1: ML, using observations 2001:10-2009:06 (T = 93)
ll = ERR ? NA : $kalman_llt
Standard errors based on Outer Products matrix
estimate std. error t-ratio p-value
----------------------------------------------------
s1 4.08513 16.8057 0.2431 0.8079
s2 4.12376 16.5388 0.2493 0.8031
Log-likelihood -241.2759 Akaike criterion 486.5518
Schwarz criterion 491.6170 Hannan-Quinn 488.5970
Generated matrix Vars
? list muhat = loclev_sm(y, Vars[,1], Vars[,2])
Data types not conformable for operation
Error executing script: halting
> list muhat = loclev_sm(y, Vars[,1], Vars[,2])
14 years, 10 months

Re: [Gretl-users] rounding
by Sven Schreiber

Kehl Dániel schrieb:
> Dear Sven,
>
> thanks for your help. In fact, I want to generate n (eg. 10000)
> random numbers of some kind (stnormal, normal with other parameters
> and other distributions too) and want to check the effects of
> rounding of the 10000 numbers on the main descriptive statistics (in
> the first step). Of course I want to do that several times, to see
> how stable the difference between the desc. stats are. I can plot the
> distribution of the differences caused by the rounding.
>
> Is the inner loop still waste of time in this study?
The point is that to generate n random numbers you just have to create a
new (empty) datafile with a sample size of 10000, and then you call
normal() just once, like so (more or less):
nulldata 10000
u=normal()
u2=round(u)
u will be a series with 10000 independent random numbers, and u2 will
hold your transformed series.
Or you could do it with matrices instead of series, which would look
approximately like this:
matrix mu = mnormal(10000,1)
matrix mu2 = round(mu) # not sure if this works
But if you haven't worked with gretl matrices before, you should
probably stick to the series-based approach.
Your outer loop looked perfectly fine.
good luck,
sven

Re: [Gretl-users] rounding
by Kehl Dániel

Dear Ignacio and Riccardo!
Thank you for your help! I understand now what the problem was!
The script is now:
loop 10 --progressive
loop 10000
genr u = normal()
genr u2 = round(u)
endloop
genr a = mean(u)
genr b = sd(u)
genr a2 = mean(u2)
genr b2 =sd(u2)
store xxx.gdt a b a2 b2
endloop
At first I had put "--progressive" on the wrong loop.
The only problem is that this was a bit slow: with 10/10000 it took about 15 minutes.
But, as I expected, the standard deviation of the rounded series is about 4% higher than that of the original one.
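For anyone curious why rounding inflates the dispersion: rounding to the nearest integer adds quantization noise with variance of roughly 1/12 ≈ 0.083, so for a standard normal the variance goes from 1 to about 1.08 and the standard deviation rises by roughly 4%. A quick check of the same experiment in Python (not gretl — just a stand-in for the script above):

```python
import random
import statistics

random.seed(42)

# Draw standard-normal values and round them, mirroring the
# u / u2 series in the gretl script, then compare dispersion.
n = 100_000
u = [random.gauss(0.0, 1.0) for _ in range(n)]
u2 = [round(x) for x in u]

sd_u = statistics.pstdev(u)
sd_u2 = statistics.pstdev(u2)
ratio = sd_u2 / sd_u          # about 1.04 in theory

print(f"sd(u)  = {sd_u:.4f}")
print(f"sd(u2) = {sd_u2:.4f}")
print(f"ratio  = {ratio:.4f}")
```

This reproduces the roughly-4% figure without any loop over observations, which is also why the vectorized series approach is so much faster than looping 10000 times.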
Once again, if anybody knows of papers on this topic, please let me know (I have hardly found any).
Thanks again!
Daniel

rounding
by Kehl Dániel

Dear Community,
I want to generate x random numbers and compute some descriptive statistics on the results. Then I round the values of the random numbers and want to know how my descriptive statistics have changed. I want to repeat this process n times. I have two questions.
1. Is there a way to set the rounding base, i.e. to round to the nearest 0.1 or 10, rather than only to the nearest integer?
2. Why can't I store my results to xxx.gdt using the store command? I get the message:
'a' is not the name of a variable
>> store xxx.gdt a b a2 b2
I know a is a scalar, but how do I store it then?
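On question 1, the standard trick in most languages is to divide by the base, round to the nearest integer, and multiply back. A Python sketch (the helper name round_to_base is invented here for illustration):

```python
def round_to_base(x, base):
    """Round x to the nearest multiple of base, e.g. base=0.1 or base=10.
    Note: Python's round() uses banker's rounding at exact .5 ties."""
    return base * round(x / base)

print(round_to_base(17, 10))     # -> 20
print(round_to_base(12, 5))      # -> 10
print(round_to_base(2.34, 0.1))  # about 2.3, up to float representation
```

The same idea should carry over to gretl script, e.g. something like `genr u2 = round(u/0.1)*0.1`, though I would double-check the behaviour on your gretl version.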
Thanks for your help, and if you know any papers/articles on this topic, please let me know!
Have a nice day!
Kehl Dániel
University of Pécs, Hungary
my (first in my life) script is:
loop n
loop x
genr u = normal()
genr u2 = round(u)
endloop
summary u
summary u2
genr a = mean(u)
genr b = sd(u)
genr a2 = mean(u2)
genr b2 =sd(u2)
store xxx.gdt a b a2 b2
endloop

Re: [Gretl-users] unobserved components using kalman
by David Hamilton

Dear Prof. Diaz-Emparanza
I greatly appreciate your taking the time to reply. I looked at the
example in the user’s guide, and I see the similarities. A few
things still confuse me, however.
Given my system and data, I have r = 2, n = 1, and T = 93. My Kalman
matrices should therefore have the following dimensions:
y (obsy, T x n) 93 x 1
H (obsymat, r x n) 2 x 1 : H = {1 ; 1}
R (obsvar, n x n) 1 x 1 : R = {s2}
F (statemat, r x r) 2 x 2 : F = {1, 0; 0, rho}
Q (statevar, r x r) 2 x 2 : Q = {s1, 0; 0, 0}
Do you agree?
So, using the Gretl code included in the user’s guide as a template,
my system would be written:
function matrix local_level (series y)
scalar s1 = 1
scalar s2 = 1
scalar rho = 1
matrix H = {1 ; 1}
matrix F = {1, 0; 0, rho}
matrix Q = {s2, 0; 0, s1}
kalman
obsy y
obsymat H
statemat F
statevar Q
obsvar s1
end kalman --diffuse
mle ll = ERR ? NA : $kalman_llt
F[1,1] = s2
Q[2,2]= rho
ERR = kfilter()
params s1 s2 rho
end mle
return s1 ~ s2 ~ rho
end function
function series loclev_sm (series y, scalar s1, scalar s2, scalar rho)
kalman
obsy y
obsymat H
statemat F
statevar Q
obsvar s1
end kalman --diffuse
series ret = ksmooth()
return ret
end function
matrix Vars = local_level(y)
muhat = loclev_sm(y, Vars[1], Vars[2], Vars[3])
This seems to work up to where the code invokes muhat, then I get
“Data error”:
gretl version 1.8.4
Current session: 2009-09-17 12:19
? function matrix local_level (series y)
> > scalar s1 = 1
> scalar s2 = 1
> scalar rho = 1
> > matrix H = {1 ; 1}
> matrix F = {1, 0; 0, rho}
> matrix Q = {s2, 0; 0, s1}
> > kalman
> obsy y
> obsymat H
> statemat F
> statevar Q
> obsvar s1
> end kalman --diffuse
> > mle ll = ERR ? NA : $kalman_llt
> F[1,1] = s2
> Q[2,2]= rho
> ERR = kfilter()
> params s1 s2 rho
> end mle
> return s1 ~ s2 ~ rho
> end function
? function series loclev_sm (series y, scalar s1, scalar s2, scalar rho)
> kalman
> obsy y
> obsymat H
> statemat F
> statevar Q
> obsvar s1
> end kalman --diffuse
> series ret = ksmooth()
> return ret
> end function
? matrix Vars = local_level(y)
Using numerical derivatives
Tolerance = 1.81899e-012
Function evaluations: 46
Evaluations of gradient: 16
Model 19: ML, using observations 2001:10-2009:06 (T = 93)
ll = ERR ? NA : $kalman_llt
Standard errors based on Outer Products matrix
estimate std. error t-ratio p-value
------------------------------------------------------
s1 -1.66438 0.993641 -1.675 0.0939 *
s2 0.238300 0.582520 0.4091 0.6825
rho 8.71710 1.77558 4.909 9.13e-07 ***
Log-likelihood -234.0736 Akaike criterion 474.1471
Schwarz criterion 481.7449 Hannan-Quinn 477.2149
Replaced matrix Vars
? muhat = loclev_sm(y, Vars[1], Vars[2], Vars[3])
Data error
Error executing script: halting
> muhat = loclev_sm(y, Vars[1], Vars[2], Vars[3])
Again, I appreciate your help and look forward to your feedback.

unobserved components using kalman
by David T. Hamilton

Hi, I could use some guidance from seasoned Gretl users. I am trying to estimate the trend component of a univariate financial time series using an unobserved components model and the Kalman filter, via the kalman command. I'm new to Gretl, and it's been some time since I've even looked at state-space models. Here are my equations:
Let x(t) be the univariate monthly financial time series data I have.
x(t) = w(t) + y(t) where w(t) is the trend and y(t) is the cycle
w(t) = x(t-1) + e(t)
y(t) = rho*y(t-1) + u(t)
e(t) and u(t) not correlated
In the notation of the Gretl users guide, I believe I have the following matrices for the kalman command:
H = { 1 ; 1}
F = {1, 0; 0, rho}
Q = {var(e), 0; 0, var(u)}
I'm not sure what to do with the observation matrix -- this is where I'm stuck. Also, should I specify statevar and obsvar as scalars instead? I have a univariate time series, x(t). Once I have the system set up, I can proceed to estimate rho, var(e) and var(u), then calculate the trend forecasts.
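For anyone wanting to sanity-check this two-state system outside gretl, here is a minimal pure-Python Kalman filter for exactly this setup (observation = trend + cycle, random-walk trend, AR(1) cycle). It is only a sketch under stated assumptions — fixed rho and variances, a large "diffuse-ish" prior, and no measurement noise — not gretl's kalman implementation:

```python
def kalman_filter(x, rho=0.97, var_e=1.0, var_u=1.0, var_obs=0.0):
    """Filter obs(t) = w(t) + y(t), with trend w(t) = w(t-1) + e(t)
    and cycle y(t) = rho*y(t-1) + u(t).  Returns filtered (w, y) per t."""
    a = [0.0, 0.0]                  # state mean [w, y]
    P = [[1e6, 0.0], [0.0, 1e6]]    # large prior variance ("diffuse-ish")
    filtered = []
    for xt in x:
        # predict: a = F a, P = F P F' + Q, with diagonal F = diag(1, rho)
        a = [a[0], rho * a[1]]
        P = [[P[0][0] + var_e,  rho * P[0][1]],
             [rho * P[1][0],    rho * rho * P[1][1] + var_u]]
        # update with the scalar observation xt (H = [1, 1])
        v = xt - (a[0] + a[1])                        # innovation
        PH = [P[0][0] + P[0][1], P[1][0] + P[1][1]]   # P H
        S = PH[0] + PH[1] + var_obs                   # innovation variance
        K = [PH[0] / S, PH[1] / S]                    # Kalman gain
        a = [a[0] + K[0] * v, a[1] + K[1] * v]
        P = [[P[0][0] - K[0] * PH[0], P[0][1] - K[0] * PH[1]],
             [P[1][0] - K[1] * PH[0], P[1][1] - K[1] * PH[1]]]
        filtered.append((a[0], a[1]))
    return filtered

# With var_obs = 0 the filtered states must add up to the data exactly:
data = [1.0, 1.5, 0.8, 1.2, 2.0]
states = kalman_filter(data)
for xt, (w, y) in zip(data, states):
    assert abs((w + y) - xt) < 1e-6
```

The final check illustrates a useful property: with zero observation variance the filter allocates each observation exactly between trend and cycle, which makes it easy to verify a hand-rolled implementation before trusting the estimated decomposition.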
Any guidance would be greatly appreciated.

Re: [Gretl-users] saving session vs saving data (Sven Schreiber)
by Gordon Hughes

Sven may be correct about the potential confusion between data files
and session files, but I disagree strongly with his solution.
I suspect that this is a matter of approach to econometric projects,
but for me the overriding issue is to ensure that there is a clear
trail of what has been done to raw data as well as what models have
been estimated. Quite apart from my own mistakes, I have had to deal
with a great deal of trouble caused by students and researchers who
fail to maintain a satisfactory record of what they have done.
For this reason, in my view it is basic good practice to use script
files (a) to document the process of transforming data variables and
creating new data files, and (b) to maintain a record of the models
that have been tested so that the process can be replicated either
with new or updated data or by other researchers or in preparing work
for publication. In this context, a clear distinction between data
files, script files, etc should be the dominant way of
working. Session files may be convenient but they discourage good
working practice. For myself, I do 99% of my work through script
files (or do files in Stata) and I insist that my students and
research assistants should work in a similar way because it is so
much easier to understand what has been done.
This is a plea to maintain the primacy of gdt datafiles combined with
script files, which are simple text. Session files should not become
the default file type.
Gordon Hughes
>Therefore I think that it would be worth considering centering
>everything on the session files (.gretl). In that respect I think Eviews
>has got it right: just one workfile potentially holding all kinds of
>objects that you may need. The datafiles (.gdt) would still be supported
>of course as an export/import format, but the interface would make it
>clear to the user that it's an explicit export (save data as...) and
>that the active storage file is always the session file.
>
>Of course that would be more like a medium-term plan I guess, so in the
>short term I would suggest the following:
>* when a session is opened, don't show any datafile name in the status
>line just below the menu bar -- that's confusing precisely because that
>file may not exist anymore or be altered in the meantime
>* not show a dialog for ctrl-s (too annoying for real work)
>* so probably follow your suggestion to interpret ctrl-s as saving the
>session (though some people may be shocked when they don't find a .gdt
>file afterwards)

saving session vs saving data
by Summers, Peter

Hi all,
I don't know if this is a bug or a reflection of my relative inexperience with session files. I opened a previous session & did some further work, then when I tried to save the data set (ctrl-s), I got an error message saying "couldn't open [my data set] for writing." I've since figured out that I need to save the session itself, rather than the data set, but I'm wondering if this is gretl's intended behavior in this case.
TIA
===============================
Dr. Peter Summers
Assistant Professor
Department of Economics & Geography
Texas Tech University
===============================

Re: [Gretl-users] Stata collapse command
by Gordon Hughes

I cannot forbear to respond to Allin's comment.
Stata's collapse command - and the expand command (which replicates
observations) - is incredibly useful if you are doing any large amount
of data processing involving the manipulation of cross-section or panel
datasets with mixed periodicity or very different sources of
data. Consider adding regional statistics to state data. I am using
Stata for a large study based on cross-country panel data from a
whole variety of sources. My Stata programs probably have 50 or more
uses of collapse in one context or another. It is much more
cumbersome to write such code using matrix commands, especially when
datasets get to the limits of storage capacity.
I suspect that the key is "horses for courses". No program can do
everything equally well. I don't think that Gretl can or should
attempt to be a Swiss penknife for data manipulation or for analysing
very large datasets. Stata is expensive unless you can use it via an
academic site license, but most kinds of data manipulation can be
managed in Excel (or Gnumeric if you want to stick with open source
software).
It is worth noting that collapse (like many Stata commands) is
implemented via an ado-file. My experience is that writing ado-files
is a horrible process because of the cumbersome way of dealing with
variables - the reasons for which I understand but still don't
like. I think the real lesson for Gretl is to promote the use and
sharing of script functions with a reasonable balance between
generality and ease of use.
Gordon Hughes
>On Thu, 10 Sep 2009, Irwin, James R wrote:
>
> > Hi. Wondering if anyone can point me toward how to get Gretl to
> > do the equivalent of STATA's collapse command. For example, I
> > have a data set with about 1,000 observations with YEAR X2 and
> > X3 (where YEAR is an integer with values from 1760 to 1880).
>
> > I want to get a data set that is the count and average of X2 by
> > YEAR. In STATA I write
> >
> > collapse (count) num2=X2 (mean) avg2=X2, by(YEAR)
> >
> > and I get a data set that is YEAR and the counts and means of
> > the variable X2 for each year.
> >
> > From what I've seen of Gretl it seems this should be a trivial
> > exercise but I seem to be stumped.
>
> > Thanks for your consideration. -- jim irwin (economic historian,
> > trying to migrate from STATA).
>
>Welcome to the gretl list, and I hope we can help you to migrate
>without too much pain!
>
>I have to admit that Stata's "collapse" command seems oddly
>specific to me -- I mean, I wouldn't have thought that such an
>apparently specialized operation would merit a command to itself.
>But maybe that just shows lack of imagination on my part!
>
>Anyway, yes, gretl can do this sort of thing but you have to roll
>your own "collapse". My approach below is to create a matrix
>containing the "collapsed" values, then substitute this matrix for
>the current dataset.
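For readers comparing tools, the same group-by logic can be sketched in a few lines of Python (the function name collapse_count_mean is invented here; Allin's actual matrix-based gretl code followed in his message and is not reproduced):

```python
from collections import defaultdict

def collapse_count_mean(rows):
    """rows: iterable of (year, x2) pairs.  Returns {year: (count, mean)},
    mirroring Stata's: collapse (count) num2=X2 (mean) avg2=X2, by(YEAR)."""
    acc = defaultdict(lambda: [0, 0.0])   # year -> [count, running sum]
    for year, x2 in rows:
        acc[year][0] += 1
        acc[year][1] += x2
    return {year: (n, total / n) for year, (n, total) in sorted(acc.items())}

sample = [(1760, 1.0), (1760, 3.0), (1761, 5.0)]
print(collapse_count_mean(sample))  # -> {1760: (2, 2.0), 1761: (1, 5.0)}
```

One accumulation pass plus a final division is all "collapse" amounts to here, which is also the shape of the matrix-based approach in gretl: build the collapsed values, then swap them in for the dataset.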

stata collapse command
by Irwin, James R

Hi. Wondering if anyone can point me toward how to get Gretl to do the
equivalent of STATA's collapse command. For example, I have a data set
with about 1,000 observations with YEAR X2 and X3 (where YEAR is an
integer with values from 1760 to 1880).
I want to get a data set that is the count and average of X2 by YEAR.
In STATA I write
collapse (count) num2=X2 (mean) avg2=X2, by(YEAR)
and I get a data set that is YEAR and the counts and means of the
variable X2 for each year.
From what I've seen of Gretl it seems this should be a trivial exercise
but I seem to be stumped.
Thanks for your consideration. -- jim irwin (economic historian, trying
to migrate from STATA).