Fixed effects forecast
by Ricardo Gonçalves Silva

Hi,
Can gretl forecast 1 to 3 periods ahead from an estimated panel-data
model (fixed effects, no iterations)?
HTH
Rick
12 years, 1 month

Paper advocating open source software and gretl
by Talha Yalta

Dear gretl users:
You might be interested to hear about my new paper entitled "Should
Economists Use Open Source Software for Doing Research?" published
this month in Computational Economics. The paper investigates
econometric software reliability and advocates the use of open source
software by taking gretl as a case study and showing how responsive
and transparent its development process is. I think many people here
might find it an interesting read.
More information and the download link are available here:
http://ideas.repec.org/a/kap/compec/v35y2010i4p371-394.html
I can send a working paper version if you do not have access to the above.
Cheers
A. Talha Yalta
--
“Remember not only to say the right thing in the right place, but far
more difficult still, to leave unsaid the wrong thing at the tempting
moment.” - Benjamin Franklin (1706-1790)
--
12 years, 6 months

gretl 1.7.6rc1
by Allin Cottrell

Current gretl CVS and the Windows snapshot at
http://ricardo.ecn.wfu.edu/pub/gretl/gretl_install.exe
contain release candidate 1 for gretl 1.7.6.
Please note that this version involves a backward-incompatible
change with respect to gretl 1.7.5 and earlier, affecting
user-defined functions that (a) take a named list of variables as
an argument and (b) do things with the list-member variables by
means of a "foreach" loop on the list.
I won't go into the rationale for this change here. Anyone who
wants the details may look at the proceedings on the gretl-devel
list for July, which were mostly taken up with this issue:
http://lists.wfu.edu/pipermail/gretl-devel/2008-July/thread.html
There's also a brief discussion in the chapter of the User's Guide
that deals with user-defined functions. But here's the bottom
line for users:
* If you want to "get hold of" a list-member variable in the
context noted above, you have to use the syntax listname.varname,
where listname is the name of the list in question and varname is
the name of the list member. (This is required only if you're
working with a list that was supplied as a function argument.)
Trivial example: inside a function, creating new variables which
are the cubes of the members of an original list, xlist, where
xlist is an argument to the function.
Old style:
    loop foreach i xlist
        $i_3 = $i^3
    endloop
New style:
    loop foreach i xlist
        $i_3 = (xlist.$i)^3
    endloop
In the new scheme, "$i" gets the name of the list-member variable
alright, but the variable is not "visible" under that name within
the function. So on the right-hand side of the expression that
creates the cubes, we need "(xlist.$i)^3". (Well, actually the
parentheses are not required, but wearing your seatbelt is in
general a good idea.)
Although this may affect quite a large number of existing
functions, we believe the effects are localized and the update
should be trivial. If anyone has a function for which the update
is _not_ trivial, please let us know.
Allin Cottrell.
12 years, 11 months

Re: [Gretl-users] Gretl-users Digest, Vol 42, Issue 39
by MICHAEL BOLDIN

For PIN estimation in SAS, see the two examples you will find by
searching Google for "SAS PIN estimation".
You might be able to translate them to gretl, but if you have minimal
programming experience and no experience with MLE econometrics, I
suggest first learning how to run some of the simple MLE examples in
the gretl manual.
Or you could decide it is easiest to use SAS and the programs 'AS IS'.
13 years, 1 month

database problem
by artur bala

Hi Allin,
I saved a .gdt file as a gretl database. When I tried to open the
database (.bin file), gretl popped up the error window "data error"
and couldn't load it.
best,
artur
--
*************************************
Artur BALA
Development Economist, Consultant
Phone: +216 24 71 00 80
E-mail: artur.bala.tn(a)gmail.com
skype: artur.bala.tn
*************************************
13 years, 1 month

PIN Estimation
by Josephine Sudiman

Dear Allin and GRETL-users,
Thank you for your kind response. I have read chapter 17, Maximum Likelihood
Estimation, but I am still not sure from which angle I have to start. I have
never done this before and have very minimal exposure to programming, except
for some macros in Excel. Below I give a description of what the PIN
(Probability of Informed Trading) model is about and its parameters.
The model assumes that each day can be classified either as a day with
information (with probability x) or as a day without information (with
probability 1-x).
If the day is categorized as a day without information, then only uninformed
traders will trade (buy and sell) during that day; the buy arrival
rate is eb and the sell arrival rate is es.
If it is a day with information, there are two further possibilities:
(1) The news is bad, with probability d
(2) The news is good, with probability (1-d)
On a day with good news, the sell arrival rate is es and the buy
arrival rate is u + eb.
On a day with bad news, the buy arrival rate is eb and the sell
arrival rate is u + es.
In brief, u is the arrival rate of informed traders, who only act, buying
(selling), if the day has good (bad) news, while eb is the buy arrival rate
of uninformed traders and es is the sell arrival rate of uninformed traders.
We want to estimate x, u, eb, es, and d using maximum likelihood estimation.
The data we have for estimating them are the daily numbers of buyer-initiated
trades (B) and seller-initiated trades (S) over a period of P days. In my
case I have 240 days, so I have B1 to B240 and S1 to S240 as my data set.
L(theta|B,S) = (1-x) e^-eb (eb^B/B!) e^-es (es^S/S!) +
               x d e^-eb (eb^B/B!) e^-(u+es) ((u+es)^S/S!) +
               x (1-d) e^-(u+eb) ((u+eb)^B/B!) e^-es (es^S/S!)
The model also has an assumption that arrival rates of informed and uninformed
traders follow independent Poisson processes.
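For concreteness, the per-day likelihood above can be written out numerically
as follows (a Python sketch purely to fix ideas, not gretl code; parameter
names follow the description above, and lgamma stands in for the factorials
so large trade counts do not overflow):

```python
import math

def log_poisson(k, lam):
    # log of the Poisson pmf: k*log(lam) - lam - log(k!)
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def pin_loglik(params, B, S):
    """Log-likelihood of the PIN model described above.
    params = (x, d, u, eb, es): probability of an information event,
    probability of bad news, informed arrival rate, and the uninformed
    buy and sell arrival rates. B, S are daily trade counts."""
    x, d, u, eb, es = params
    total = 0.0
    for b, s in zip(B, S):
        # the three mixture components for one day, in logs:
        # no news, bad news, good news
        terms = (
            math.log(1 - x) + log_poisson(b, eb) + log_poisson(s, es),
            math.log(x * d) + log_poisson(b, eb) + log_poisson(s, u + es),
            math.log(x * (1 - d)) + log_poisson(b, u + eb) + log_poisson(s, es),
        )
        # log-sum-exp for numerical stability
        m = max(terms)
        total += m + math.log(sum(math.exp(t - m) for t in terms))
    return total
```

Maximizing this over (x, d, u, eb, es) gives the ML estimates; the PIN
statistic itself is then usually computed as x*u / (x*u + eb + es) at those
estimates.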
I would be glad if there are people on the list who can give me a clue on how
to start, as I am not sure what alpha, beta and gamma correspond to in this
model. Many thanks in advance for your kind attention.
Best wishes,
Josephine
13 years, 2 months

Creating a new series
by Henrique Andrade

Dear Gretl Community,
I have 2 time-series, X and Y, and I need to create a new one, Z, that is
defined as:
Z(1) = X(1)
Z(2) = Z(1) + Y(2)
Z(3) = Z(2) + Y(3)
.
.
.
Z(n) = Z(n-1) + Y(n)
Where Z(1) is the first observation of the series Z, Z(2) is the second
observation of the series Z, and so on.
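In other words, Z is just a cumulative sum of Y whose first value is replaced
by X(1). A tiny Python sketch of what I mean, in case the recursion is unclear
(illustrative only, not gretl syntax):

```python
def build_z(X, Y):
    """Z[0] = X[0]; Z[t] = Z[t-1] + Y[t] for t >= 1
    (0-based indices here; the description above is 1-based)."""
    Z = [X[0]]
    for t in range(1, len(Y)):
        Z.append(Z[-1] + Y[t])
    return Z
```

I suspect gretl's cum() function, plus an adjustment of the first observation,
could do the same thing, but I am not sure of the exact incantation.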
How can I do this with Gretl?
Best,
--
Henrique C. de Andrade
Doutorando em Economia Aplicada
Universidade Federal do Rio Grande do Sul
www.ufrgs.br/ppge
13 years, 2 months

PIN Estimation.
by Josephine Sudiman

Dear GRETL-users,
I am Josephine, and I am currently having a problem with how to do maximum
likelihood estimation of the PIN (Probability of Informed Trading) model for
my data. Brief information on this model (proposed initially by Easley,
Kiefer, O'Hara and Paperman, 1996) and a sample of my data are given in the
attachment. Thank you very much in advance for your kind attention.
Best wishes,
Josephine
13 years, 2 months

Test for cointegration
by Farmer, Jesse

Hello:
I am doing a test for cointegration across 5 time-series variables.
I've run the test but I am not sure how to interpret the output. Could
someone tell me whether my data exhibit cointegration, and if so, how
you determined that? I realize this is a n00b question, so apologies
in advance.
Thanks!
My output below:
-----------------
Step 1: testing for a unit root in Var1
Augmented Dickey-Fuller test for Var1
including 5 lags of (1-L)api2
sample size 517
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: 0.004
lagged differences: F(5, 510) = 7.952 [0.0000]
estimated value of (a - 1): -0.00320084
test statistic: tau_c(1) = -1.10968
asymptotic p-value 0.7144
Step 2: testing for a unit root in Var2
Augmented Dickey-Fuller test for Var2
including 5 lags of (1-L)base
sample size 517
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: 0.001
lagged differences: F(5, 510) = 2.011 [0.0756]
estimated value of (a - 1): -0.00202185
test statistic: tau_c(1) = -0.612473
asymptotic p-value 0.8656
Step 3: testing for a unit root in Var3
Augmented Dickey-Fuller test for Var3
including 5 lags of (1-L)peak
sample size 517
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: 0.002
lagged differences: F(5, 510) = 2.565 [0.0263]
estimated value of (a - 1): -0.0015613
test statistic: tau_c(1) = -0.535532
asymptotic p-value 0.8819
Step 4: testing for a unit root in Var4
Augmented Dickey-Fuller test for Var4
including 5 lags of (1-L)nbp
sample size 517
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: 0.001
lagged differences: F(5, 510) = 5.671 [0.0000]
estimated value of (a - 1): -0.0011618
test statistic: tau_c(1) = -0.431389
asymptotic p-value 0.9016
Step 5: testing for a unit root in Var5
Augmented Dickey-Fuller test for Var5
including 5 lags of (1-L)brent
sample size 517
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: 0.001
lagged differences: F(5, 510) = 1.759 [0.1196]
estimated value of (a - 1): -0.00386803
test statistic: tau_c(1) = -1.05127
asymptotic p-value 0.7369
Step 6: cointegrating regression
Cointegrating regression -
OLS, using observations 2008/01/02-2010/01/01 (T = 523)
Dependent variable: api2
coefficient std. error t-ratio p-value
---------------------------------------------------------
const -35.8323 1.81277 -19.77 3.20e-065 ***
base 1.58498 0.321094 4.936 1.08e-06 ***
peak -0.701765 0.225461 -3.113 0.0020 ***
nbp 0.848089 0.0617052 13.74 7.18e-037 ***
brent 0.686534 0.0279061 24.60 4.14e-089 ***
Mean dependent var 109.5593 S.D. dependent var 35.61656
Sum squared resid 16623.86 S.E. of regression 5.665015
R-squared 0.974895 Adjusted R-squared 0.974701
Log-likelihood -1646.637 Akaike criterion 3303.274
Schwarz criterion 3324.571 Hannan-Quinn 3311.615
rho 0.946380 Durbin-Watson 0.103074
Step 7: testing for a unit root in uhat
Augmented Dickey-Fuller test for uhat
including 5 lags of (1-L)uhat
sample size 517
unit-root null hypothesis: a = 1
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: -0.001
lagged differences: F(5, 511) = 3.361 [0.0054]
estimated value of (a - 1): -0.0533006
test statistic: tau_c(5) = -3.60562
asymptotic p-value 0.2762
There is evidence for a cointegrating relationship if:
(a) The unit-root hypothesis is not rejected for the individual
variables.
(b) The unit-root hypothesis is rejected for the residuals (uhat) from
the cointegrating regression.
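Reading the output above through that rule: condition (a) holds, since all
five asymptotic p-values are well above 0.05, but the test on uhat (p-value
0.2762) also fails to reject, so condition (b) is not met and this output
shows no evidence of cointegration at conventional significance levels. A
small helper sketching the decision rule (illustrative Python, not gretl
code; `alpha` is the chosen significance level):

```python
def engle_granger_verdict(adf_pvals, uhat_pval, alpha=0.05):
    """Apply conditions (a) and (b) above: cointegration is supported when
    the unit-root null is NOT rejected for every individual series but
    IS rejected for the residuals uhat."""
    levels_nonstationary = all(p > alpha for p in adf_pvals)  # condition (a)
    residuals_stationary = uhat_pval < alpha                  # condition (b)
    return levels_nonstationary and residuals_stationary
```

With the p-values printed above, the function returns False.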
13 years, 2 months

Excel VBA + libgretl API
by MB

Hi,
Is there any example Excel VBA code around that demonstrates how to use the
gretl API from Excel VBA, i.e. shift data from VBA to the API, have it run a
regression and a few diagnostics, and then return the results to VBA?
Thanks!
13 years, 2 months