simultaneous equations in gretl
by Allin Cottrell
Hello all,
Up till now, gretl has supported only three-stage least squares for
estimating systems of simultaneous equations. Now, in CVS and in
the win32 snapshot available from sourceforge, I have added support
for FIML and LIML (along with automatic testing of over-identifying
restrictions). I have also added the Hansen-Sargan test for
over-identifying restrictions, for 3SLS and SUR.
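For anyone who wants to try this, here is roughly what a FIML system
block looks like in script terms (a sketch based on Klein Model 1,
using the variable names in the klein.gdt file shipped with gretl;
check the manual for the exact syntax):

open klein.gdt
genr W = Wp + Wg    # total wages = private plus government wages
genr time           # time trend for the wage equation
system method=fiml
 equation C 0 P P(-1) W        # consumption
 equation I 0 P P(-1) K(-1)    # investment
 equation Wp 0 X X(-1) time    # private wages
 identity P = X - T - Wp
 identity W = Wp + Wg
 identity X = C + I + G
 identity K = K(-1) + I
 endog C I Wp P W X K
end system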
I'm looking for help in testing these new additions. I can
replicate the benchmark results for that workhorse, Klein's "Model
1", but I'm finding it hard to find other benchmarks to test
against. So if anyone has datasets on which they've estimated
equation systems using FIML or LIML with other programs, I'd be very
grateful if you could either:
* See if you can replicate the results using gretl, and let me know
what happens; or
* Send me a copy of the dataset and the output from the other
program, so I can try the replication.
Thanks very much for any assistance.
--
Allin Cottrell
Department of Economics
Wake Forest University, NC
19 years, 1 month
Forecasting with AR modeling
by Deepak Muricken
I would like to know how I could use gretl to make forecasts using AR
modeling under the following conditions:
1. The observations used for making the forecasts are both positively and
negatively biased.
2. The positive values of bias range from 28.1 to 1389, and the negative
values from -57 to -250.
I seem to get some result by:
1. importing the time-series data;
2. conducting AR modeling with AR-4 and MA-4;
3. adding the fitted values to the data set (by picking "add to data set ->
fitted values" from the model data menu);
4. doing AR modeling on the new data set (options AR-4 and MA-4).
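In script terms, I believe these steps amount to something like the
following (a rough sketch; "mydata.gdt" and the series name "y" are
placeholders for my actual file and variable):

open mydata.gdt     # step 1: import the time-series data
arma 4 4 ; y        # step 2: AR modeling with AR-4 and MA-4
genr yhat = $yhat   # step 3: add the fitted values to the data set
arma 4 4 ; yhat     # step 4: AR modeling on the new series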
I would appreciate any feedback.
Sincerely
Deepak Muricken
___________________________________________
Deepak Muricken
Renewable Resource Engineer I
Business Performance & Planning, SaskPower
Phone : (306)566-3136
Fax : (306)566-2916
Email : dmuricken(a)saskpower.com
19 years, 2 months
VEC
by alessandro_porpiglia
Hi, I am trying to estimate a Vector Error Correction model with gretl, but I
am not able to do it. Is there any package to download for this, or is there
some command that I don't know of?
Thank you all
19 years, 2 months
Information criteria in VARs
by jack
Allin,
I had a look at yesterday's additions in CVS, and they look good to me.
There's only one thing I'd like you to think about: most packages compute
the AIC and BIC criteria for VARs by using the average loglikelihood,
whereas we use the total loglik. After adjusting the correction factors
accordingly, this means that we end up with information criteria that are
exactly n times those from other packages (I checked with Stata and
Eviews).
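To spell out the difference (with logL the total log-likelihood, k the
number of estimated parameters and n the sample size):

  AIC as gretl computes it:       -2*logL + 2*k
  AIC as the others compute it:  (-2*logL + 2*k)/n

and analogously for the BIC, with log(n)*k in place of 2*k; hence our
values come out exactly n times theirs.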
Of course the difference is immaterial when it comes to the purpose these
statistics are designed for (i.e. model selection); also, info criteria are
consistently computed in gretl by using the total loglikelihood, so we're
just being consistent here. Still, a casual user might compare our
estimates with those from the competition (;-)) and mistakenly conclude
"gretl is broken".
Do you think that a word of warning in the docs is called for? I'm
sending this to the user list because I'd like to know what the others
think too.
Riccardo `Jack' Lucchetti
Dipartimento di Economia
Università di Ancona
jack(a)dea.unian.it
http://www.econ.unian.it/lucchetti
19 years, 2 months
Call for help with documentation
by Allin Cottrell
Since the last gretl release I have added a fair amount of new
functionality, notably:
* seasonal ARMA
* many new features for VAR analysis
* extension of the Johansen cointegration test to cover the
now-standard "five cases", with the option of including
centered seasonal dummy variables (see the sketch below)
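For quick reference, here is roughly how the five cases map onto
coint2 option flags in current CVS (a sketch, with three hypothetical
series y1, y2 and y3 and VAR lag order 2):

coint2 2 y1 y2 y3 --nc    # case 1: no constant
coint2 2 y1 y2 y3 --rc    # case 2: restricted constant
coint2 2 y1 y2 y3         # case 3: unrestricted constant (the default)
coint2 2 y1 y2 y3 --crt   # case 4: constant plus restricted trend
coint2 2 y1 y2 y3 --ct    # case 5: constant plus unrestricted trend

The --seasonals flag adds the centered seasonal dummies in each case.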
It's now time to pause and document the new stuff before it gets out
of hand. Some of the new features are pretty much self-explanatory,
and for those it's just a matter of updating the reference manual.
The case where I'm looking for some help is the Johansen test.
I'd like for the gretl manual to give a good explanation of the
significance of the 5 cases: when we use one case or another, what
are the implications for the analysis? What are we assuming?
Most of the accounts of this matter that I have found are either vague
or very technical. Ideally, for gretl we want an explanation
that is comprehensible to an intelligent undergraduate with a
knowledge of macroeconomics and some experience in econometrics, but
not necessarily a firm grasp of matrix algebra. Also ideally, this
account should include some macroeconomic examples (e.g., if you're
investigating purchasing-power parity then case X is probably what
you want, because...; if it's the consumption-income relationship
then case Y is likely to be relevant because...).
In other words, I'm looking for an account of the various
statistical options that runs in terms of their theoretical
suitability, given the nature of the long-run macroeconomic
equilibrium relationship on which the cointegration analysis is
supposed to shed light.
Any help on this would be much appreciated.
Allin Cottrell.
19 years, 2 months
Wishlist: Forecasting "Keynesian Distribution" - a superset of gretl, R & Fairmodel
by James.Callahan@CityofOrlando.net
Allin Cottrell wrote on 07/14/2005:
> As for forecasting, that's now pretty much done in the latest
> release. In gretl CVS we have seasonal ARMA and considerably
> enhanced support for VARs. We don't yet have a serious start on
> user-defined matrix manipulation, though many of the required basic
> functions are there in gretl_matrix.c.
>
> More generally, the question is, what level of sophistication is
> gretl aiming for? My original idea (as I recall) was that I wanted
> gretl to be an excellent tool for undergraduate-level econometrics.
> I think we've pretty much got there, but the target has moved.
> I think the notion is: If gretl provides a good interface for doing
> econometric work, why stop at the basics? The value of gretl, even
> to undergrads, will be enhanced if they can continue to use this
> program for professional work.
>
> "Shadowing or avoiding commercial products"
>
> What I have here is more of an attitude than a definite plan, but
> I'd like for gretl to be able to do much of what Eviews can, but
> better(!).
This is my itch.
I had hoped to be working on this nights & weekends, but a few
complications came up in the spring (a long story...).
What I would like to see is a macroeconomic forecasting environment that
is a superset of gretl.
My inspiration/itch is my experience at Chase Econometrics in the early
1980s, and what we could do, and were trying to do, on mainframe
virtual machines.
The environment had several elements:
1. XSIM - regression, multi-equation simulation/modeling, time-series
database, scripting language, report writer (similar to TROLL)
2. Chase Econometrics macroeconomic forecasts (US & International)
3. Chase Econometrics macroeconomic databases (US & International)
Rather than trying to enhance gretl into a scripting language, it would
seem -- from an armchair perspective (not hands-on at the moment) -- to
make more sense to call the gretl API from Python. Python could provide
the command line and GUI interface, as well as a scripting language,
while gretl would provide the statistical tools and objects.
Python could serve as the unifying frontend, scripting & GUI builder.
Python could also be used to call FORTRAN (compiled with an appropriate
gcc FORTRAN compiler) and R (see the appendix of the O'Reilly book
"Learning Python" for some useful tools for calling functions & objects
in other languages, and the RSPython package documentation for
interfacing with R).
http://www.omegahat.org/RSPython/
There is a new R package, called zoo, that simplifies the handling of
time series.
http://tolstoy.newcastle.edu.au/R/packages/04/0076.html
Python interfaces to a number of GUI toolkits, including Tkinter and
wxWidgets (wxPython).
http://wiki.wxpython.org/
Professor Roy Fair at Yale has his Fairmodel, which includes a US and a
multicountry (MC) macroeconomic model, and he regularly publishes
forecasts produced with the models. He also makes available all of the
data used to estimate the models.
http://fairmodel.econ.yale.edu/main2.htm
The Fairmodel runs in its own FORTRAN program, the Fair-Parke (FP)
program.
http://fairmodel.econ.yale.edu/fp/fp.htm
"The FP program can be downloaded in either FORTRAN code to be compiled on
the user's machine or in an executable form for PCs. The FORTRAN code is
not machine specific, and this allows the program to be compiled on a
variety of systems."
Incorporating FP would require the explicit permission of Professor Fair.
When I e-mailed Professor Fair almost two years ago there wasn't a
specific license, such as the GPL, for FP and the Fairmodel. For better
or worse, FP and the Fairmodel were downloadable without an explicit
legal contract -- I haven't checked whether that has changed.
Now, imagine a Linux live CD (modelled after Dirk Eddelbuettel's
"Quantian"). Imagine the hypothetical live CD known as the "Keynesian
Distribution", with a Python interface to gretl, Fair-Parke, the
Fairmodel, the R package zoo and, for good measure, QuantLib -- as well
as some of Dirk's Perl scripts for downloading financial data from
Yahoo.
http://dirk.eddelbuettel.com/quantian.html
http://www.quantlib.org/
http://dirk.eddelbuettel.com/code/yahooquote.html
Oh, and while we are daydreaming: in addition to just a raw dump of the
forecast, the forecast could be printed with one of the new Java report
writers (compiled with GCC's GCJ Java compiler) called from Python. Java
report writers include JasperReports and Eclipse/BIRT (search on Google).
HERE'S THE PAYOFF:
A professor teaching freshman macroeconomics could have students do
cookbook exercises such as estimating a consumption function or running a
predetermined macroeconomic simulation: "just type the commands in the
book."
A professor teaching a junior-year second course in macroeconomics could
have students do more open-ended exercises.
Students could take their "Keynesian Distribution" live CD toolkit with
them to internships in the US Congress, to the world of work, and even to
graduate studies. In the best-case scenario this might even lead to
economically literate, model-based discussions of macroeconomic policy
options....
FURTHER PAYOFFS:
Once you have a reliable source of macroeconomic forecasts, you can
build industry and regional models. For example, I never worked in the
US macro group at Chase Econometrics; I worked in industry-specific
groups that relied on the US and international macro forecasts (as well
as forecasts developed by the regional group). We used the
macro/regional databases and forecasts as a source of independent
variables, both for estimation and forecasting, for our
industry-specific models and forecasts. The Fairmodel is
extraordinarily well tuned and doesn't require scores of add-factors to
keep it sane (the dirty little secret of most macro models).
What took a 32-bit virtual machine on a mainframe can be done on a PC;
what cost $10,000 or more (in the early 1980s) can now be open source.
Now I work for a city government -- but this design has grown beyond
what the City needs, and thus beyond the scope of what I can officially
do during the day -- so it has been relegated to nights and weekends,
and hence remains undone: just the imagination of a person sitting in
an armchair.
As I haven't done any coding on this yet, I am too naive to realize the
numerous reasons why the "Keynesian Distribution" won't work.
If you know of an organization willing to sponsor this effort with a
"MacArthur grant", and/or an individual willing to work on it, please let
me know.
Jim Callahan
Management, Budget & Accounting
City of Orlando
(407) 246-3039 office
(407) 234-3744 cell phone
19 years, 2 months
Re: Wishlist Keynesian distribution Fw: [Dynare] dynare version 3.046
by Itzak Ademic
Does anyone use, or just know of, the Dynare simulation package?
It seems to have two versions -- a Matlab/Octave(?) suite
and a native Windows one.
Is it comparable to Fair-Parke?
How would it complement existing capabilities?
----- Original Message -----
From: Michel Juillard <Michel.Juillard(a)ens.fr>
To: <dynare(a)cepremap.cnrs.fr>
Sent: Wednesday, July 20, 2005 8:01 PM
Subject: [Dynare] dynare version 3.046
> Version 3.046 of Dynare for Matlab is available for download from
> http://www.cepremap.cnrs.fr/dynare
>
> It fixes a series of bugs
> * correcting bug in fs2000a example
> * correcting several bugs in posterior distribution of various statistics
> * correcting inefficiency in simulation of second order approximated models
> * correcting the bug with a parameter called 'c' in deterministic models
>
> This version also introduces the possibility of supplying a Matlab function for
> computing the steady state of the model instead of relying on Dynare's
> nonlinear solver. See the manual under steady, stoch_simul, estimation or
> unit_root_vars and the example fs2000a_steadystate.m in examples/fs2000
>
> Best wishes
>
> Michel
>
> --
> Michel Juillard
> CEPREMAP, Paris Sciences Economiques,
> Universite Paris 8
>
> ________________________________________
> Dynare mailing list
> Dynare(a)pythie.cepremap.cnrs.fr
> http://pythie.cepremap.cnrs.fr/mailman/listinfo/dynare
19 years, 2 months
Re: gretl development roadmap
by Allin Cottrell
On Thu, 14 Jul 2005, Itzak Ademic wrote:
> I've just discovered GReTL.
> Before I plunge into the code,
> I wonder if there is a FAQ, plan or just an attitude to shadow, or avoid,
> existing products like TSP, FAME, RATS, TROLL, SAS, LISREL, Shazam...
>
> Why not contribute to R or Octave?
>
> Is there a wishlist for capabilities, broader directions, buglist?
Very good questions.
Up till fairly recently I've been "following my nose" in gretl
development, responding to user requests and to things I have found
interesting. This spring I went and visited gretl enthusiasts in
Italy, Spain and Poland, and I have a bit more direction. The main
short- to medium-term requests I heard related to better support for
forecasting, seasonal ARMA models, better support for VARs, and (a
bit more speculatively) a mechanism for manipulation of user-defined
vectors/matrices so people can roll their own estimators and tests
(perhaps along the lines of Doornik's Ox).
As for forecasting, that's now pretty much done in the latest
release. In gretl CVS we have seasonal ARMA and considerably
enhanced support for VARs. We don't yet have a serious start on
user-defined matrix manipulation, though many of the required basic
functions are there in gretl_matrix.c.
More generally, the question is, what level of sophistication is
gretl aiming for? My original idea (as I recall) was that I wanted
gretl to be an excellent tool for undergraduate-level econometrics.
I think we've pretty much got there, but the target has moved.
I think the notion is: If gretl provides a good interface for doing
econometric work, why stop at the basics? The value of gretl, even
to undergrads, will be enhanced if they can continue to use this
program for professional work.
"Shadowing or avoiding commercial products"
What I have here is more of an attitude than a definite plan, but
I'd like for gretl to be able to do much of what Eviews can, but
better(!). I have great respect for RATS. I have used RATS (under
dosemu) to test the output of some gretl routines. I don't think
gretl (at least in the medium term) can do all that RATS does, but I
do think we can have a greatly superior user interface.
"Why not contribute to R or Octave?"
Fair question. Again, I have great respect for R, but it has a hell
of a learning curve; I really blanch at the idea of asking my
students to work with R. And personally, I find I always have to
look everything up in the docs before I can work with data in R
(though I have done my homework from time to time, and have used R
output as a benchmark against which to test gretl for some
functionality). So there's ease of use. But also, gretl is ahead
of R in some areas of econometric functionality. Last time I
checked, R did not support FIML estimation of systems of equations.
Octave is a good program, but it's not particularly oriented towards
econometrics. I would happily "steal" Octave's matrix manipulation
code, except that Octave is written in C++ while gretl is in C
(though this need not be an absolute barrier).
I'll try to summarize: My (immodest) notion is that gretl should be
an extremely user-friendly econometrics (and to some extent, general
statistics) package with true cross-platform GUI functionality and
also solid support for scripting. It should be a flagship for
free, open-source software in the statistical domain -- a program
which many users might prefer to expensive proprietary alternatives
(I have some evidence that this is true already).
I'm cc'ing this to the gretl users mailing list
http://ricardo.ecn.wfu.edu/mailman/listinfo/gretl-users
If you're sufficiently interested, I'd urge you to continue the
correspondence there. One of the things I learned this spring is
that at this point gretl is becoming a community project, and it
would be good to have others put in their views.
Allin Cottrell
19 years, 2 months
using gretl for AR(1) modeling of daily time series
by Deepak Muricken
Hi,
I am trying to use gretl for modeling daily data as opposed to yearly
data. I feel that this is possible, as one of the sample datasets (file
data10-2, hourly load and temperature data) seems to be compatible with
time-series analysis; those data are hourly observations.
What I have is daily observations, which I wish to use in the format
given below.
obs (meaning)
8:01 (1st of August)
8:02
...
8:30 (30th of August)
8:31 (31st of August)
1:01 (1st of September)
1:02
...
1:30 (30th of September)
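From the manual I gather that, rather than a coding like the above,
something along these lines might declare the data as a daily time
series (a sketch, where the start date and the series name y are
placeholders for my actual values):

setobs 7 2005-08-01 --time-series   # daily data, 7-day calendar
arma 1 0 ; y                        # then fit e.g. an AR(1) model to y

but I am not sure this is right.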
I would appreciate any feedback.
19 years, 2 months