These people were from the State Planning Agency. They told me that they
have about 60 series (with varying degrees of collinearity) and that they
use SPSS to do principal components analysis to create regional
development indices (Turkey has 81 provinces). I am not very familiar
with PCA, but I can call them and learn more.
I think you just need the eigenvectors. Put all the series into a matrix,
get the eigenvectors of its correlation (or covariance) matrix, peel off
the first few, and put these into series; those are the principal
components. Estimate the model, and perform the inverse transformation on
the coefficients to get back to the coefficients in the original data
rotation, if desired.
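In hansl, that recipe might look something like the following (a minimal
sketch: the series names and the choice of three components are
placeholders, and standardizing first amounts to working from the
correlation matrix). I believe gretl's pca command will also do most of
this for you.

  list Xlist = x1 x2 x3 x4          # stands in for the ~60 series
  matrix X = {Xlist}
  X = (X .- meanc(X)) ./ sdc(X)     # standardize: use the correlation matrix
  matrix V
  matrix lam = eigensym(X'X, &V)    # eigenvalues come back in ascending order
  scalar k = cols(V)
  matrix PC = X * V[, k-2:k]        # scores on the three leading eigenvectors
  series pc1 = PC[,3]               # largest eigenvalue is last, hence column 3
  series pc2 = PC[,2]
  series pc3 = PC[,1]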
>> 2)- Some students suggested (and all others agreed) that it
would be
>> very useful to have a predict command, which will provide predicted
>> values as well as slopes (given Xs) for various nonlinear models such
>> as polynomial regressions, logit, probit etc. I think this could be
>> nice to have as a command as well as a GUI entry next to the forecast
>> item. Maybe a small goodie to consider for the 2.0 release? They said
>> Stata has this.
>
> I don't see what the difference is between "predicted values"
> and what we offer already (in sample fitted values and
> out-of-sample forecasts). Can you expand on what you mean?
Now, this may be a case of my not knowing how to use gretl fully in this
context. The issue arose on two occasions:
(1) I had a polynomial regression, and I was showing them how to enter
from the GUI a command something like:
prediction = $coeff[1] + $coeff[2]*x + $coeff[3]*x^2
(2) I was showing an ordered logit example and I had long commands like:
pcut0 = 1 / (1 + exp(-$coeff[1] - x*$coeff[2]) + exp(-$coeff[1] - x*$coeff[3]))
pcut1 = exp(-$coeff[1] - x*$coeff[2]) / \
        (1 + exp(-$coeff[1] - x*$coeff[2]) + exp(-$coeff[1] - x*$coeff[3]))
pcut2 = exp(-$coeff[1] - x*$coeff[3]) / \
        (1 + exp(-$coeff[1] - x*$coeff[2]) + exp(-$coeff[1] - x*$coeff[3]))
...and they said Stata (supposedly) has a command where you enter x
and get the prediction and slope for different models :-P
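For what it's worth, in case (1) the slope comes almost for free, since
the derivative of the quadratic uses the same coefficients. A minimal
hansl sketch (y and x are hypothetical):

  series xsq = x^2
  ols y const x xsq
  series prediction = $coeff[1] + $coeff[2]*x + $coeff[3]*xsq
  series slope = $coeff[2] + 2*$coeff[3]*x    # d(prediction)/dx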
Indeed Stata does, and it is very, very slick. I think it has taken the
Stata team a long time to get it to the point it is at now. That said,
it's not very hard to do in gretl on a model-by-model basis. See chapter
16 of http://learneconometrics.com/gretl/using_gretl_for_POE4.pdf for some
simple examples. The ability to jump seamlessly between matrices and gretl
results makes it amazingly easy to use, and the chapter includes a few
examples of computing marginal effects, and even standard errors for the
average marginal effects. A more competent programmer could do something
more elegant and general, but it's not very hard to do on a case-by-case
basis.
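To give the flavor of what the chapter does, here is a probit sketch
(y, x1 and x2 are hypothetical; standard errors would take a delta-method
step on top of this):

  list X = const x1 x2
  probit y X
  series idx = lincomb(X, $coeff)              # the index x'b, observation by observation
  scalar ame1 = mean(dnorm(idx)) * $coeff[2]   # average marginal effect of x1
  printf "AME of x1 = %g\n", ame1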
Although, as Talha says, a 'button' or function is nice, since most users
don't appreciate the challenge of computing these the way some of us do... :)
One of the things that I've noticed about Stata's development is that they
have expanded many of the commands to encompass almost every model and they
continue to add functionality to them with each new version. For instance,
predict is used to get in-sample model predictions, residuals, variances
(e.g., from arch and garch), and to generate both static and dynamic
forecasts. margins computes different types of marginal effects (discrete,
continuous, average, at specific points, at the averages, and so on
depending on the options used). In fact, these commands do so much that
they can be a little hard to use correctly, IMO--for instance, it took
some time to figure out exactly how, and which, marginal effects were
being computed for the qualitative choice models.
So, that presents something of a design choice. Should one use commands
that apply to all circumstances (even though they may work differently
under the hood, depending on the model being estimated), or should
estimation routines have their own sets of postestimation commands for
tests, marginal effects, and so on? I don't know the answer to this.
Stata flirts with both models: ubiquitous (margins, predict) and model
specific (estat postestimation commands).
Open source suggests perhaps more modularity than one gets with
proprietary software like Stata (though nearly everything Stata does is
executed in .do or .ado add-ons). What we discussed in Torun was the idea
that Allin and Jack would work on the back-bone and that others would try
to develop the expertise with the bundle concept to add specific
functionality. So, the question is: is enhancing prediction or marginal
effects a back-bone issue or an add-on? (I'm not sure.)
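To illustrate the add-on flavor, here is the sort of thing a bundle could
wrap, using the quadratic example from earlier in the thread (a sketch
only; the function name and interface are made up):

  function bundle quad_predict (matrix b, series x)
      bundle ret
      ret.prediction = b[1] + b[2]*x + b[3]*x^2
      ret.slope = b[2] + 2*b[3]*x
      return ret
  end function

  # after estimating the quadratic model:
  bundle r = quad_predict($coeff, x)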
My idea of a back-bone issue would be the introduction of factor variables
(version 2.0). With these, variables are defined as being continuous or
discrete, and they can be interacted in various combinations
(continuous-discrete, continuous-continuous, and discrete-discrete) very
easily within Stata's .do files. For us, this might solve an issue with
the handling of missing values (0 or NaN) and would permit a very
straightforward way to interact variables and compute marginal effects in
interaction models. Just a thought....
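In the meantime, the interaction itself is easy enough to do by hand;
what factor variables would buy us is the bookkeeping. A sketch with a
0/1 dummy d and a continuous x (names hypothetical):

  series d_x = d * x
  ols y const x d d_x
  series me_x = $coeff[2] + $coeff[4]*d    # the marginal effect of x depends on d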
Cheers,
Lee
--
“An expert is a person who has made all the mistakes that can be made
in a very narrow field.” - Niels Bohr (1885-1962)
--
Lee Adkins
Professor of Economics
lee.adkins@okstate.edu
learneconometrics.com