User-level model printing command
by Riccardo (Jack) Lucchetti

Dear all,
lately I found that when I write a script that does some kind of 
estimation, most of the time I have to write a long and boring function 
to display the results "nicely".
So I thought this kind of thing could be done once and for all via a 
command. The patch you'll find attached[*] implements a "modprint" 
command, which I believe will turn out to be useful to people like me, 
Ignacio, Gordon, Sven, Stefano, Franck etc.
In practice, once you have your estimates, you pack your estimated 
coefficients and their standard errors into an n x 2 matrix (call it X), 
store their names in a string (call it parnames) using commas as 
separators, and issue the modprint command as follows:
modprint parnames X
If you have additional statistics that you want printed, you collect them 
in a column vector (call it addstats), which you specify as a third 
argument.
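To make this concrete, here is a minimal sketch of the intended call 
pattern (my own toy example, not the attached script: the ols step just 
produces some estimates to print, using one of the sample datasets 
shipped with gretl, and the modprint arguments follow the description 
above):

    open data4-1
    ols price const sqft --quiet
    matrix X = $coeff ~ $stderr        # n x 2: coefficients and std. errors
    string parnames = "const,sqft"     # comma-separated parameter names
    matrix addstats = { $ess; $T }     # optional extra statistics
    modprint parnames X addstats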
An example script is also attached, which should hopefully clarify what I 
have in mind. Bear in mind this is still preliminary work; my main idea as 
of now is to hear your comments.
Have fun!
---------------------------------------------------
[*] How do you apply the patch? Simple:
1) save the diff file somewhere. 
2) from the unix shell, go to your gretl source main directory (the one 
you run ./configure from)
3) be sure you have a fresh CVS source; run cvs up if necessary
4) issue the command
 	patch -p0 < /path/where/you/saved/the/diff/file/modprint.diff
5) run make etcetera
Riccardo (Jack) Lucchetti
Dipartimento di Economia
Università Politecnica delle Marche
r.lucchetti(a)univpm.it
http://www.econ.univpm.it/lucchetti

Changes to dummify
by Gordon Hughes

In August I raised the possibility of extending the dummify function 
to accommodate syntax such as
list dlist = dummify(x, n)
where n <= 0 means that no category is dropped, while n > 0 means 
that the n-th category is dropped.  For this to work, it would be 
necessary to require that x is a series, whereas in its current 
version dummify(X) will work with a list X.
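To make the proposal concrete, the extended syntax would read as 
follows (hypothetical, of course, since the second argument is not 
implemented yet; only the one-argument form works in current gretl):

    list dall = dummify(x, 0)     # proposed: n <= 0, keep all categories
    list drop3 = dummify(x, 3)    # proposed: n > 0, drop the 3rd category
    list dlist = dummify(x)       # current behaviour: drops the first category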
After some discussion I think this was put on the list of 
backward-incompatible changes, but it does not appear in the current 
version of the gretl wiki page that discusses such changes.  Is there a 
consensus that the extension is desirable, or are there reasons for 
not implementing it?
As with other things, my proposal is driven by the difficulties of 
programming functions.  The command "dummify x" will create a 
complete set of dummy variables.  The problem is that, within a 
function, it is quite difficult to turn these dummy variables into a 
list when you don't know how many values x may take, whether they are 
consecutive, and so on.  On the other hand, the command "list 
dlist=dummify(x)" will always drop the first category.  There is a 
clumsy way of generating the missing dummy variable using matrices, 
but it is inefficient and will fail in some circumstances.  Hence 
the more flexible syntax is much cleaner.
Gordon

Introducing the Turkish translation of gretl :-)
by Talha Yalta

Gentlemen:
I am happy to tell you that I have just sent the complete and brand
new tr.po file to Allin, so that he can use it to replace the existing
old and incorrect one in CVS. I have been translating the gretl UI
into Turkish over the summer, and it has actually been finished for
quite some time now; I have been testing it to make sure it is of
high quality.
Over the last several weeks, I have also translated almost the entire
gretl web site into Turkish, and that has now been uploaded to CVS as well.
I don't think most of the fellow developers (except Allin and Jack)
know me, so let me briefly introduce myself: I am a gretl user and a
follower of its development since my Ph.D. studies at Fordham
University in New York. After my graduation last year, I started
working as an assistant professor at TOBB University of Economics and
Technology in Ankara. Over the last few years, I have been instrumental in
testing the numerical accuracy of gretl on several fronts, including
linear regression, nonlinear regression, univariate summary
statistics, statistical distributions and the ARMA functionality. In
2007, I published a review of gretl, including its numerical accuracy, in
the Journal of Applied Econometrics (22 (4), 849-54, 2007). My newest
article is forthcoming in the International Journal of Forecasting
(available online, doi:10.1016/j.ijforecast.2008.07.002) and looks at
the Box-Jenkins methodology from a computational perspective. It also
compares ARIMA results from several econometric packages, including
gretl.
Aside from testing the accuracy of gretl on different fronts, from now
on I will also be the maintainer of the Turkish translation and the
Turkish web pages. I am very happy and honored to finally join you
guys as a formal gretl developer.
It is my understanding that, with the upcoming version 1.7.9, gretl
will now have support for two new languages, namely Russian and
Turkish. Step by step, I think we are getting close
to world domination. This is pretty exciting. I would like to thank
all the fellow developers, and Allin in particular, for creating what
has become one of the best econometric packages around. Let's keep up
the good work.
Cheers
A. Talha YALTA
-- 
"Remember not only to say the right thing in the right place, but far
more difficult still, to leave unsaid the wrong thing at the tempting
moment." - Benjamin Franklin (1706-1790)
--

Re: Changes to dummify
by Gordon Hughes

Responding to Allin's suggestion:
 > for series x, in which case all the dummies are generated; and
 > also support
 >
 >   list L = dummify(x, val)
 >
 > which treats 'val' as the omitted category.  (That is, the second
 > argument to dummify() is optional).
 > That leaves a question: is it easier/more intuitive to read 'val'
 > as denoting the val'th category when the distinct values of x are
 > ordered, or as the condition x == val?  I tend to think the latter
 > is better.
I agree.  It is very difficult to ensure that the first option 
produces predictable results in a function context when there might 
be missing categories.  Hence, in practice one would have to adopt 
something like "list DL = dummify(x, max(x))".
However, without wanting to raise unnecessary difficulties, won't 
this imply a change in the use of "dummify(x)" as an argument in, 
say, OLS, as in "OLS y Z dummify(x)"?  At the moment this seems to 
drop one category automatically, so that list Z can contain const.  I 
assume that this is the backward-incompatible change, and that you let 
the OLS function deal with linear dependence between Z and dummify(x).
Gordon

mle output
by Gordon Hughes

Would it be possible to have more control over the output from mle?
My preferences are as follows:
A.  For normal output (not "--verbose" or "--quiet"): print a simple 
iteration log giving the iteration number and the value of the 
log-likelihood at each iteration.
B.  An option to switch off printing of the results for normal output 
- useful for functions that access mle, because it may be necessary to 
generate a results table containing transformed variables (see the 
sketch just after this list).
C.  For direct use of mle it is helpful to know the number of 
function and gradient evaluations, but again I would prefer to be 
able to suppress this for functions that embed it.
D.  An accessor $niter which gives the total number of iterations for 
the last model.
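As a sketch of the use case in B (the function, names and data are 
invented, but the script should run as is): the function below wants to 
report the standard deviation rather than the variance, so the standard 
mle printout of m and s2 is largely redundant.

    function void normfit (series y)
        scalar m = mean(y)
        scalar s2 = sd(y)^2
        mle logl = -0.5*ln(2*$pi) - 0.5*ln(s2) - 0.5*(y - m)^2/s2
            params m s2
        end mle
        scalar s = sqrt(s2)
        scalar se_s = 0.5*$stderr[2]/s   # delta-method standard error
        printf "mean %10.4f (%.4f)\n", m, $stderr[1]
        printf "sd   %10.4f (%.4f)\n", s, se_s
    end function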
I realise that some of the output may be generated by standard 
settings for either nls or the BFGS maximizer, so any changes might 
apply to those commands rather than mle.
Gordon

More about the test for analytical derivatives
by Gordon Hughes

As a result of some testing of my panel stochastic frontier function, 
I would like to give a warning about the reliability of the test for 
the accuracy of analytical derivatives.  In general it is very 
useful, but I have found that I can provoke 
failures in the test simply by changing the starting values of my 
parameters.  I am 99.9% sure that my analytical derivatives are now 
correct (though they are horribly tedious to program), because (a) I 
can reproduce results generated by Stata to the 6th significant 
figure using both numerical and analytical derivatives, and (b) Stata 
reports identical gradients for the parameters at the same starting values.
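For anyone who has not used them: analytical derivatives are supplied 
to mle through one deriv line per parameter, and it is these 
expressions that get compared with numerical values at the starting 
point.  A minimal, self-contained illustration on a toy normal 
log-likelihood (my own example, nothing to do with the frontier model 
discussed here):

    nulldata 100
    set seed 371
    series y = 1 + normal()
    scalar m = 0
    scalar s2 = 1
    mle logl = -0.5*ln(2*$pi) - 0.5*ln(s2) - 0.5*(y - m)^2/s2
        deriv m = (y - m)/s2
        deriv s2 = -0.5/s2 + 0.5*(y - m)^2/s2^2
    end mle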
From past experience I know that the log-likelihood function for 
complex stochastic frontiers is not globally concave, and getting good 
starting values is sometimes very difficult.  Part of the reason is 
that the log-likelihood involves evaluating the cumulative 
normal at what may be extreme values, which can cause degeneracy and 
force arbitrary numerical fixes.
Stata frequently reports that it is in a non-concave part of the 
likelihood function.  What I think happens in gretl is that the 
minpack routine for numerical derivatives generates arc 
(finite-difference) values that differ from the analytical derivatives 
because the slope changes abruptly.  As a consequence the failure 
message is not necessarily correct or helpful, because the difference 
may be due to discontinuities or lack of concavity in the log-likelihood.
There is a possible solution, but there may be restrictions on 
implementing it.  Rather than stopping when the numerical and 
analytical derivatives appear to differ, why not continue, but 
using the numerical derivatives and reporting that fact?  Then, every 5 
or 10 iterations, test whether they still differ.  If the derivatives 
still appear to differ, continue with the numerical derivatives; if 
they are now essentially identical, the program should switch to the 
analytical derivatives.
Gordon

gretl Conference 2009: 2nd announcement
by Ignacio Diaz-Emparanza

Dear colleagues:
[sorry for the cross-posting with gretl-users list]
this is to inform you that we have already opened the registration page for 
our conference. As a reminder, the web page is at
http://www.gretlconference.org
At the same time, I would like to ask for your help in announcing the 
conference. We have prepared a poster announcing the meeting, which you may 
download at
http://www.gretlconference.org/Poster.pdf
and a "Call for Papers", which is at
http://www.gretlconference.org/CallForPapers.pdf
Please feel free to print them and post them where interested users 
at your institution may see them.
We have also prepared an email to send to econometrics-related lists and other 
colleagues. It appears just below; you can also help by sending it to any lists 
you know of.
Thank you for all your help.
--------------------------------------
gretl Conference 2009
Bilbao, May 28-29
Dear colleague:
Some time ago, some users on the gretl e-mail list raised the possibility of 
organizing a first international meeting of gretl users. From the start, the 
idea was that the meeting should make a relevant contribution to scientific 
development and should also be a forum in which the use of free software is 
advocated. With the valuable support of the School of Business Administration 
and Economics, the Department of Applied Economics III (Econometrics and 
Statistics) and the ERG research group, all from the University of the Basque 
Country (Bilbao, Spain), as well as the gretl development team, a small group 
of professors has decided to put this idea into practice and organize the 
event. The meeting will have the usual structure of a scientific conference 
and will take place at the School of Business Administration and Economics of 
the University of the Basque Country in Bilbao, Spain, on May 28-29, 2009. 
The scientific programme will include several invited sessions organized by 
the original gretl developers, an invited session with a talk given by 
Prof. D.S.G. Pollock, and several sessions in which authors interested in 
contributing will have the opportunity to present either an oral or a poster 
contribution. Topics of interest for these sessions include:
- Teaching Econometrics and Statistics using free software
- Implementation of some Econometric techniques in gretl
- Methodological papers with a strong computational emphasis
- Applications using gretl
- Developing gretl
The web site for the conference is at http://www.gretlconference.org
Important dates:
  Submission Deadline        January 15, 2009
  Notification to Authors    February 27, 2009
  Registration Deadline      April 30, 2009
  Conference                 May 28-29, 2009
We encourage gretl users to participate in this first meeting so that we all 
can contribute to the expected success of the event.  At the same time, we 
would like to ask you to pass this announcement on to anyone you think may 
have an interest in the use and advantages of this econometric package. 
Best regards and see you in Bilbao!!
Local Organizing Committee
gretl Conference 2009 
-- 
Ignacio Diaz-Emparanza  
DEPARTAMENTO DE ECONOMÍA APLICADA III (ECONOMETRÍA Y ESTADÍSTICA)                                        
UPV/EHU
Avda. Lehendakari Aguirre, 83 | 48015 BILBAO
T.: +34 946013732 | F.: +34 946013754
www.et.bs.ehu.es 

Source for reference files
by moradan

Hello.
A few days ago I started translating the gretl command reference into Russian, 
but I don't know which source file all those help files are built from (as far 
as I know, they are all generated from one place?). For now the amount of 
translated text is small enough to just copy and paste into the proper place, 
but later on that would be too many keys to press %-)
Could you please point me to that source file?
P.S. I'm the second person whom Alexander Gedranovich mentioned.
Ivan.

mle derivatives
by Gordon Hughes

On Tue, 23 Sep 2008, Gordon Hughes wrote:
 >> As I understand, when mle is given derivatives of the
 >> log-likelihood, it checks the computed derivatives against its
 >> own numerical derivatives at the first iteration.  If they are
 >> different, the program reports that there is a difference and
 >> stops.  This is a very useful test, but the diagnostic
 >> information could be a bit more helpful.
 >>
 >> Would there be any problem in reporting the computed derivatives
 >> and the numerical derivatives, because this would help the
 >> programmer identify where the error in programming the
 >> derivatives has occurred?
 >  Yes, we could offer some more information; I'll think about how to
 >  do that.  Our check is by courtesy of the minpack function chkder
 >  -- you can read the documentation for that function at
 >  http://www.netlib.org/minpack/chkder.f
I assume that you test the contents of the array err() generated by 
chkder.  Thus, the simplest option would seem to be to print out 
err(), with text explaining that each value is a measure, on a 0-1 
scale, of how likely it is that the corresponding analytical 
derivative is correct.  Since this is basically an error message, it 
is not worth going to too much trouble over formatting err().
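(As an aside, for anyone who wants to see roughly what such a check 
does, the idea can be mimicked in a script with a crude central 
difference.  This is only an illustration of the principle, with 
made-up data, not the minpack code gretl actually uses:)

    # toy example: normal log-likelihood in the mean, variance fixed at 1
    nulldata 50
    set seed 20080923
    series y = 2 + normal()
    scalar theta = 1.5                  # point at which to check the gradient
    scalar h = 1.0e-6
    scalar ana = sum(y - theta)         # analytical d(logl)/d(theta)
    scalar lp = sum(-0.5*(y - theta - h)^2)
    scalar lm = sum(-0.5*(y - theta + h)^2)
    scalar num = (lp - lm)/(2*h)        # central-difference approximation
    printf "analytical %.6f vs numerical %.6f\n", ana, num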
Gordon