add time-observations for panel data
by Artur Tarassow

Hi all,
I would like to add additional time observations (n observations) for each
unit in a panel. I've tried to use the <dataset addobs> command,
but instead it adds a whole new unit:
<hansl>
clear
set verbose off
nulldata 6 -p
genr index
series unit = cum(((index-1) % 3) == 0)
series time = vec(cum(mshape(ones(6,1),3,2)))
setobs unit time --panel-vars
dataset addobs 3 # stacks time-series of length 3 for a 'new' unit
print dataset -o
</hansl>
Is there a 'clever' way to expand the time-dimension while leaving the
unit-dimension untouched?
Thanks,
Artur
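One possible workaround (an untested sketch, not a confirmed gretl recipe): stash the existing series in a matrix, rebuild a panel with more periods per unit using nulldata with the --preserve option, and then refill the old observations from the matrix. The sizes N and T and the matrix name olddata below are illustrative, matching the 2x3 example above.
<hansl>
# Untested sketch: enlarge the time dimension by rebuilding the panel
N = 2                          # units in the example above
T = 3                          # current periods per unit
Tnew = T + 3                   # desired periods per unit
matrix olddata = {unit, time}  # stash any series to be carried over
scalar nn = N * Tnew
nulldata nn --preserve         # --preserve keeps matrices alive
genr index
series unit = 1 + floor((index-1)/Tnew)
series time = index - (unit-1)*Tnew
setobs unit time --panel-vars
</hansl>
The refilling step (copying olddata back into the first T rows of each unit's block) is left out here, but should be a matter of indexing into the new series.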

sourceforge
by Stefano

dear all,
I am experiencing a weird problem with sourceforge: very simply, I
cannot reach it from my Win10 PC at work, neither with the default
browser (Firefox) nor with MS Edge. Otherwise (mobile on WiFi, home PC)
I have no access problems. I tried clearing Firefox's cookies and cache
and fiddling with the security options, with no results. Any hints?
thanks,
Stefano
--
________________________________________________________________________
Stefano Fachin
Professore Ordinario di Statistica Economica
Dip. di Scienze Statistiche
"Sapienza" Università di Roma
P.le A. Moro 5 - 00185 Roma - Italia
Tel. +39-06-49910834
fax +39-06-49910072
web http://stefanofachin.site.uniroma1.it/
--

Re: Reading error from excel files created by Ox
by Fred Engst

Hi Allin,
I did a test on file formats created by "Ox Console version 8.02 (OS_X_64/U)", reading them with gretl 2020a, and I found many inconsistencies in Ox.
From files created by the "savemat" command of Ox, gretl was able to read the .csv, .dta and .xlsx files fine, but not the .xls file ("Failed to get workbook info"). Stata 10 was also able to read the .dta file.
From files created by the Ox database class "Save", gretl was able to read the .xlsx and .csv files but not the .xls file ("Failed to get workbook info"), nor the .dta file ("This file does not seem to be a valid Stata data file"). Neither was Stata 10 able to read this .dta file.
So the problem is with Ox, it seems. Since Ox claims the file is a ".dta: Stata 11 data file (version 114)", I can't test that claim with Stata 10.
From files created by the Ox database class "SaveXlsx", gretl was able to read only the .xlsx file. For the .dta file and the .xls file it gives the same errors as before. For the .csv file it reads garbage, as seen in string_table.txt:
String code table for variable 1 (PK):
1 = '%SÂx'
2 = 'ÝÒùlT/wd±Ãv)
[ÎQv`2ÀåNë£)?ã!{±~5\sé]ª4xÐÙúZ±6<lsg%AJîö³®¡"£¥H¹Ï7N}U'
3 = 'òRŠrð?%ëÑäöï0U
l 1éó»8ð¥ð÷'
4 = ''pšê'J¹&ö*C\ÔÐ§äo£éyŠXgZ 3¥üz2#uûíöÃwÔ
R+¬:Z'
5 = 'áh¯@5'Ïà¥mÃwb^fvé.ÈKbgÙn|Èõ!'
6 = 'yÕPè8i°br8"y_f4(<kµ¿ÜêüÐ8s"KÐHàß>2þÚýçªÖ_FËoÆgñÓ§ÂÕAÔÅ;PK'
7 = 'ÎóÜÌ×Zîïàõáþiyµö|®Ê)Á¿v/PK'
8 = 'ëöÖ)BQ.Ô¡°'
9 = '^|cfGQÁnÛÀhÈºxRðþöŒy’
So I will only use the Ox database class "Save" from now on.
I don't think you need to do any work on this.
Fred
>
> On Mon, 24 Feb 2020, Fred Engst wrote:
>
>> Thank you Allin once again! Yes, I can now read .xlsx files created
>> by Ox in gretl.
>
> Hmm, I'm checking for problems in current gretl, as we prepare for a
> new release, and I see that the special case I introduced to handle
> Ox-generated xlsx files has broken reading of some other xlsx files
> which I think are probably more idiomatic. So I'll have to
> reconsider my "fix" unless I can get everything working.
>
>> I'm not sure what's going on with OX. Not only was the excel file
>> format not standard, but both .csv and .dta files created by the
>> "savemat" statement in Ox are causing reading errors in gretl.
>
> I tried generating csv and dta files from Ox (oxconsole7 on Linux)
> and found that these were read OK by gretl. Maybe you could send me
> some examples that don't work?
>
> Allin

Re: Reading error from excel files created by Ox (Allin Cottrell)
by Fred Engst

Thank you Allin once again! Yes, I can now read .xlsx files created by Ox in gretl.
I'm not sure what's going on with OX. Not only was the excel file format not standard, but both .csv and .dta files created by the "savemat" statement in Ox are causing reading errors in gretl.
Now that I have at least one file format that can be shared between Ox and gretl, I'm happy.
Fred
>
> On Fri, 21 Feb 2020, Allin Cottrell wrote:
>
>> On Fri, 21 Feb 2020, Fred Engst wrote:
>>
>>> If I save [a matrix from Ox] in xlsx format, gretl skips the
>>> header and gives me generic variable names as in: v1, v2, …
>>>
>>> Any suggestion for what I should do?
>>
>> It seems like we should be able to pick up column headings from such a file
>> but apparently Ox uses a somewhat unorthodox representation. If you open the
>> Ox-generated xlsx file in LibreOffice then save it, still in xlsx format,
>> gretl will find the headings OK. Maybe we can figure out how to handle the
>> Ox-generated case, or maybe not...
>
> OK, I think we're now able to handle the header strings in
> Ox-generated xlsx files (updates in git and snapshots).
>
> Allin

Re: MLE and binary (scalar) min and max operators
by Alecos Papadopoulos

Hi Sven, thanks for the input; those are plausible concerns.
The following seems to work just fine (in the sense that it gave results
that were validated with other estimation methods). Apart from the xmax
operator, it has a density with branches.
<hansl>
catch mle logl = check ? log(A1+A2+A3) : NA
    series res = Depvar - lincomb(Reglist, bcoeff)
    scalar m = xmax(a, b)
    series dens1 = (res >= -b) * (res <= a - m)
    series dens2 = (a - m < res) * (res <= m - b)
    series dens3 = (res > m - b) * (res <= a)
    series d2 = (a - res)/(a*b)
    series d4 = (b + res)/(a*b)
    series A1 = dens1 * d4
    series A2 = dens2 * (1/m)
    series A3 = dens3 * d2
    scalar check = (a > 0) && (b > 0)
    params bcoeff a b
end mle
</hansl>
--
Alecos Papadopoulos PhD
Athens University of Economics and Business
web: alecospapadopoulos.wordpress.com/
skype:alecos.papadopoulos
Am 20.02.2020 um 21:37 schrieb Alecos Papadopoulos:
> Good evening. Will the mle command in gretl have any compatibility
> problem if some of the parameters under estimation also appear in the
> likelihood inside binary min and max operators?
>
Spontaneously I'm skeptical, not because of any gretl limitations, but
because a min/max choice always means a discontinuity where derivatives
break down etc. So it doesn't look like a well-behaved problem for
"smooth" optimization. Maybe you would need a kind of switching algorithm.
But I may well be missing something, other input much appreciated.
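If the kink does turn out to bite, one standard trick (a sketch of my own, not something from this thread) is to replace the hard max with a smooth log-sum-exp approximation, which keeps the criterion differentiable:
<hansl>
# For a largish scale k, ln(exp(k*a) + exp(k*b))/k is close to
# xmax(a,b) but differentiable in both a and b.
scalar a = 0.3
scalar b = 0.7
scalar k = 50
scalar smax = ln(exp(k*a) + exp(k*b))/k
printf "xmax = %g, smooth approximation = %g\n", xmax(a, b), smax
</hansl>
(For large k times the arguments, exp() can overflow, so in practice one subtracts the larger argument before exponentiating.)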
cheers
sven

A small administrative issue with gretl 2019d
by Alecos Papadopoulos

I run gretl 2019d for Windows, 64-bit.
In previous versions the order of opening an existing .inp file and an
existing .gretl file did not matter.
One could open a .gretl file first, then open a .inp file, choose "No"
in answer to the question "Start a new gretl instance?", and the two would
be linked: the script in the .inp file could draw data from the .gretl
file immediately, without containing a command to that effect.
But equally, one could start by opening the .inp file first, and
things worked the same way.
Not in the 2019d version, though. Here it appears to work only if one
opens the .inp file first and then the .gretl file, not the other
way around.
Again, I am not referring to the case where one has opened a .gretl
file and then creates a new .inp file and writes a script. The issue
appears only when a .inp file with a script in it already exists.
--
Alecos Papadopoulos PhD
Athens University of Economics and Business
web: alecospapadopoulos.wordpress.com/
skype:alecos.papadopoulos

A simple Real Business Cycle model with gretl
by Mario Marchetti

Good morning everyone,
I'm Mario and, for fun and study (especially to practice hansl), I am trying to adapt a Matlab script to the hansl language.
The script was written by Ryo Kato and solves a simple RBC model.
The code is available here: http://www.ryokato.org/genmac/RBC1.m
A first (spartan) draft of the code I wrote in hansl follows (also available on github: https://github.com/mariometrics/RBCgretl):
<hansl>
####------------------------------------------------------------------------#####
set echo off
set messages off
## Mario Marchetti 23-02-2020
## Basic RBC model ##
## Adapted in hansl language from the code written in Matlab by Ryo Kato in 2004
## ------------------- [1] Parameter proc ------------------------
sigma = 1.5 # CRRA
alpha = 0.3 # Cobb-Douglas
myu = 1 # labor-consumption supply
beta = 0.99 # discount factor
delta = 0.025 # depreciation
lamda = 2 # labor supply elasticity >1
phi = 0.8 # AR(1) in tech
param = {sigma,alpha,myu,beta,delta,lamda,phi}
## --------------------- [2] Steady State proc >> -----------------------
# SS capital & ss labor
# (1) real rate (By SS euler)
kls = (((1/beta)+delta-1)/alpha)^(1/(alpha-1))
# (2) wage
wstar = (1-alpha)*(kls)^alpha
# (3) Labor and goods market clear
clstar = kls^alpha - delta*kls
lstar = ((wstar/myu)*(clstar^(-sigma)))^(1/(lamda+sigma))
kstar = kls*lstar
cstar = clstar*lstar
vstar = 1
Ystar = (kstar^alpha)*(lstar^(1-alpha))
ssCKoLY = {cstar,kstar;lstar,Ystar} # show SS values
## --------------------------[2] MODEL proc-----------------------------##
function matrix RBC(matrix *param,matrix *x)
sigma = param[1]
alpha = param[2]
myu = param[3]
beta = param[4]
delta = param[5]
lamda = param[6]
phi = param[7]
# Define endogenous vars ('a' denotes t+1 values)
la = x[1]
ca = x[2]
ka = x[3]
va = x[4]
lt = x[5]
ct = x[6]
kt = x[7]
vt = x[8]
ra = 0
rt = 0
# Eliminate Price
ra = (va*alpha*(ka/la)^(alpha-1))
wt = (1-alpha)*vt*(kt/lt)^alpha
# Optimal Conditions & state transition
labor = lt^lamda-wt/(myu*ct^sigma) # LS = LD
euler = ct^(-sigma) -(ca^(-sigma))*beta*(1+ra-delta) # C-Euler
capital = ka - (1-delta)*kt-vt*(kt^alpha)*(lt^(1-alpha))+ct # K-trans
tech = va - phi*vt
matrix optcon = {labor;euler;capital;tech}
return optcon
end function
function scalar RBCY(matrix *param,matrix *xr)
# GDP (Optional)
alpha = param[2]
vt = xr[3]
kt = xr[2]
lt = xr[1]
Yt = vt*(kt^alpha)*(lt^(1-alpha))
return Yt
end function
# Evaluate each derivate
matrix x = {lstar,cstar,kstar,vstar,lstar,cstar,kstar,vstar}
matrix xr = {lstar,kstar,vstar}
# Numerical jacobian
matrix coeff = fdjac(x, RBC(&param, &x))
matrix coeffy = fdjac(xr, RBCY(&param, &xr))
# In terms of # deviations from ss
matrix vo = {lstar,cstar,kstar,vstar}
matrix TW = vo | vo | vo | vo
matrix B = -coeff[,1:4].*TW
matrix C = coeff[,5:8].*TW
# B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]
matrix A = inv(C)*B #(Linearized reduced form )
# For GDP( optional)
matrix ve = {lstar,kstar,vstar}
matrix NOM = {Ystar,Ystar,Ystar}
matrix PPX = coeffy.*ve./NOM
## =========== [4] Solution proc ============== ##
# EIGEN DECOMPOSITION
matrix W = {}
matrix theta = eigengen(A, &W)
Q = inv(W)
V = zeros(4,4)
V[diag] = theta
LL = W*V*Q # no role for this yet...
# Extract stable vectors
matrix SQ = {}
loop j = 1..rows(theta) --quiet
if abs(theta[j]) > 1.000000001
SQ |= Q[j,]
endif
endloop
# Extract unstable vectors
matrix UQ = {}
loop jj = 1..rows(theta) --quiet
if abs(theta[jj])<0.9999999999
UQ |= Q[jj,]
endif
endloop
# Extract stable roots
matrix VLL = {}
loop jjj = 1..rows(theta) --quiet
if abs(theta[jjj]) >1.0000000001
VLL |= theta[jjj,]
endif
endloop
# [3] ELIMINATING UNSTABLE VECTORS
k = min({rows(SQ),cols(SQ)}) # # of predetermined vars
n = min({rows(UQ),cols(UQ)}) # # of jump vars
nk = {n,k}
# Stable V (eig mat)
diago = zeros(rows(VLL),rows(VLL))
diago[diag] = VLL
VL = inv(diago)
# Elements in Q
PA = UQ[1:n,1:n]
PB = UQ[1:n,n+1:n+k]
PC = SQ[1:k,1:n]
PD = SQ[1:k,n+1:n+k]
P = -inv(PA)*PB # X(t) = P*S(t)
PE = PC*P+PD
# SOLUTION
PX = inv(PE)*VL*PE
AA = Re(PX)
## ------------------ [5] SIMULATION proc ----------------- ##
# [4] TIME&INITIAL VALUES
t = 48 # Time span
# Initial Values
# state var + e
S1 = {0;0.06}
# [5] SIMULATION
Ss = S1
S = zeros(t,k)
loop i = 1..t --quiet
q = AA*Ss
S[i,] = q'
Ss = S[i,]'
endloop
SY = S1' | S
X = (Re(P)*SY')'
# Re-definition
ci = X[,1]
li = X[,2]
ki = SY[,1]
vi = SY[,2]
matrix XI = li ~ ki ~ vi # XI was undefined in the draft; presumably the xr ordering (l, k, v)
Yi = (PPX*XI')'
# [6] DRAWING FIGURES
gnuplot --matrix=Yi --time-series --with-lines --output=display { set linetype 3 lc rgb "#0000ff"; set title "Y"; set key rmargin; set xlabel "time"; set ylabel "IRF Y_t"; }
# put columns together and add labels
plotmat = X ~ SY
strings cnames = defarray("C", "L","K","V")
cnameset(plotmat, cnames)
scatters 1 2 3 4 --matrix=plotmat --with-lines --output=display
####-------------------------------------------------------------------#####
</hansl>
So I am writing to ask for suggestions to improve or "streamline" the code, or to help find errors that have escaped me.
All this in an effort to improve my knowledge of the gretl software and its scripting language, hansl.
For example: how can I improve the Jacobian calculation?
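On the Jacobian question, one knob worth checking (please verify against the current gretl manual, as I am citing it from memory) is the fdjac_quality state variable, which upgrades fdjac from forward differences to bilateral differences or Richardson extrapolation, at the cost of extra function evaluations:
<hansl>
# 0 = forward differences (default), 1 = bilateral, 2 = Richardson
set fdjac_quality 2
matrix coeff = fdjac(x, RBC(&param, &x))
matrix coeffy = fdjac(xr, RBCY(&param, &xr))
</hansl>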
Thanks to everyone and have a good day.

Reading error from excel files created by Ox
by Fred Engst

Hi Allin and all the other hard-working members of the gretl team.
I'm having a hard time reading excel files created by "Ox Console version 8.02 (OS_X_64/U) (C) J.A. Doornik, 1994-2018".
If I save a matrix in xls format, gretl gives me a message of “Failed to get workbook info”.
If I save the matrix in xlsx format, gretl skips the header and gives me generic variable names as in: v1, v2, …
Any suggestion for what I should do?
Fred

Hamilton trend-cycle decomposition
by Riccardo (Jack) Lucchetti

Hi all,
yesterday, after having taught my students the HP decomposition, I
wondered if I should also tell them that one of the greatest time-series
econometricians on Earth recently wrote a rather scathing paper entitled
"Why You Should Never Use the Hodrick-Prescott Filter", where he proposes
a simple alternative.
So this morning I rustled up a little script with Hamilton's filter. Here
it is:
<hansl>
function series hamcycle(series y, bool do_plot[1], string title[null])
h0 = 2 * $pd
h1 = h0 + 3 # Hamilton uses p = 4 lags: h0 to h0+3
list PROJ = y(-h0 to -h1)
ols y 0 PROJ -q
# ht = $yhat
hc = $uhat
if do_plot
if !exists(title)
title = argname(y)
endif
print title
diff8 = y - y(-h0)
setinfo diff8 --graph-name="Random walk"
setinfo hc --graph-name="Regression"
list PLT = diff8 hc
plot PLT
options time-series with-lines
literal set linetype 1 lc rgb "#ff0000"
literal set linetype 2 lc rgb "#000000"
literal set key top right
printf "set title '%s'", title
end plot --output=display
endif
return hc
end function
# example
nulldata 300
setobs 4 1947:1
open fedstl.bin
data gdpc1 expgsc1 pcecc96
list Y = gdpc1 expgsc1 pcecc96
LY = logs(Y)
strings Titles = strsplit("GDP Exports Consumption")
k = 1
# reproduce part of figure 6
loop foreach i LY --quiet
hc = hamcycle($i*100,,Titles[k++])
endloop
</hansl>
Should we turn this into a function package?
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
r.lucchetti(a)univpm.it
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------