Re: Reading error from excel files created by Ox
by Fred Engst

Hi Allin,
I did a test on the file formats created by "Ox Console version 8.02 (OS_X_64/U)" as read by gretl 2020a, and I found many inconsistencies in Ox.
From files created by the “savemat” command of Ox, gretl was able to read the .csv, .dta and .xlsx files fine, but not the .xls file (“Failed to get workbook info”). Stata 10 was also able to read the .dta file.
From files created by the Ox database class “Save”, gretl was able to read the .xlsx and .csv files but not the .xls file (“Failed to get workbook info”) or the .dta file (“This file does not seem to be a valid Stata data file”). Stata 10 was not able to read this .dta file either.
So the problem seems to be with Ox. Since Ox claims “.dta: Stata 11 data file (version 114)”, I can’t test that claim with Stata 10.
From files created by the Ox database class “SaveXlsx”, gretl was able to read only the .xlsx file. For the .dta and .xls files it gave the same errors as before. For the .csv file it read garbage, as seen in string_table.txt:
String code table for variable 1 (PK):
1 = '%SÂx'
2 = 'ÝÒùlT/wd±Ãv)
[ÎQv`2ÀåNë£)?ã!{±~5\sé]ª4xÐÙúZ±6<lsg%AJîö³®¡"£¥H¹Ï7N}U'
3 = 'òRŠrð?%ëÑäöï0U
l 1éó»8ð¥ð÷'
4 = ''pšê'J¹&ö*C\ÔÐ§äo£éyŠXgZ 3¥üz2#uûíöÃwÔ
R+¬:Z'
5 = 'áh¯@5'Ïà¥mÃwb^fvé.ÈKbgÙn|Èõ!'
6 = 'yÕPè8i°br8"y_f4(<kµ¿ÜêüÐ8s"KÐHàß>2þÚýçªÖ_FËoÆgñÓ§ÂÕAÔÅ;PK'
7 = 'ÎóÜÌ×Zîïàõáþiyµö|®Ê)Á¿v/PK'
8 = 'ëöÖ)BQ.Ô¡°'
9 = '^|cfGQÁnÛÀhÈºxRðþöŒy’
So I will only use the Ox database class “Save” from now on.
I don’t think you need to do any work on this.
Fred
>
> On Mon, 24 Feb 2020, Fred Engst wrote:
>
>> Thank you Allin once again! Yes, I can now read .xlsx files created
>> by Ox in gretl.
>
> Hmm, I'm checking for problems in current gretl, as we prepare for a
> new release, and I see that the special case I introduced to handle
> Ox-generated xlsx files has broken reading of some other xlsx files
> which I think are probably more idiomatic. So I'll have to
> reconsider my "fix" unless I can get everything working.
>
>> I'm not sure what's going on with Ox. Not only was the excel file
>> format not standard, but both .csv and .dta files created by the
>> "savemat" statement in Ox are causing reading errors in gretl.
>
> I tried generating csv and dta files from Ox (oxconsole7 on Linux)
> and found that these were read OK by gretl. Maybe you could send me
> some examples that don't work?
>
> Allin

Re: Reading error from excel files created by Ox (Allin Cottrell)
by Fred Engst

Thank you Allin once again! Yes, I can now read .xlsx files created by Ox in gretl.
I’m not sure what’s going on with Ox. Not only was the excel file format not standard, but both .csv and .dta files created by the “savemat” statement in Ox are causing reading errors in gretl.
Now that I have at least one file format that can be shared between Ox and gretl, I’m happy.
Fred
>
> On Fri, 21 Feb 2020, Allin Cottrell wrote:
>
>> On Fri, 21 Feb 2020, Fred Engst wrote:
>>
>>> If I save [a matrix from Ox] in xlsx format, gretl skips the
>>> header and gives me generic variable names as in: v1, v2, …
>>>
>>> Any suggestion for what I should do?
>>
>> It seems like we should be able to pick up column headings from such a
>> file but apparently Ox uses a somewhat unorthodox representation. If
>> you open the Ox-generated xlsx file in LibreOffice then save it, still
>> in xlsx format, gretl will find the headings OK. Maybe we can figure
>> out how to handle the Ox-generated case, or maybe not...
>
> OK, I think we're now able to handle the header strings in
> Ox-generated xlsx files (updates in git and snapshots).
>
> Allin

Re: MLE and binary (scalar) min and max operators
by Alecos Papadopoulos

Hi Sven, thanks for the input; those are plausible concerns.
The following seems to work just fine (in the sense that it gave results
that were validated with other estimation methods). Apart from the xmax
operator, it has a density with branches.
<hansl>
catch mle logl = check ? log(A1+A2+A3) : NA
series res = Depvar - lincomb(Reglist,bcoeff)
scalar m = xmax(a,b)
series dens1 = (res >= -b)*(res <= a - m)
series dens2 = (a-m < res)*(res <= m-b)
series dens3 = (res > m-b)*(res <= a )
series d2 = (a-res)/(a*b)
series d4 = (b+res)/(a*b)
series A1 = dens1*d4
series A2 = dens2*(1/m)
series A3 = dens3*d2
scalar check = (a>0) && (b>0)
params bcoeff a b
end mle
</hansl>
--
Alecos Papadopoulos PhD
Athens University of Economics and Business
web: alecospapadopoulos.wordpress.com/
skype:alecos.papadopoulos
On 20.02.2020 at 21:37, Alecos Papadopoulos wrote:
> Good evening. Will the mle command in gretl have any compatibility
> problem if, in the likelihood, some of the parameters under estimation
> also appear inside binary min and max operators?
>
Spontaneously I'm skeptical, not because of any gretl limitations, but
because a min/max choice always means a discontinuity where derivatives
break down and so on. So it doesn't look like a well-behaved problem for
"smooth" optimization; maybe you would need some kind of switching
algorithm. But I may well be missing something; other input is much
appreciated.
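If the discontinuity itself turns out to be the obstacle, one possible device (a sketch, not something tested on Alecos's model) is to replace the hard max with a smooth log-sum-exp approximation, which is differentiable everywhere:
<hansl>
# Smooth approximation to max(a,b) via log-sum-exp; k controls the
# sharpness: larger k gets closer to the hard max, at the cost of
# a growing overflow risk in exp().
function scalar smoothmax (scalar a, scalar b, scalar k[10])
    return log(exp(k*a) + exp(k*b))/k
end function
</hansl>
Whether such a substitution preserves the statistical properties of the estimator is a separate question, of course.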
cheers
sven

A small administrative issue with gretl 2019d
by Alecos Papadopoulos

I run gretl 2019d for Windows, 64-bit.
In previous versions, the order of opening an existing .inp file and an
existing .gretl file did not matter.
One could open a .gretl file first, then open a .inp file, choose "No"
to the question "Start a new gretl instance?", and the two would be
linked: the script in the .inp file could draw data from the .gretl
file immediately, without containing a command to that effect.
But one could also start by opening the .inp file first, and things
worked the same way.
Not in the 2019d version, though. Here it appears to work only if one
opens the .inp file first and then the .gretl file, not the other way
around.
Again, I am not referring to the case where one has opened a .gretl
file and then creates a new .inp file and writes a script. The issue
appears only when a .inp file with a script in it already exists.
--
Alecos Papadopoulos PhD
Athens University of Economics and Business
web: alecospapadopoulos.wordpress.com/
skype:alecos.papadopoulos

A simple Real Business Cycle model with gretl
by Mario Marchetti

Good morning everyone,
I'm Mario and, for fun and study and especially to practice hansl, I am trying to adapt a Matlab script to the hansl language.
This script was written by Ryo Kato and consists of solving a simple RBC model.
The code is available here: http://www.ryokato.org/genmac/RBC1.m
A first (spartan) draft of the code that I wrote in hansl is the following (also available on github: https://github.com/mariometrics/RBCgretl):
<hansl>
####------------------------------------------------------------------------#####
set echo off
set messages off
## Mario Marchetti 23-02-2020
## Basic RBC model ##
## Adapted in hansl language from the code written in Matlab by Ryo Kato in 2004
## ------------------- [1] Parameter proc ------------------------
sigma = 1.5 # CRRA
alpha = 0.3 # Cobb-Dag
myu = 1 # labor-consumption supply
beta = 0.99 # discount factor
delta = 0.025 #depreciation
lamda = 2 # labor supply elasticity >1
phi = 0.8 # AR(1) in tech
param = {sigma,alpha,myu,beta,delta,lamda,phi}
## --------------------- [2] Steady State proc >> -----------------------
# SS capital & ss labor
# (1) real rate (By SS euler)
kls = (((1/beta)+delta-1)/alpha)^(1/(alpha-1))
# (2) wage
wstar = (1-alpha)*(kls)^alpha
# (3) Labor and goods market clear
clstar = kls^alpha - delta*kls
lstar = ((wstar/myu)*(clstar^(-sigma)))^(1/(lamda+sigma))
kstar = kls*lstar
cstar = clstar*lstar
vstar = 1
Ystar = (kstar^alpha)*(lstar^(1-alpha))
ssCKoLY = {cstar,kstar;lstar,Ystar} # show SS values
## --------------------------[2] MODEL proc-----------------------------##
function matrix RBC(matrix *param,matrix *x)
sigma = param[1]
alpha = param[2]
myu = param[3]
beta = param[4]
delta = param[5]
lamda = param[6]
phi = param[7]
# Define endogenous vars ('a' denotes t+1 values)
la = x[1]
ca = x[2]
ka = x[3]
va = x[4]
lt = x[5]
ct = x[6]
kt = x[7]
vt = x[8]
ra = 0
rt = 0
# Eliminate Price
ra = (va*alpha*(ka/la)^(alpha-1))
wt = (1-alpha)*vt*(kt/lt)^alpha
# Optimal Conditions & state transition
labor = lt^lamda-wt/(myu*ct^sigma) # LS = LD
euler = ct^(-sigma) -(ca^(-sigma))*beta*(1+ra-delta) # C-Euler
capital = ka - (1-delta)*kt-vt*(kt^alpha)*(lt^(1-alpha))+ct # K-trans
tech = va - phi*vt
matrix optcon = {labor;euler;capital;tech}
return optcon
end function
function scalar RBCY(matrix *param,matrix *xr)
# GDP (Optional)
alpha = param[2]
vt = xr[3]
kt = xr[2]
lt = xr[1]
Yt = vt*(kt^alpha)*(lt^(1-alpha))
return Yt
end function
# Evaluate each derivate
matrix x = {lstar,cstar,kstar,vstar,lstar,cstar,kstar,vstar}
matrix xr = {lstar,kstar,vstar}
# Numerical jacobian
matrix coeff = fdjac(x, RBC(&param, &x))
matrix coeffy = fdjac(xr, RBCY(&param, &xr))
# In terms of # deviations from ss
matrix vo = {lstar,cstar,kstar,vstar}
matrix TW = vo | vo | vo | vo
matrix B = -coeff[,1:4].*TW
matrix C = coeff[,5:8].*TW
# B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]
matrix A = inv(C)*B #(Linearized reduced form )
# For GDP( optional)
matrix ve = {lstar,kstar,vstar}
matrix NOM = {Ystar,Ystar,Ystar}
matrix PPX = coeffy.*ve./NOM
## =========== [4] Solution proc ============== ##
# EIGEN DECOMPOSITION
matrix W = {}
matrix theta = eigengen(A, &W)
Q = inv(W)
V = zeros(4,4)
V[diag] = theta
LL = W*V*Q # no role found for this yet...
# Extract stable vectors
matrix SQ = {}
loop j = 1..rows(theta) --quiet
if abs(theta[j]) > 1.000000001
SQ |= Q[j,]
endif
endloop
# Extract unstable vectors
matrix UQ = {}
loop jj = 1..rows(theta) --quiet
if abs(theta[jj])<0.9999999999
UQ |= Q[jj,]
endif
endloop
# Extract stable roots
matrix VLL = {}
loop jjj = 1..rows(theta) --quiet
if abs(theta[jjj]) >1.0000000001
VLL |= theta[jjj,]
endif
endloop
# [3] ELIMINATING UNSTABLE VECTORS
k = min({rows(SQ),cols(SQ)}) # # of predetermined vars
n = min({rows(UQ),cols(UQ)}) # # of jump vars
nk = {n,k}
# Stable V (eig mat)
diago = zeros(rows(VLL),rows(VLL))
diago[diag] = VLL
VL = inv(diago)
# Elements in Q
PA = UQ[1:n,1:n]
PB = UQ[1:n,n+1:n+k]
PC = SQ[1:k,1:n]
PD = SQ[1:k,n+1:n+k]
P = -inv(PA)*PB # X(t) = P*S(t)
PE = PC*P+PD
# SOLUTION
PX = inv(PE)*VL*PE
AA = Re(PX)
## ------------------ [5] SIMULATION proc ----------------- ##
# [4] TIME&INITIAL VALUES
t = 48 # Time span
# Initial Values
# state var + e
S1 = {0;0.06}
# [5] SIMULATION
Ss = S1
S = zeros(t,k)
loop i = 1..t --quiet
q = AA*Ss
S[i,] = q'
Ss = S[i,]'
endloop
SY = S1' | S
X = (Re(P)*SY')'
# Re-definition
ci = X[,1]
li = X[,2]
ki = SY[,1]
vi = SY[,2]
matrix XI = li ~ ki ~ vi # (l, k, v): the argument order RBCY expects; XI was undefined in the original
Yi = (PPX*XI')'
# [6] DRAWING FIGURES
gnuplot --matrix=Yi --time-series --with-lines --output=display { set linetype 3 lc rgb "#0000ff"; set title "Y"; set key rmargin; set xlabel "time"; set ylabel "IRF Y_t"; }
# put columns together and add labels
plotmat = X ~ SY
strings cnames = defarray("C", "L","K","V")
cnameset(plotmat, cnames)
scatters 1 2 3 4 --matrix=plotmat --with-lines --output=display
####-------------------------------------------------------------------#####
</hansl>
So I am writing to ask for suggestions to improve or "streamline" the code, and for help finding errors that have escaped me,
all in order to improve my knowledge of the gretl software and its scripting language, hansl.
For example: how can I improve the Jacobian calculation?
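On the Jacobian question, one small thing worth trying (a suggestion on my part, not something from the Matlab original): gretl lets you raise the accuracy of fdjac's numerical differentiation via the fdjac_quality setting.
<hansl>
# fdjac_quality: 0 = forward difference (default, fastest),
# 1 = bilateral difference, 2 = Richardson extrapolation (most accurate)
set fdjac_quality 2
matrix coeff = fdjac(x, RBC(&param, &x))
</hansl>
The extra accuracy costs more function evaluations, but for a once-off linearization at the steady state that is usually negligible.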
Thanks to everyone and have a good day.

Reading error from excel files created by Ox
by Fred Engst

Hi Allin and all the other hard-working members of the gretl team.
I’m having a hard time reading excel files created by "Ox Console version 8.02 (OS_X_64/U) (C) J.A. Doornik, 1994-2018”.
If I save a matrix in xls format, gretl gives me the message “Failed to get workbook info”.
If I save the matrix in xlsx format, gretl skips the header and gives me generic variable names as in: v1, v2, …
Any suggestion for what I should do?
Fred

Hamilton trend-cycle decomposition
by Riccardo (Jack) Lucchetti

Hi all,
yesterday, after having taught my students the HP decomposition, I
wondered if I should also tell them that one of the greatest time-series
econometricians on Earth recently wrote a rather scathing paper entitled
"Why You Should Never Use the Hodrick-Prescott Filter", where he proposes
a simple alternative.
So this morning I rustled up a little script with Hamilton's filter. Here
it is:
<hansl>
function series hamcycle(series y, bool do_plot[1], string title[null])
h0 = 2 * $pd
h1 = h0 + 4
list PROJ = y(-h0 to -h1)
ols y 0 PROJ -q
# ht = $yhat
hc = $uhat
if do_plot
if !exists(title)
title = argname(y)
endif
print title
diff8 = y - y(-h0)
setinfo diff8 --graph-name="Random walk"
setinfo hc --graph-name="Regression"
list PLT = diff8 hc
plot PLT
options time-series with-lines
literal set linetype 1 lc rgb "#ff0000"
literal set linetype 2 lc rgb "#000000"
literal set key top right
printf "set title '%s'", title
end plot --output=display
endif
return hc
end function
# example
nulldata 300
setobs 4 1947:1
open fedstl.bin
data gdpc1 expgsc1 pcecc96
list Y = gdpc1 expgsc1 pcecc96
LY = logs(Y)
strings Titles = strsplit("GDP Exports Consumption")
k = 1
# reproduce part of figure 6
loop foreach i LY --quiet
hc = hamcycle($i*100,,Titles[k++])
endloop
</hansl>
Should we turn this into a function package?
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
r.lucchetti(a)univpm.it
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------

MLE and binary (scalar) min and max operators
by Alecos Papadopoulos

Good evening. Will the mle command in gretl have any compatibility
problem if, in the likelihood, some of the parameters under estimation
also appear inside binary min and max operators?
Namely, something like this (bogus likelihood):
<hansl>
scalar a = starting value
scalar b = starting value
scalar s = starting value
mle logl = log(a) + log(Φ(ε / s + min(a-b,0))) + exp(-max(a,b))
...
</hansl>
I am not asking about convergence issues or negative logarithms; these
are model/data issues. I am asking only whether the mle command accepts,
in principle, the binary min/max operators.
--
Alecos Papadopoulos PhD
Athens University of Economics and Business
web: alecospapadopoulos.wordpress.com/
skype:alecos.papadopoulos

Identification of SVARs using heteroskedasticity
by anzervas＠yahoo.com

Dear all (especially Sven and Riccardo),
A recent strand in the SVAR literature uses heteroskedasticity to identify the structural shocks; those interested in the topic may read mainly Rigobon (Review of Economics and Statistics, 2003), Lanne and Lütkepohl (Journal of Money, Credit and Banking, 2008) and/or Bacchiocchi and Fanelli (Oxford Bulletin of Economics and Statistics, 2015). If one has two variance-covariance matrices, there is no need to add zero or other restrictions to identify the structural matrices A or B.
It is not clear to me how one can implement this in gretl. In particular, I did not find any way to implement the method using matrix operations, and the only route seems to be an optimization algorithm. I have tried it, but in vain. Below I attach a script, formulated as in Lanne and Lütkepohl (2008), but it is not working properly. In particular, I cannot understand how to make the procedure go to a solution that respects the equality S1 = B*B'.
In addition, Rigobon (op. cit.) mentions that he uses GMM to solve the problem. I do not know if this is feasible in gretl, but even if it is, it is not clear to me how one could write the GMM block to do it (though the manual mentions that one may use only matrices in GMM block equations).
Any suggestions / corrections are welcome. This also seems to be a good functionality to add to the SVAR package (along with AB-model functionality for VECMs, for completeness).
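One possible matrix-based route, sketched here under the assumption that S1 is positive definite and untested on this problem: for the two-regime case S1 = B*B', S2 = B*L*B' with L diagonal, the pair (B, L) can be recovered from the generalized eigenvalue problem S2*v = lambda*S1*v, which gretl's eigsolve function handles directly.
<hansl>
# Generalized eigenvectors V of (S2, S1), normalized so that
# V'*S1*V = I, satisfy Bhat = inv(V') up to column signs, and the
# eigenvalues are the diagonal elements of L:
matrix V = {}
matrix lam = eigsolve(S2, S1, &V) # solves S2*v = lambda*S1*v
matrix Bhat = inv(V')
matrix Lhat = lam # diagonal of L, as a column vector
# check: Bhat*Bhat' should reproduce S1, Bhat.*Lhat'*Bhat' should reproduce S2
</hansl>
If this works, no numerical optimization is needed at all for the just-identified case.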
Kind regards,
Andreas
<hansl>
set verbose off
# set lbfgs on
set bfgs_maxiter 50000
set bfgs_toler 0.0000000000000001
function scalar LnL(const matrix param, matrix S1, matrix S2)
A = {param[1],param[2],param[3],param[4]; \
param[5],param[6],param[7],param[8]; \
param[9],param[10],param[11],param[12]; \
param[13],param[14],param[15],param[16]}
L = zeros(rows(S1),cols(S1))
L[1,1] = param[17]
L[2,2] = param[18]
L[3,3] = param[19]
L[4,4] = param[20]
LL0 = -100*0.5*(ln(det(S1)) + tr(S1*inv(S1))) \
-100*0.5*(ln(det(S2)) + tr(S2*inv(S2)))
LLi = -100*0.5*(ln(det(A*A')) + tr(S1*inv(A*A'))) \
-100*0.5*(ln(det(A*L*A')) + tr(S2*inv(A*L*A')))
# dist = abs(LLi - LL0)
dist = maxc(abs((vech(S1)|vech(S2)) - (vech(A*A')|vech(A*L*A'))))
dist
return dist
end function
# structural shocks and matrix
E1 = mnormal(100,4)
E2 = mnormal(100,cols(E1)).*{0.1, 2.5, 0.4, 1.7}
W = 0.1*mrandgen(i, 1, 4, 4, 4)
# reduced form residuals
U1 = E1*W
U2 = E2*W
S1 = U1'U1/rows(U1)
S2 = U2'U2/rows(U2)
# initial parameters
param = 0.1*abs(mnormal(cols(U1)^2+cols(U1),1))
# minimization
ff = BFGSmin(&param, LnL(param, S1, S2))
# bounds = seq(1,rows(param))'~ones(rows(param),2).*{-20, 20}
# bounds[17,] = {17, 0, 10}
# bounds[18,] = {18, 0, 10}
# bounds[19,] = {19, 0, 10}
# bounds[20,] = {20, 0, 10}
# ffc = BFGScmin(&param, bounds, LnL(param, S1, S2))
param
B = {param[1],param[2],param[3],param[4]; \
param[5],param[6],param[7],param[8]; \
param[9],param[10],param[11],param[12]; \
param[13],param[14],param[15],param[16]}
L = zeros(rows(S2),cols(S2))
L[1,1] = param[17]
L[2,2] = param[18]
L[3,3] = param[19]
L[4,4] = param[20]
S1b = B*B'
S2b = B*L*B'
# Check that estimations reproduce reduced form covariance matrices and structural matrix
S1
S1b
L
S2
S2b
W
B
</hansl>