your package DFP
by Sven Schreiber

Dear Uriel,
I wonder why your gretl package "DFP" for the Dickey-Pantula test is
named as it is. Where does the "F" come from? A somewhat trivial
question, but I find the acronym uninformative. Should the name
be changed?
thanks,
sven
9 years, 2 months

saving without .gdt
by Sven Schreiber

Hi,
I don't know if this is new behavior: save the dataset by entering (in
the dialog) a name without an extension (say "hello"). Gretl saves
hello.gdt. OK. In the main window it says "hello" without .gdt. But when
I press Ctrl-S to save, I get the save-as dialog instead of a quiet save
of the data.
Intended or a bug?
(snapshot June 30th)
cheers,
sven

Leads and lags reloaded
by Sven Schreiber

Hi,
while I was trying to respond to the recent message by José Francisco, I
stumbled across the following post:
http://lists.wfu.edu/pipermail/gretl-devel/2013-March/004379.html
which suggests doing something like:
<hansl>
open data9-7
list foo = PRIME(2 to -2)
print foo -o
</hansl>
And indeed this works. However, PRIME(+3 to +2) fails; instead you have
to swap the numbers around to PRIME(+2 to +3). (PRIME(-3 to -2) also
fails.) So it seems that when you want _only_ leads or _only_ lags, you
have to provide the numbers in increasing *absolute* value. I'm not sure
I see the deeper logic there.
I would vote for accepting every ordering, or for going with decreasing
(non-absolute!) values as in the standard case (-1 to -4).
Thanks,
sven

--no-df-corr in system
by fdiiorio@unina.it

Dear gretl team,
I noticed a problem using --no-df-corr in a system, with gretl 1.10.1
(build date 2015-04-04).
Take the Klein "Model 1" and consider the consumption equation.
The command
estimate "Kl1" method=ols
gives the following results:
Equation 1: OLS, using observations 1921-1941 (T = 21)
Dependent variable: C
             coefficient   std. error   t-ratio    p-value
  ---------------------------------------------------------
  const      16.2366        1.30270     12.46      5.62e-010  ***
  P           0.192934      0.0912102    2.115     0.0495     **
  P_1         0.0898849     0.0906479    0.9916    0.3353
  W           0.796219      0.0399439   19.93      3.16e-013  ***
but the std. errors reported above refer to an estimation with NO df
correction (see Calzolari, 2012, MPRA Paper No. 64415, p. 34,
http://mpra.ub.uni-muenchen.de/64415/),
while the command
estimate "Kl1" method=ols --no-df-corr
gives:
Equation system, Kl1
Estimator: Ordinary Least Squares
Equation 1: OLS, using observations 1921-1941 (T = 21)
Dependent variable: C
             coefficient   std. error   t-ratio    p-value
  ---------------------------------------------------------
  const      16.2366        1.17208     13.85      1.09e-010  ***
  P           0.192934      0.0820650    2.351     0.0310     **
  P_1         0.0898849     0.0815592    1.102     0.2858
  W           0.796219      0.0359390   22.15      5.59e-014  ***
It seems that the results are associated with the wrong option.
Please check also against the results from an OLS estimation of the
consumption equation on its own.
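Incidentally, the two sets of standard errors are related by exactly the
usual degrees-of-freedom factor sqrt(T/(T-k)), with T = 21 observations
and k = 4 regressors. A quick arithmetic check of the printed figures (a
Python sketch, outside gretl, only to verify the ratio; which output
carries the wrong label is a separate question):

```python
import math

T, k = 21, 4  # sample size; regressors: const, P, P_1, W
factor = math.sqrt(T / (T - k))  # df-correction factor for std. errors

# std. errors from the first output (no option) and the second (--no-df-corr)
se_first  = [1.30270, 0.0912102, 0.0906479, 0.0399439]
se_second = [1.17208, 0.0820650, 0.0815592, 0.0359390]

for se1, se2 in zip(se_first, se_second):
    # the two outputs agree to printed precision once rescaled
    assert abs(se2 * factor - se1) < 1e-4
```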
thanks
Francesca
************************************************************
Francesca Di Iorio
Dipartimento di Scienze Politiche
Università di Napoli Federico II
via L. Rodinò 22
I-80138 Napoli
tel. 081-2538280
fax. 081-2537466
e-mail fdiiorio(a)unina.it

scalar function args and NAs
by Allin Cottrell

A question has come up as I've thought about user-function
arguments (see also
http://lists.wfu.edu/pipermail/gretl-devel/2015-July/005850.html )
Given a scalar parameter, what do we want to do if the caller supplies
NA as an argument? What difference, if any, should it make if the
parameter is marked as "bool" or "int" rather than plain "scalar"?
The situation prior to my changes of the last couple of days was that
NA was always accepted. But along with fixing the bounds checking for
scalar arguments, I banned NA for bool and int parameters. I've now
had second thoughts (maybe this would break some existing scripts?)
and have reverted to the status quo ante.
But maybe this is something we want to think about. Are there real
use-cases where a function writer would want to accept NA for a bool
or int argument? Or would it be more convenient for the writer not to
have to bother with checking for NAs, in the knowledge that they would
be ruled out by gretl?
Allin

Default value specification for scalar and integer arguments
by Sven Schreiber

Hi,
I want to ask whether the following (which works!) actually conforms
to the hansl spec. Consider the session at the end of the message.
Hansl allows specifying the default value [0] for both 'scalar' and
'int', without requiring minimum or maximum values. But the way I read
the paragraph "Function parameters: optional refinements" in the user
guide, this would actually seem to be a syntax error; instead, for
non-bool types it should be [::0].
Or, if gretl is actually treating this as an implicit bool, what are the
consequences of that? None, i.e. can the function body still do
arbitrary calculations with the passed scalar value?
Also (and I feel like I'm having déjà vu again), notice that in the
check2() function an integer parameter is specified, but if the caller
passes 1.6, there is no type error; instead the argument is converted to
its integer part. I don't think this is obvious; is it documented
somewhere?
Thanks,
sven
? function void check1 (scalar in[0])
> print in
> end function
? function void check2 (int in[0])
> print in
> end function
? check1()
in = 0.00000000
? check1(3)
in = 3.0000000
? check1(1)
in = 1.0000000
? check1(0.5)
in = 0.50000000
? check2()
in = 0.00000000
? check2(3)
in = 3.0000000
? check2(1)
in = 1.0000000
? check2(1.6)
in = 1.00000000
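To pin down the coercion the transcript shows (check2(1.6) printing
1.0), here is a hypothetical Python stand-in. Note that truncation of
the fractional part is an inference from the positive example; the
transcript doesn't show what happens with negative arguments.

```python
# Hypothetical stand-ins mimicking what the gretl transcript above shows:
# a 'scalar' parameter keeps a fractional argument as-is, while an
# 'int' parameter keeps only its integer part (1.6 -> 1, not 2).
def check1(val=0.0):        # like: function void check1 (scalar in[0])
    return float(val)

def check2(val=0):          # like: function void check2 (int in[0])
    return float(int(val))  # truncation, matching check2(1.6) -> 1.0

print(check1(0.5))  # 0.5
print(check2(1.6))  # 1.0
```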

Suggestion for new unit root test formatting
by Sven Schreiber

Hi,
now for something completely different: as I mentioned in passing at the
conference, I have received some (mild) complaints about the output of
the ADF test in gretl. And indeed I think the arrangement of the
information is suboptimal. So here's a concrete suggestion, with
before and after [best viewed in a monospaced font]:
Now / before:
"
Augmented Dickey-Fuller test for LRM
including 4 lags of (1-L)LRM
sample size 50
unit-root null hypothesis: a = 1
test with constant
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: 0.075
lagged differences: F(4, 44) = 4.290 [0.0051]
estimated value of (a - 1): -0.0591989
test statistic: tau_c(1) = -1.70189
asymptotic p-value 0.4303
"
And my suggestion / after:
"
LRM: Augmented Dickey-Fuller test
(H0: unit root, a = 1)
-------------------------------------
Test with constant
â - 1 = -0.0591989
Test stat tau_c(1) = -1.70189
Asympt. p-value = 0.4303
-------------------------------------
Test equation:
Dependent variable differenced: (1-L)LRM,
lagged level with coefficient a - 1,
including 4 lagged differences
(exclusion test F(4, 44) = 4.290 [0.0051]),
and constant term.
Residual 1st-order autocorr = 0.075
Sample size = 50
--------------------------------------
"
Now I'm not saying my suggestion is optimal, but I do think it would be
an improvement. If somebody points me to the right places in the
source, I could try to make the corresponding changes.
What do you think?
cheers,
sven

error message for indexing into empty matrix
by Sven Schreiber

Hi,
this:
matrix om = {}
matrix mu = om[1,]
produces a "data error". Wouldn't it be more natural to call this an
"index out of bounds" error instead?
thanks,
sven

smpl <condition> --restrict --permanent
by Allin Cottrell

By popular request we introduced (in gretl 1.9.91) the --permanent
option for the "smpl" command: this enables the user to apply a
sample restriction and permanently shrink the dataset to the new
specification (as opposed to keeping the full dataset in the
background).
It has recently come to light that this can cause problems under
some conditions, in the context of a GUI session in particular, and
I'm trying to decide how to handle this.
Consider a GUI gretl session of this sort: you open bigdata.gdt,
estimate some models -- on the full dataset and/or on one or more
subsamples -- and save these "as icons". Then you decide that
bigdata is redundant for your purposes and you want to do a
permanent subsampling. What happens with your saved models?
There's an easy case: all the models were in fact estimated on the
particular subsample of bigdata that you'd like to impose
permanently (something we can detect and handle quite easily).
Otherwise, some or all of the saved models are going to become
zombies: in general there will be no way -- after the permanent
subsampling -- to do things like running tests, saving residuals as
series, plotting actual versus fitted and so on, since the
dataset(s) on which these models were estimated will have
disappeared.
So here's my proposal: in the circumstances described above, if
you issue a command to shrink the dataset permanently, gretl checks
to see whether all saved models fall under the "easy case". If so,
fine. If not, you get a warning that not all saved models can be
preserved, with a "Go ahead? Yes/No" dialog. If you say "Yes", we
destroy all the models that would turn into zombies.
Any comments/suggestions?
Allin

different treatment of d.o.f. correction between sd()/sdc() and corr()/mcorr()
by Sven Schreiber

Hi,
I noticed that corr() and mcorr() produce the same results, but sd() and
sdc() seem to use different divisors. This "diff-in-diff" seems a bit
arbitrary to me; or is there some background story? Otherwise I'd say the
functions should all use the same convention (which one, I don't care).
thanks,
sven
example:
<hansl>
open denmark
ols LRM const LRY IDE
list lall = LRM LRY IDE
matrix mcr = mcorr({lall})
print mcr
matrix msd = sdc({lall})
print msd
loop foreach i lall
    c = corr(LRM, lall.$i)
    print c
    s = sd(lall.$i)
    print s
endloop
</hansl>
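For what it's worth, the algebra explains why corr()/mcorr() cannot
disagree even when sd()/sdc() do: in a correlation the divisor (n versus
n-1) cancels between the covariance and the two standard deviations. A
small numerical sketch (NumPy rather than hansl, purely to illustrate
the cancellation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = rng.normal(size=50)

# the standard deviation depends on the divisor...
sd_n   = np.std(x)           # divisor n
sd_nm1 = np.std(x, ddof=1)   # divisor n - 1 (larger result)

# ...but the correlation does not: the divisor cancels in cov/(sd_x*sd_y)
def my_corr(x, y, ddof):
    cov = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - ddof)
    return cov / (np.std(x, ddof=ddof) * np.std(y, ddof=ddof))

assert sd_nm1 > sd_n
assert abs(my_corr(x, y, 0) - my_corr(x, y, 1)) < 1e-12
```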