Re: [Gretl-users] MLE with piece-wise density (Gretl-users Digest, Vol 97, Issue 6)
by Alecos Papadopoulos
That worked perfectly, thanks!
Alecos Papadopoulos
Athens University of Economics and Business, Greece
Department of Economics
cell:+30-6945-378680
fax: +30-210-8259763
skype:alecos.papadopoulos
On 4/2/2015 19:00, gretl-users-request(a)lists.wfu.edu wrote:
> There were a couple of syntax errors in the way you declared/called your
> functions: (a) you didn't use the "return" keyword in your negbr/posbr
> functions, and (b) you shouldn't use type specifiers (scalar, series etc.)
> when calling your functions. See below:
>
> <hansl>
> nulldata 4
>
> series Z = {-1, 1, 2, 1}
>
> scalar s1 = 1
> scalar s2 = 1
>
> function series negbr(series Z, scalar s1, scalar s2)
> return -ln(s1+s2)+ (1/s2)*Z
> end function
>
> function series posbr(series Z, scalar s1, scalar s2)
> return -ln(s1+s2)-(1/s1)*Z
> end function
>
> function series loglik(series Z, scalar s1, scalar s2)
> series liky = (Z>0)? posbr(Z, s1, s2): negbr(Z, s1, s2)
> return liky
> end function
>
> mle logl = loglik(Z, s1, s2)
> params s1 s2
> end mle --verbose
>
> </hansl>
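As a cross-check outside gretl (my own addition, not part of the thread): the piecewise expression above is the log-density of Z = X1 - X2, where X1 and X2 are independent exponentials with means s1 and s2. A minimal Python sketch, with function names of my own choosing, verifies numerically that the density integrates to one:

```python
import math

def logdens(z, s1, s2):
    """Log-density of Z = X1 - X2, X1 ~ Exp(mean s1), X2 ~ Exp(mean s2)."""
    if z > 0:
        return -math.log(s1 + s2) - z / s1   # the "posbr" branch
    return -math.log(s1 + s2) + z / s2       # the "negbr" branch

def integrate(s1, s2, lo=-60.0, hi=60.0, n=200_000):
    """Midpoint Riemann sum of the density; should be ~1 for any s1, s2 > 0."""
    h = (hi - lo) / n
    return h * sum(math.exp(logdens(lo + (i + 0.5) * h, s1, s2))
                   for i in range(n))

print(integrate(1.5, 0.7))  # close to 1.0
```

The exact check is immediate by hand, too: the positive branch integrates to s1/(s1+s2) and the negative branch to s2/(s1+s2).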
Re: [Gretl-users] MLE with piecewise density (Gretl-users Digest, Vol 89, Issue 19)
by Alecos Papadopoulos
Good morning. A few months back I asked about running MLE from
the script window with a piecewise density.
It was suggested that
<<
it should be quite easy if you use functions:
<pseudo-hansl>
function series f(series u, scalar a, scalar b)
[whatever]
end function
function series g(series u, scalar a, scalar b)
[whatever]
end function
function series loglik(series y, list X, matrix coef,
scalar a, scalar b)
series u = y - lincomb(X, coef)
series logl = (u>0) ? g(u, a, b) : f(u, a, b)
return logl
end function
mle ll = loglik(y, X, coef, a, b)
coeff coef a b
end mle
</pseudo-hansl>
Hope this helps!
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
>>
Based on the above, I tried a simpler case, estimating parameters only (no regression), and with a sample of 3 so as to also do it by hand and check the results, as follows
(I use the density of the difference of two independent exponentials with different parameters):
nulldata 3
#series Z = {-1, 1, 2}
scalar s1 =1
scalar s2 = 1
function series negbr(series Z, scalar s1, scalar s2)
-ln(s1+s2)+ (1/s2)*Z
end function
function series posbr(series Z, scalar s1, scalar s2)
-ln(s1+s2)-(1/s1)*Z
end function
function series loglik(series Z, scalar s1, scalar s2)
series liky = (Z>0)? posbr(series Z, scalar s1, scalar s2): negbr(series Z, scalar s1, scalar s2)
return liky
end function
mle logl = loglik(Z, s1, s2)
params s1 s2
end mle --verbose
And what I get as a reply is:
gretl version 1.9.92
Current session: 2015-02-04 06:48
> series liky = (Z>0)? posbr(series Z,
Expected ',' but found 'Z'
> series liky = (Z>0)? posbr(series Z,
Expected ':' but found 'Z'
Syntax error
*** error in function loglik, line 1
> series liky = (Z>0)? posbr(series Z, scalar s1, scalar s2): negbr(series Z, scalar s1, scalar s2)
Error executing script: halting
> end mle --verbose
I guess this is pretty simple, and the problem is just my inexperience, but I cannot understand what the software expects here. So I am calling for help again. Thank you.
Alecos Papadopoulos
Athens University of Economics and Business, Greece
Department of Economics
cell:+30-6945-378680
fax: +30-210-8259763
skype:alecos.papadopoulos
Re: [Gretl-users] decimal separators
by Stefano Fachin
Hi Allin, the plots are perfect: each installation accurately follows
its own national standard, so "English" frequencies look like "0.05" and
"Italian" ones like "0,05", exactly as they are supposed to. I should blame
myself for using different installations (why I did that on my two
PCs I do not really know), but since no one likes to do that and I am no
exception, I prefer to blame the Babel of national standards :-)
bye, and thanks again.
Stefano
--
_________________________________________________________________________
Stefano Fachin
Professore Ordinario di Statistica Economica
Dip. di Scienze Statistiche
Università di Roma "La Sapienza"
P.le A. Moro 5 - 00185 Roma - Italia
Tel. +39-06-49910834
fax +39-06-49910072
web http://stefanofachin.site.uniroma1.it/
Gretl crashed after append
by Wingenroth, Thorsten
Hi,
as Jack suggested, I started learning Hansl.
I tried to implement a download from the website ariva.de which offers data on all sorts of securities (stocks, funds, etc.) traded in Germany. While the download is fine, getting the files into Gretl proved hard. The hansl file "elimThousSep.inp" contains the following line near the end:
append @strFile
This makes Gretl crash. It has something to do with the prior "open" command (second-to-last line of code), because without it there is no crash.
I attach the file together with an additional small include (arivaTickers.inp) and the CSV file. The Gretl version is 1.9.92 on a Windows PC.
Thanks for your help!
Thorsten
Thorsten Wingenroth
Professor für Lehraufgaben BWL-Bank
Fakultät Wirtschaft | Studienzentrum Finanzwirtschaft (Center of Finance)
Duale Hochschule Baden-Württemberg Stuttgart
Baden-Wuerttemberg Cooperative State University Stuttgart
Herdweg 18 | 70174 Stuttgart
Fon +49 711 1849-766
thorsten.wingenroth(a)dhbw-stuttgart.de<mailto:thorsten.wingenroth@dhbw-stuttgart.de> | http://www.dhbw-stuttgart.de/bank
Master in Business Management - Banking & Finance
http://www.dhbw.de/master-finance
decimal separators
by Stefano Fachin
... can I add my two pence worth?
I just realised that two plots produced for inclusion in the same paper
on two distinct PCs, with their respective gretl installations (one in
Italian and one in English), look different because of the ***bip***
different national standards for decimal marks (commas and dots). So
tomorrow I need to remember to redo one of them. Not much work, but
nevertheless...
I HATE NATIONAL STANDARDS!
bye :-)
Stefano
--
_________________________________________________________________________
Stefano Fachin
Professore Ordinario di Statistica Economica
Dip. di Scienze Statistiche
Università di Roma "La Sapienza"
P.le A. Moro 5 - 00185 Roma - Italia
Tel. +39-06-49910834
fax +39-06-49910072
web http://stefanofachin.site.uniroma1.it/
Thousand separator - compromise?
by Wingenroth, Thorsten
Hi,
here is a compromise:
Let's keep the proposed automatic detection in place. This will save hundreds of users thousands of hours.
The five users who have messy data should have the option to turn it off via a parameter to the "open" or "join" command.
My point of view: user-friendliness is what makes Gretl different from R. So keep going.
Kind regards,
Thorsten
thousands separator in delimited-text data
by Allin Cottrell
The post from Thorsten Wingenroth at
http://lists.wfu.edu/pipermail/gretl-users/2015-January/010606.html
and some of the follow-ups raised the issue of thousands separators
in "CSV" data files.
I said that such separators (',' in English-speaking locales and '.'
in many others) should never appear in data files intended to be
read by computers. I stand by that, but there's something here that
needs attention.
If a supposedly numeric string such as "10,233.45" or "10.233,45"
simply raised an error in gretl's CSV reader that would be OK, in my
opinion, but the complication is that our reader can handle
"string-valued" variables, and almost-numeric fields of this sort
will be accepted as string values, which can be quite confusing.
(Gretl will state that such-and-such variables have been taken as
string-valued, but a hasty user could well miss that.)
I've therefore added some code to gretl's CSV reader which attempts
to figure out if a "non-numeric" field is really a numeric field
with thousands separators, and if so handles it as numeric. This is
in CVS and snapshots. If people could test it, that would be
appreciated.
Our initial heuristic for detecting such a field is that it contains
nothing but digits, '.' and ',' (possibly with a leading minus). If
both '.' and ',' appear in a given field, we conclude that only the
right-most of these non-digits could be the decimal character and
the other might be a thousands separator. In addition, if two or
more instances of comma appear in a given field then comma cannot be
the decimal character but might be a thousands separator, and the
same goes for '.'.
Having guessed at a possible thousands separator in this way, we
then check the guess: it's wrong unless every instance of this
character is followed by exactly 3 digits.
If and only if we get a consistent result from such guessing and
checking across all observations, we make a second pass through the
data, stripping out the presumed thousands separator.
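For the curious, the guess-and-check logic just described can be sketched in a few lines of Python. This is my own reconstruction of the heuristic as stated above, not gretl's actual C code, and behaviour at the edges may well differ; in particular, a field with a single separator is left to the ordinary reader here.

```python
import re

def parse_with_thousands(field):
    """Guess-and-check a possibly thousands-separated numeric field:
    digits plus '.' and ',' only (optional leading minus); if both
    appear, only the right-most can be the decimal character; a
    candidate thousands separator is accepted only if every instance
    of it is followed by exactly three digits.  Returns a float on a
    consistent result, None otherwise."""
    if not re.fullmatch(r"-?[0-9.,]+", field):
        return None
    body = field.lstrip("-")
    dots, commas = body.count("."), body.count(",")
    # Guess which character could be the thousands separator.
    if dots and commas:
        sep = "." if body.rfind(".") < body.rfind(",") else ","
    elif commas >= 2:
        sep = ","
    elif dots >= 2:
        sep = "."
    else:
        return None  # single separator: ambiguous, leave to the normal reader
    # Check the guess: every instance of sep must be followed by 3 digits.
    for chunk in body.split(sep)[1:]:
        digits = chunk.split("." if sep == "," else ",")[0]
        if len(digits) != 3:
            return None
    dec = "," if sep == "." else "."
    return float(field.replace(sep, "").replace(dec, "."))
```

For example, both "10,233.45" and "10.233,45" come back as 10233.45, while a malformed field such as "1,234,56" is rejected.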
I've tested this using the "DAX" data file that Thorsten posted, in
the four possible cases:
1) The locale decimal character is '.'; the CSV file uses '.' for
thousands and ',' for decimal.
2) The locale decimal character is '.'; the CSV file uses ',' for
thousands and '.' for decimal.
3) The locale decimal character is ','; the CSV file uses '.' for
thousands and ',' for decimal.
4) The locale decimal character is ','; the CSV file uses ',' for
thousands and '.' for decimal.
All these cases are working OK with Thorsten's data.
Allin
bivariate probit in a loop
by Artur Bala
Dear all,
I'm currently estimating a bootstrapped bivariate probit in a
progressive loop, retrieving the $yhat matrix each time. At some point,
the execution is interrupted with the "warning":
The statistic you requested is not available
>> genr series predict_external = $yhat[,1]
There is a perfect-prediction symptom behind this message, isn't there?
Can one, in such a loop, skip cases where the MLE estimation is not
technically possible?
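(A hedged sketch of my own, not from the thread: hansl lets you prefix a command with "catch" so that a failed estimation sets the $error accessor instead of halting the script; the variable names below are hypothetical.)
<pseudo-hansl>
loop i = 1..nboot
    # ... draw the bootstrap sample ...
    catch biprobit y1 y2 X
    if $error == 0
        series predict_external = $yhat[,1]
        # ... store this replication's results ...
    else
        printf "replication %d skipped (estimation failed)\n", i
    endif
endloop
</pseudo-hansl>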
Best,
Artur