gretl crashed while trying a qqplot
by Fred Engst
Hi Allin,
As I was trying to figure out the relationship between the p-value reported by the ADF test and the observed distribution of the slope estimate, an attempt at a qqplot crashed gretl. The crash is repeatable (report attached).
I then tried gretlcli, and the message it gave was:
? qqplot ct_p b_rho
Gnuplot is broken or too old: must be >= version 5.0
I'm not sure whether this is related to the crash, since gretlcli can run gnuplot anyway.
Fred
attached is the crash report.
Non-uniform distribution of the p-value from the ADF test in gretl when there is an intercept or trend?
by Fred Engst
>
> On Tue, 23 Apr 2019, Fred Engst wrote:
>
>> Hi all,
>
>> As I was trying to see how adf performs under different scenarios,
>> I found somewhat surprising results.
>>
>> ADF seems to work fine when there is an intercept, a trend, or
>> both. But when there is neither, the distribution of the
>> resulting p-value becomes a uniform distribution.
>>
>> What did I do wrong?
>
> I guess you didn't do anything wrong. When the null hypothesis is
> true and a test is working as designed, its p-value will be
> uniformly distributed on (0,1). Then using marginal significance
> level alpha you'll reject 100*alpha percent of the time, and the
> test is properly sized. Funny how probability works.
>
> Allin
Thanks Allin.
That is interesting indeed. In other words, with a uniform distribution of the p-value [when y = y(-1)], there is only a 5% chance that we reject the null of a random walk at the 5% level of significance.
However, for non-zero values of the intercept or trend parameters, the p-value distribution is not uniform. Does that mean the test's size is incorrect in those cases?
For example, when I set both the intercept and trend parameters to 0.1, I get the attached table.
It seems that there is only a 0.18% chance that I will reject the null (reading the 2nd column of the table).
I’m confused.
Fred
The following frequency table illustrates my point:
                    y = 0.1 + y(-1) + 0.1*t               y = y(-1)
p-val      ct p-val    c p-val    nc p-val    ct p-val    c p-val    nc p-val
bin        cum.        cum.       cum.        cum.        cum.       cum.
0.000 0.04% 0.00% 0.00% 2.24% 2.27% 2.28%
0.025 0.08% 0.00% 0.00% 4.51% 4.52% 4.44%
0.050 0.18% 0.00% 0.00% 7.07% 6.80% 6.79%
0.075 0.28% 0.00% 0.00% 9.57% 9.26% 9.02%
0.100 0.35% 0.00% 0.00% 12.09% 11.70% 11.31%
0.125 0.41% 0.00% 0.00% 14.23% 14.17% 13.88%
0.150 0.47% 0.00% 0.00% 16.51% 16.65% 15.94%
0.175 0.51% 0.00% 0.00% 18.78% 19.24% 18.24%
0.200 0.65% 0.00% 0.00% 21.47% 21.59% 20.47%
0.225 0.79% 0.00% 0.00% 24.05% 24.12% 22.92%
0.250 0.93% 0.00% 0.00% 26.28% 26.66% 25.40%
0.275 1.15% 0.00% 0.00% 28.71% 29.02% 27.85%
0.300 1.31% 0.00% 0.00% 30.81% 31.51% 30.49%
0.325 1.53% 0.00% 0.00% 33.17% 33.77% 33.15%
0.350 1.71% 0.00% 0.00% 35.60% 36.16% 35.29%
0.375 1.93% 0.00% 0.00% 38.22% 38.81% 37.77%
0.400 2.19% 0.00% 0.01% 40.91% 41.46% 40.51%
0.425 2.40% 0.00% 0.01% 43.10% 43.93% 42.97%
0.450 2.67% 0.00% 0.02% 45.68% 46.36% 45.83%
0.475 2.92% 0.00% 0.02% 48.26% 48.99% 48.31%
0.500 3.22% 0.00% 0.03% 50.85% 51.40% 50.64%
0.525 3.60% 0.00% 0.06% 53.34% 53.76% 53.21%
0.550 3.87% 0.00% 0.12% 55.91% 56.12% 55.55%
0.575 4.22% 0.00% 0.16% 58.54% 58.48% 57.88%
0.600 4.73% 0.00% 0.32% 60.83% 60.69% 60.57%
0.625 5.12% 0.00% 0.49% 63.59% 63.01% 62.98%
0.650 5.63% 0.00% 0.81% 66.29% 65.69% 65.55%
0.675 6.13% 0.00% 1.39% 68.74% 68.24% 68.23%
0.700 6.71% 0.00% 2.30% 71.49% 70.59% 71.14%
0.725 7.35% 0.00% 3.71% 74.05% 72.90% 73.77%
0.750 8.06% 0.00% 6.04% 76.53% 75.43% 76.29%
0.775 9.06% 0.00% 9.63% 79.17% 77.91% 79.17%
0.800 10.03% 0.00% 15.20% 81.86% 80.50% 81.73%
0.825 11.09% 0.00% 23.84% 84.31% 83.13% 84.38%
0.850 12.67% 0.00% 36.05% 86.71% 85.75% 86.74%
0.875 14.80% 0.00% 51.83% 89.00% 88.25% 89.39%
0.900 17.99% 0.00% 71.48% 91.66% 91.06% 92.07%
0.925 22.44% 0.00% 89.52% 94.19% 93.94% 94.69%
0.950 31.22% 0.00% 99.00% 96.75% 96.80% 97.43%
1.000 100.00% 100.00% 100.00% 100.00% 100.00% 100.00%
# simulation of unit root process
# y(t)=b0+rho*y(t-1)+b1*time+error
# there are b0, rho, and b1 parameters to set
scalar N = 500 # series length
scalar R = 10000 # repeats or resampling
nulldata N --preserve
scalar rho = 1 # set rho from 0.0 to 1.0
scalar b0 = 0.1 # set intercept b0 = any value
scalar b1 = 0.1 # set trend b1 to any value
string outfilename = sprintf("unitroot,N=%d,b0=%3.1f,b1=%3.1f,rho=%3.2f.gdt",N,b0,b1,rho)
loop R --progressive --quiet
series e = normal(0,10) # error vector
series y = 0
y = b0 + rho*y(-1) + b1*index + e
ols y const y(-1) index --quiet
scalar b_c = $coeff(const)
scalar b_rho = $coeff[2]
scalar b_t = $coeff[3]
adf -1 y --ct --quiet
scalar ct_p = $pvalue
scalar ct_t = $test
adf -1 y --c --quiet
scalar c_p = $pvalue
scalar c_t = $test
adf -1 y --nc --quiet
scalar nc_p = $pvalue
scalar nc_t = $test
print b_c b_rho b_t ct_p ct_t c_p c_t nc_p nc_t
store "@outfilename" b_c b_rho b_t ct_p ct_t c_p c_t nc_p nc_t
endloop
open "@outfilename" #freq b_c --plot=display
#freq b_rho --plot=display
#freq b_t --plot=display
freq ct_p --plot=display --min=0 --binwidth=0.025
#freq ct_t --plot=display
freq c_p --plot=display --min=0 --binwidth=0.025
#freq c_t --plot=display
freq nc_p --plot=display --min=0 --binwidth=0.025
#freq nc_t --plot=display
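For reference, here is a minimal sketch of how one could read the rejection rate at the 5% level directly from the stored p-values, rather than eyeballing the frequency table. It assumes the stored dataset has already been opened (as in the "open" line above) and uses the ct_p, c_p and nc_p series defined in the loop:
# empirical rejection rates at the 5% level; each comparison
# yields a 0/1 series, so its mean is the share of rejections
scalar size_ct = mean(ct_p < 0.05)
scalar size_c  = mean(c_p < 0.05)
scalar size_nc = mean(nc_p < 0.05)
printf "rejection rates at the 5 percent level: ct %.4f, c %.4f, nc %.4f\n", size_ct, size_c, size_nc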
SVEC restrictions
by Olasehinde Timmy
Dear Sven
I am very happy with your last response about the jalpha and jbeta
matrices. However, I would like to know how to peruse the short-run and the
long-run restrictions. Are they similar to the conventional short- and
long-run matrices in JMulTi? If not, that would be contrary to how the SVAR
addon's manual describes them.
Regards
Timmy.
A uniform distribution of the p-value from the ADF test in gretl when there is neither intercept nor trend?
by Fred Engst
Hi all,
As I was trying to see how adf performs under different scenarios, I found somewhat surprising results.
ADF seems to work fine when there is an intercept, a trend, or both. But when there is neither, the distribution of the resulting p-value becomes a uniform distribution.
What did I do wrong?
Fred
Here is the script that generates random-walk series and then tests how well adf performs:
# simulation of unit root process
# y(t)=b0+rho*y(t-1)+b1*time+error
# there are b0, rho, and b1 parameters to set
scalar N = 50 # series length
scalar R = 1000 # repeats or resampling
nulldata N --preserve
scalar rho = 1 # set rho from 0.0 to 1.0
scalar b0 = 0 # set intercept b0 = any value
scalar b1 = 0 # set trend b1 to any value
string outfilename = sprintf("unitroot,N=%d,b0=%3.0f,b1=%3.0f,rho=%3.2f.gdt",N,b0,b1,rho)
loop R --progressive --quiet
series e = normal(0,10) # error vector
series y = 0
y = b0 + rho*y(-1) + b1*index + e
ols y const y(-1) index --quiet
scalar b_c = $coeff(const)
scalar b_rho = $coeff[2]
scalar b_t = $coeff[3]
adf -1 y --ct --quiet
scalar ct_p = $pvalue
scalar ct_t = $test
adf -1 y --c --quiet
scalar c_p = $pvalue
scalar c_t = $test
adf -1 y --nc --quiet
scalar nc_p = $pvalue
scalar nc_t = $test
print b_c b_rho b_t ct_p ct_t c_p c_t nc_p nc_t
store "@outfilename" b_c b_rho b_t ct_p ct_t c_p c_t nc_p nc_t
endloop
open "@outfilename"
freq b_c --plot=display
freq b_rho --plot=display
freq b_t --plot=display
freq ct_p --plot=display
#freq ct_t --plot=display
freq c_p --plot=display
#freq c_t --plot=display
freq nc_p --plot=display
#freq nc_t --plot=display
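As a rough numerical check of that uniformity, one could also compare the share of p-values below a few cutoffs with the cutoffs themselves; under a uniform distribution the two should be close. A small sketch, assuming the stored dataset is open (as after the "open" line above) and using the nc_p series defined in the loop:
# under uniformity, the share of p-values below a cutoff
# should be close to the cutoff itself
printf "share of nc_p below 0.05: %.4f\n", mean(nc_p < 0.05)
printf "share of nc_p below 0.25: %.4f\n", mean(nc_p < 0.25)
printf "share of nc_p below 0.50: %.4f\n", mean(nc_p < 0.50)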
(no subject)
by Olasehinde Timmy
Dear Sven,
I appreciate the time you allotted to developing the SVEC GUI. However,
$jbeta and $jalpha failed to work: gretl showed "the statistic you
requested was not available". Please, how can I resolve this?
Regards
Timmy
Re: Loading large datasets into gretl
by Allin Cottrell
On Wed, 17 Apr 2019, Logan Kelly wrote:
> I have students who are working with very big dataset--around 9
> million observations. I had one student try to load a 4 GB csv
> file into gretl, and gretl loaded it! But with some errors.
What sort of errors -- can you elaborate?
> So my questions are
>
> 1. What is the largest data set one should expect gretl to handle?
Well, that's going to depend on how much RAM you have.
> 2. Are there any suggestions for handling large datasets in gretl?
For one thing, with many millions of observations any tiny, tiny
effect will be "statistically significant"; it's probably a good
idea to down-sample (perhaps at random) to an n in the hundreds of
thousands.
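For example, a minimal sketch of one way to do that in gretl once the full dataset is loaded (the target size and the seed are just illustrative, and this assumes a plain cross-sectional dataset):
# draw a random subsample of 200000 cases from the loaded dataset
set seed 20190417      # only to make the draw reproducible
smpl 200000 --random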
> 3. Is there a better file type than csv to import large datasets
> into gretl?
Not really; our CSV importer is about the most effective of our
various importers.
A general comment: In gretl, every data value is stored as a
"double" (a double-precision floating-point value, which occupies 64
bits or 8 bytes). But in some huge datasets many of the variables
may be representable in a much smaller data type, such as a single
byte (8 bits). If you're loading a 4 GB CSV file with a lot of 0s
and 1s as data values, those values will be expanded by a factor of
8 in gretl's in-memory version -- which may make the difference
between feasible and infeasible, for given RAM.
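To put rough, purely illustrative numbers on that: nine million rows of a single 0/1 column occupy about 9 MB stored as single bytes, but about 72 MB once held as doubles, so a hundred such columns already amount to roughly 7 GB in memory.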
This is something we may want to think about in future. It will not
be easy to allow smaller data types for series but maybe that's
something we need to aim for, eventually.
Allin
Loading large datasets into gretl
by Logan Kelly
Hello all,
I have students who are working with a very big dataset--around 9 million observations. I had one student try to load a 4 GB CSV file into gretl, and gretl loaded it! But with some errors. So my questions are:
1. What is the largest data set one should expect gretl to handle?
2. Are there any suggestions for handling large datasets in gretl?
3. Is there a better file type than csv to import large datasets into gretl?
Thanks,
Logan
Logan Kelly, Ph.D.
Associate Professor and Chair, Dept. of Economics
University of Wisconsin-River Falls
p: (715) 425-4324 m: (401) 256-0986 f: (715) 425-0707
410 S. 3rd Street, River Falls, WI 54022, Room 27E South Hall
Click to schedule a meeting<https://calendly.com/kellyecon/uwrf>
welcome to the new gretl lists
by Allin Cottrell
Hello all,
If all goes well, you are reading a post from the new gretl-users
and/or gretl-devel list(s).
Please make note of the new posting addresses.
--
Allin Cottrell
Department of Economics
Wake Forest University, NC
Test
by r.lucchetti@univpm.it
foo bar baz