Re: [Gretl-users] ADF Test
by artur.tarassow@googlemail.com
The answer is simple: for the s variable, the automatic lag selection settled on zero lags, so no lagged first difference(s) are included in the test regression. You can check this if you allow for lags but deactivate the automatic lag-selection option.
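For instance, the script equivalent of that check could look like the lines below. This is a minimal sketch: the variable name s and the constant-plus-seasonals deterministics simply mirror the output quoted further down.
<hansl>
# fixed lag order: the lagged difference d_s_1 is always kept
adf 1 s --c --seasonals --verbose
# automatic selection: gretl tests down and may drop the lag,
# which is what happened for s
adf 1 s --c --seasonals --test-down --verbose
</hansl>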
Best,
Artur
-original message-
Subject: [Gretl-users] ADF Test
From: "(s) Philip Braithwaite" <philip.braithwaite(a)students.plymouth.ac.uk>
Date: 18/02/2012 18:23
Hi,
I've been running the ADF test for two variables; however, when I view the regression results of the ADF test, it appears to be performed differently for each variable. The outputs for both variables are pasted below. The first one regresses the differenced u variable on u_1 and d_u_1 (plus seasonal dummies), which is what you would expect for the test. For s, however, it leaves out the d_s_1 term. I've found this happening with a number of other variables in different models.
Is this still performing a correct ADF test or should I be doing something differently?
Any information you could offer would be hugely appreciated.
Augmented Dickey-Fuller test for u
including one lag of (1-L)u (max was 1)
sample size 93
unit-root null hypothesis: a = 1
test with constant plus seasonal dummies
model: (1-L)y = b0 + (a-1)*y(-1) + ... + e
1st-order autocorrelation coeff. for e: -0.023
estimated value of (a - 1): -0.121213
test statistic: tau_c(1) = -1.98793
asymptotic p-value 0.2924
Augmented Dickey-Fuller regression
OLS, using observations 1980:03-1987:11 (T = 93)
Dependent variable: d_u
coefficient std. error t-ratio p-value
---------------------------------------------------------
const 0.0512208 0.00498901 10.27 3.37e-016 ***
u_1 -0.121213 0.0609744 -1.988 0.2924
d_u_1 -0.290699 0.106952 -2.718 0.0081 ***
dm1 -0.0616078 0.00781483 -7.883 1.47e-011 ***
dm2 -0.0914671 0.00835130 -10.95 1.64e-017 ***
dm3 -0.0703498 0.00808719 -8.699 3.77e-013 ***
dm4 -0.0598935 0.00699811 -8.559 7.09e-013 ***
dm5 -0.0549386 0.00698092 -7.870 1.56e-011 ***
dm6 -0.0216324 0.00692478 -3.124 0.0025 ***
dm7 -0.0375700 0.00692756 -5.423 6.20e-07 ***
dm8 -0.0733871 0.00660600 -11.11 8.23e-018 ***
dm9 -0.0622728 0.00756849 -8.228 3.13e-012 ***
dm10 -0.0360162 0.00685525 -5.254 1.23e-06 ***
dm11 -0.0341986 0.00663768 -5.152 1.85e-06 ***
AIC: -536.222 BIC: -500.766 HQC: -521.906
Dickey-Fuller test for s
sample size 95
unit-root null hypothesis: a = 1
test with constant plus seasonal dummies
model: (1-L)y = b0 + (a-1)*y(-1) + e
1st-order autocorrelation coeff. for e: -0.008
estimated value of (a - 1): -1.26693
test statistic: tau_c(1) = -11.9177
p-value 0.002828
Dickey-Fuller regression
OLS, using observations 1980:02-1987:12 (T = 95)
Dependent variable: d_s
coefficient std. error t-ratio p-value
---------------------------------------------------------
const 0.378700 0.0383446 9.876 1.31e-015 ***
s_1 -1.26693 0.106307 -11.92 0.0028 ***
dm1 -0.424225 0.0774678 -5.476 4.64e-07 ***
dm2 -0.462089 0.0504077 -9.167 3.35e-014 ***
dm3 -0.381479 0.0512581 -7.442 8.79e-011 ***
dm4 -0.338558 0.0526217 -6.434 7.84e-09 ***
dm5 -0.113624 0.0535687 -2.121 0.0369 **
dm6 -0.182175 0.0651366 -2.797 0.0064 ***
dm7 -0.439919 0.0575612 -7.643 3.55e-011 ***
dm8 -0.432768 0.0505549 -8.560 5.39e-013 ***
dm9 -0.500302 0.0516082 -9.694 3.01e-015 ***
dm10 -0.543976 0.0504374 -10.79 2.14e-017 ***
dm11 -0.547869 0.0503964 -10.87 1.45e-017 ***
AIC: -154.374 BIC: -121.173 HQC: -140.958
Thanks
Philip Braithwaite
Random-effects panel-data
by Helgi Tomasson
I am doing a panel-data exercise from Greene (5th ed.), exercise 13.1.
Why do I get precisely the same estimates in pooled OLS as in the
random-effects panel model?
Best regards
Helgi Tomasson
The data is below. I get the following output:
Model 2: Random-effects (GLS), using 30 observations
Included 3 cross-sectional units
Time-series length = 10
Dependent variable: y
coefficient std. error t-ratio p-value
--------------------------------------------------------
const −0.747476 0.955953 −0.7819 0.4408
x 1.05896 0.0586557 18.05 5.84e-17 ***
Mean dependent var 15.09667 S.D. dependent var 7.252441
Sum squared resid 120.6687 S.E. of regression 2.039850
Log-likelihood −63.44593 Akaike criterion 130.8919
Schwarz criterion 133.6942 Hannan-Quinn 131.7884
'Within' variance = 3.0455
'Between' variance = 0.113088
theta used for quasi-demeaning = 0
gretl gives me precisely the same answer for the pooled estimate and for the random-effects estimate. I have done this exercise before with an older version.
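A note on the 'theta' line in that output (a sketch of the standard algebra, not necessarily gretl's exact internals): the random-effects GLS estimator is OLS on quasi-demeaned data,

$$
\tilde y_{it} = y_{it} - \theta\,\bar y_i, \qquad
\theta = 1 - \sqrt{\frac{\sigma_\varepsilon^2}{\sigma_\varepsilon^2 + T\,\sigma_u^2}},
$$

so with theta = 0 no demeaning takes place at all and the random-effects estimates coincide numerically with pooled OLS. With the variances shown above, the implied individual-effect variance is roughly the between variance minus the within variance over T, i.e. 0.113088 - 3.0455/10 ≈ -0.19 < 0, which is truncated to zero and hence forces theta = 0.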
i t y x
1 1 13.32 12.85
2 1 20.30 22.93
3 1 8.85 8.65
1 2 26.30 25.69
2 2 17.47 17.96
3 2 19.60 16.55
1 3 2.62 5.48
2 3 9.31 9.16
3 3 3.87 1.47
1 4 14.94 13.79
2 4 18.01 18.73
3 4 24.19 24.91
1 5 15.80 15.41
2 5 7.63 11.31
3 5 3.99 5.01
1 6 12.20 12.59
2 6 19.84 21.15
3 6 5.73 8.34
1 7 14.93 16.64
2 7 13.76 16.13
3 7 26.68 22.70
1 8 29.82 26.45
2 8 10.00 11.61
3 8 11.49 8.36
1 9 20.32 19.64
2 9 19.51 19.55
3 9 18.49 15.44
1 10 4.77 5.43
2 10 18.32 17.06
3 10 20.84 17.87
Gretl error/crash
by (s) Simon Grenville-Wood
Hi,
I'm currently attempting a multinomial logit model on a large sample of panel data.
I need to create dummies for each individual in the sample to control for individual fixed effects.
To do so while using the logit model, I need to dummify the person ID (PID) in the sample.
I've opened the gretl console and I enter the following commands:
discrete PID    # mark the variable (19,421 values) as discrete
dummify PID     # create a dummy per individual, to control for fixed effects
The last command causes gretl to crash with a libcairo_32.dll error. This happens on every computer I use. Occasionally, if I use the menus instead of the console, it gives an 'out of memory' error instead. My PC is definitely powerful enough.
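(A rough back-of-envelope sketch, assuming the 19,421 figure is the number of distinct PID values and the panel has at least that many rows: dummify would create 19,421 new series, and at 8 bytes per value that is 19,421 × N × 8 bytes for N observations, i.e. roughly 3 GB already at N ≈ 19,421. The _32 in the DLL name suggests a 32-bit build, whose address space of about 2 GB would be exhausted well before that, however powerful the machine.)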
Does anyone have any ideas as to how I can get around this issue?
Thanks,
Simon
Bootstrap VAR models
by alexkakashi@libero.it
Hi,
I have the following question. Let us consider a trivariate VAR model. The
parameters of the model are estimated using:
system method=sur
equation Y const Y(-1) X(-1) Z(-1)
equation X const Y(-1) X(-1) Z(-1)
equation Z const Y(-1) X(-1) Z(-1)
end system
I'd like to study Granger causality from X to Y, with the critical values calculated by a residual-based bootstrap. Is this procedure implemented in gretl?
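As an illustration only, here is a minimal fixed-design sketch of such a bootstrap for the single restriction that X(-1) drops out of the Y equation. The variable names match the system above; the replication count B, and the use of a simple t-test rather than a joint test over several lags, are assumptions. A fully dynamic bootstrap would regenerate the series recursively from the estimated VAR instead of holding the regressors fixed.
<hansl>
# sketch: fixed-design residual bootstrap of the t-test on X(-1)
# in the Y equation (H0: X does not Granger-cause Y)
scalar B = 999
# drop the first observation so the lagged regressors are defined
smpl +1 ;
# restricted model, with the null of no causality imposed
ols Y const Y(-1) Z(-1) --quiet
series fit0 = $yhat
series u0 = $uhat
# observed t-statistic on X(-1) from the unrestricted equation
ols Y const Y(-1) X(-1) Z(-1) --quiet
matrix b = $coeff
matrix se = $stderr
scalar t_obs = b[3] / se[3]
matrix tboot = zeros(B, 1)
loop i=1..B -q
    # regenerate Y under the null and re-run the test regression
    series ystar = fit0 + resample(u0)
    ols ystar const Y(-1) X(-1) Z(-1) --quiet
    matrix bb = $coeff
    matrix seb = $stderr
    tboot[i] = bb[3] / seb[3]
endloop
scalar pval = sum(abs(tboot) .>= abs(t_obs)) / B
printf "bootstrap p-value for X -> Y: %g\n", pval
</hansl>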
Best regards.
Alessandro
MLE with endogenously determined moving average
by Johann Jaeckel
A couple of months ago I sent a question to the list regarding an MLE with an endogenously determined moving average. In response, Allin provided a generic script for a loop, which was exactly what I needed for my estimation (see the mail below).
Now, I would like to add a second independent variable, which, like the
first independent variable, should be split into a short-run (SR) and a
long-run (LR) coefficient:
Y = a + b1*X1_LR + e_(X1_LR) + b2*X1_SR + e_(X1_SR) + b3*X2_LR + e_(X2_LR)
+ b4*X2_SR + e_(X2_SR)
where all of the error terms are i.i.d. disturbances. X1_LR and X2_LR are
centered moving averages of the two independent variables, and X1_SR and
X2_SR are the deviations of the actual series from their moving averages.
As in the baseline model, the length of the moving average should be set such that the likelihood function is maximized. The loop script below runs the estimation once for each candidate length up to 'pmax', the maximum allowed length of the moving average.
Since I am using the same value of 'pmax' for both independent variables, I need a loop that runs 'pmax' squared estimations. Put differently, I want to determine for which combination of p for X1 and p for X2 the likelihood function is maximized.
Conceptually, this should be a relatively simple addition to the script.
However, I do not know how to add the necessary commands. Any help is
appreciated.
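A sketch of what that nested loop might look like, building directly on Allin's script quoted below; the series names x1 and x2, and the shared pmax, are illustrative. It simply grid-searches over both MA lengths and keeps the pair with the highest log-likelihood.
<hansl>
nulldata 200

series y = normal()
series x1 = normal()
series x2 = normal()
# longest allowable MA, shared by both regressors
pmax = 30
# observations lost at start and end
lost = int(pmax / 2)
p1_mle = 0
p2_mle = 0
llmax = -1e300

loop p1=2..pmax -q
    loop p2=2..pmax -q
        smpl --full
        series LR1 = movavg(x1, p1, 1)
        series SR1 = x1 - LR1
        series LR2 = movavg(x2, p2, 1)
        series SR2 = x2 - LR2
        # trim by the maximal loss so every (p1, p2) pair
        # is estimated on the same sample
        smpl +lost -lost
        ols y 0 LR1 SR1 LR2 SR2 --quiet
        if $lnl > llmax
            llmax = $lnl
            p1_mle = p1
            p2_mle = p2
        endif
    endloop
endloop

printf "\nloglik maximized at %f for p1 = %d, p2 = %d\n", llmax, p1_mle, p2_mle
</hansl>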
Date: Thu, 21 Jul 2011 10:46:54 -0400 (EDT)
> From: Allin Cottrell <cottrell(a)wfu.edu>
> Subject: Re: [Gretl-users] MLE with endogenously determined moving
> average
> To: Gretl list <gretl-users(a)lists.wfu.edu>
>
> On Wed, 20 Jul 2011, Johann Jaeckel wrote:
>
> > It's my first time posting on this list. I hope the issue I am having is
> > appropriate for this forum. Any help is greatly appreciated.
> >
> > I want to estimate a simple model with one independent variable using
> > MLE with a twist.
> >
> > The twist is the following, the effect of the independent variable (X)
> > is split into a short run (SR) and a long run (LR) coefficient. My model
> > thus looks like this:
> >
> > Y = a + b1*X_LR + e_LR + b2*X_SR + e_SR
> >
> > where e_LR and e_SR are i.i.d. disturbances. X_LR is a centered moving
> > average of the independent variable and X_SR is the deviation of the
> > actual series from its moving average.
> >
> > The length of the moving average should be determined endogenously, i.e.
> > in such a way that the likelihood function is maximized.
> >
> > Now, I am capable of running a basic MLE in Gretl by specifying the
> > log-likelihood function and the derivatives. However, I am having
> > troubles with creating the short run and the long run series and in
> > particular with the endogenous determination of the length of the moving
> > average.
> >
> > I have a hunch that I need to iterate the MLE command itself, but I have
> > no clue how to implement this in a script.
>
> I don't know how helpful this is, but here's illustrative use
> of a loop:
>
> <hansl>
> nulldata 200
>
> series y = normal()
> series x = normal()
> # set the longest allowable MA
> pmax = 30
> # observations lost at start and end
> lost = int(pmax / 2)
> p_mle = 0
> llmax = -1e300
>
> loop p=2..pmax -q
> smpl --full
> series LR = movavg(x, p, 1)
> series SR = x - LR
> # ensure common sample size
> smpl +lost -lost
> ols y 0 LR SR --quiet
> printf "p = %02d, loglik = %f\n", p, $lnl
> if $lnl > llmax
> llmax = $lnl
> p_mle = p
> endif
> endloop
>
> printf "\nloglik maximized at %f for p = %d\n", llmax, p_mle
> </hansl>
>
> Allin Cottrell
--
Johann
Deterministic trend in VAR
by Muheed Jamaldeen
Hi all,
Just a general VAR-related question: when is it appropriate to include a deterministic time trend in the reduced-form VAR? Visually, some of the data series (not all) look like they have trending properties. In any case, does the inclusion of the time trend matter if the process is stable, and therefore stationary, without the trend term (i.e. the polynomial defined by the determinant of the autoregressive operator has no roots in or on the complex unit circle)? Other than unit-root tests, is there a better way to test whether the underlying data-generating process has a stochastic or a deterministic trend?
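(For reference, a sketch of the textbook distinction behind that last question: a trend-stationary series is generated by $y_t = a + bt + u_t$ with stationary $u_t$, while a difference-stationary one follows $y_t = c + y_{t-1} + \varepsilon_t$; realizations of the two can look very similar in a plot, which is why eyeballing the series is not decisive.)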
I am mainly interested in the impulse responses.
Cheers,
Mj
Decimal Point in Gnuplot
by Henrique Andrade
Dear Gretl Team,
I'm trying to plot some graphs with gretl for Mac (snapshot 2012-02-07), but gnuplot only shows "." as the decimal sign, even though my system language is Brazilian Portuguese. The only way to plot graphs with "," is to use the option explicitly (set decimalsign ",") in the gnuplot command. I see this behavior on both Lion and Snow Leopard.
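For scripting, a minimal sketch of that workaround, assuming a series named y: gretl's gnuplot command accepts literal gnuplot statements in a trailing brace block, so the decimal sign can be forced per plot.
<hansl>
# pass the decimalsign setting verbatim to gnuplot
gnuplot y --time-series --with-lines --output=display { set decimalsign ","; }
</hansl>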
Best regards,
Henrique Andrade
[help] How to get the coefficients from SUR or System estimation
by caspisun@gmail.com
Dear all,
I am struggling with how to get the coefficients from SUR or system estimation.
I know that, in the case of a simple regression, if I want to get a coefficient in order to recycle it, say after
Y const X1 X2
I can retrieve it with the following commands:
genr aaa = $coeff(X1)
genr aaa = $coeff(const)
However, I could not find how to do the same after SUR or system estimation. For example, in the following SUR estimation:
================================================
system name="Rotterdam"
equation wdq1 DivisiaQ ddp1 ddp2 ddp3
equation wdq2 DivisiaQ ddp1 ddp2 ddp3
equation wdq3 DivisiaQ ddp1 ddp2 ddp3
end system
restrict "Rotterdam"
b[1,3]-b[2,2]=0
b[1,4]-b[3,2]=0
b[2,4]-b[3,3]=0
end restrict
estimate "Rotterdam" method=sur --iterate
estimate "Rotterdam" method=3sls --iterate
===================================================
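A hedged sketch of one possible answer: after estimate, the matrix accessors should apply to the system as a whole, with $coeff returning the coefficients of all equations stacked in order. The index 2 below, picking out the DivisiaQ slope of the first equation, is an assumption about that ordering.
<hansl>
estimate "Rotterdam" method=sur --iterate
matrix b = $coeff    # stacked vector: equation 1 first, then 2, then 3
scalar b_dq1 = b[2]  # e.g. the DivisiaQ slope in the wdq1 equation
print b
</hansl>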
Any suggestions?
Sincerely
Sung Jin Lim