Dear Sven,

Thank you very much (and thanks to all the professors who kindly and actively helped me clarify things).

Just one last issue. In a previous email you wrote: "let me add that this is just a comparison of the mechanical workings of the tools. All those results are likely invalid because the test is not valid in general with such an unrestricted exogenous variable which is not an impulse dummy, but is modeling some breaks or thresholds."

In most of the books and posts I have read, dummy variables are always mentioned as a way to account for issues in the time series (seasonality, breaks, outliers).

However, my dummy is meant to investigate the impact of a policy shift. For example, I have GDP (Gross Domestic Product) and NDB, the amount of loans outstanding at a public bank. With the dummy variable I want to represent a government policy that instructed the bank to provide a higher percentage of its total loans to farmers (for example). So I built a dummy which is 1 during the periods when the policy was in force, and I set it as exogenous. As you can infer, my idea was to use this dummy to investigate the implications of this policy for GDP (and NDB).
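For concreteness, this is roughly how I constructed the shift dummy (a minimal Python sketch; the sample range and the policy window below are made-up numbers purely for illustration, not my actual data):

```python
# Build a quarterly shift dummy: 1 while the (hypothetical) policy is in force,
# 0 otherwise. This is a step/shift dummy, not a one-period impulse dummy.

years = range(2000, 2011)                      # assumed sample: 2000Q1 .. 2010Q4
quarters = [(y, q) for y in years for q in (1, 2, 3, 4)]

policy_start = (2004, 1)                       # assumed start of the policy
policy_end = (2007, 4)                         # assumed end of the policy

# Tuple comparison (year, quarter) gives chronological ordering.
dummy = [1 if policy_start <= (y, q) <= policy_end else 0
         for (y, q) in quarters]

print(sum(dummy))   # number of quarters with the policy in force: 16
```

This dummy series would then be passed to the estimation routine as an unrestricted exogenous regressor.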

Is this procedure wrong?

PS: I should acknowledge that I have only seen this strategy in the following master's thesis (where, by the way, I think the author misapplied the Johansen et al. (2000) methodology, because he probably did not construct the dummy variables correctly, and I also doubt the cointegration test with breaks is suitable for this case):

http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=2796810&fileOId=2796812

The author stated: "in the year 1979, China launched two crucial policies, that is, economic reform and opening-up policy and one-child policy. To capture these effects, a shift dummy variable is included in the cointegration estimations."

The author included the shift dummy as exogenous and used the critical values from Johansen et al. (2000).

Thanks in advance

Reynaldo

Am 01.02.2019 um 09:30 schrieb Sven Schreiber:

urca:

          test  10pct   5pct   1pct
r <= 1 |  5.67   6.50   8.18  11.65
r =  0 | 26.35  15.66  17.95  23.52

So those are the differing conclusions from the OP; gretl and tsDyn again agree, urca doesn't.

Some further evidence from Stata's documentation, see https://www.stata.com/manuals14/tsvecrank.pdf.

Some critical values for the trace statistic are printed in the examples there. Note: they have a 3-equation system, whereas in our (Reynaldo's) example we have 2 equations (2 endogenous variables). What matters for the distribution (critical values) is N - r_0 under H0, i.e. the number of I(1) common trends under the null, so you must not compare their r = 0 case with our r = 0 directly. This can be confusing, but I hope I got it right.

With this in mind, Stata has (Examples 2 and 1):

N - r_0 = 2, 1%:  Stata 20.04  vs  urca 23.52
N - r_0 = 1, 1%:  Stata  6.65  vs  urca 11.65
N - r_0 = 2, 5%:  Stata 15.41  vs  urca 17.95
N - r_0 = 1, 5%:  Stata  3.76  vs  urca  8.18

Notice that with Stata's critical values the test conclusions from Reynaldo's example would agree with gretl (and tsDyn).
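To make that concrete, here is a minimal Python sketch of the mechanical sequential trace-test decision at the 5% level, using only the statistics and critical values quoted above (the point is the mechanics of the rank selection, not which set of critical values is correct):

```python
# Trace statistics from the urca output above (2 endogenous variables):
# H0: r = 0 has N - r_0 = 2, H0: r <= 1 has N - r_0 = 1.
trace_stats = {0: 26.35, 1: 5.67}

# 5% critical values, keyed by N - r_0, as quoted in this thread.
cv_5pct = {
    "urca":  {2: 17.95, 1: 8.18},
    "stata": {2: 15.41, 1: 3.76},
}

def selected_rank(stats, cvs, n=2):
    """Test H0: rank <= r sequentially; stop at the first non-rejection."""
    for r in range(n):
        if stats[r] <= cvs[n - r]:   # fail to reject -> select this rank
            return r
    return n                         # rejected everywhere -> full rank

for source, cvs in cv_5pct.items():
    print(source, "->", selected_rank(trace_stats, cvs))
# urca  -> 1  (5.67 < 8.18: cannot reject r <= 1)
# stata -> 2  (5.67 > 3.76: r <= 1 is rejected as well)
```

So the two sets of critical values lead to different selected ranks at the second step, which is exactly where the conclusions diverge.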

Both Stata and urca claim to use Osterwald-Lenum. Unfortunately I haven't been able to quickly get hold of a copy of that paper, so I couldn't check.

I repeat that I found a MacKinnon et al. paper which at first glance seemed to support urca.

In any case, gretl is in good company, whereas urca apparently isn't.

cheers,

sven

_______________________________________________
Gretl-users mailing list
Gretl-users@lists.wfu.edu
http://lists.wfu.edu/mailman/listinfo/gretl-users