On Tue, 1 Dec 2020, Sven Schreiber wrote:
Am 01.12.2020 um 16:09 schrieb Summers, Peter:
> Hi Allin,
>
> Your logic sounds similar to that of the PE Test (MacKinnon, White &
> Davidson, 1983, J of Econometrics) for comparing models with y vs log(y). I
> hadn't heard about it until a couple weeks ago while I was teaching about
> LM tests. Here's a link to the R petest() function:
>
https://www.rdocumentation.org/packages/lmtest/versions/0.9-38/topics/petest.
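For anyone who hasn't seen it, the mechanics of the PE test are simple enough to sketch by hand. The idea: fit both the linear and the log model, then add the artificial regressor log(yhat_linear) - yhat_log to the linear model and t-test its coefficient (a symmetric step tests the log model). Below is a minimal numpy sketch, not the petest() implementation itself -- the function names and the toy data-generating process are mine:

```python
import numpy as np

def ols(X, y):
    """OLS coefficients and fitted values via least squares."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, X @ b

def pe_test_linear(X, y):
    """PE-style test of the linear model y = Xb against log(y) = Xg:
    augment the linear model with (log(yhat_lin) - yhat_log) and
    return the t-statistic on that extra regressor."""
    _, yhat_lin = ols(X, y)
    _, yhat_log = ols(X, np.log(y))
    z = np.log(yhat_lin) - yhat_log           # PE artificial regressor
    Xa = np.column_stack([X, z])
    b, fit = ols(Xa, y)
    resid = y - fit
    n, k = Xa.shape
    s2 = resid @ resid / (n - k)              # error-variance estimate
    cov = s2 * np.linalg.inv(Xa.T @ Xa)
    return b[-1] / np.sqrt(cov[-1, -1])       # t-stat on z

# toy data from a log-linear DGP, so the linear model is misspecified
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(1, 3, n)])
y = np.exp(X @ np.array([0.5, 1.0]) + 0.1 * rng.standard_normal(n))
t = pe_test_linear(X, y)
print(t)  # a large |t| rejects the linear specification
```

A significant t-statistic here says the linear model fails to account for curvature that the log model captures; the R petest() reports both directions of the comparison at once.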
Hi, such a test is also available in gretl as part of the contributed
package BoxCoxFuncForm, of which I am a coauthor. The reference we have
included in the help text is:
Russell Davidson and James G. MacKinnon, "Testing Linear and Loglinear
Regressions against Box-Cox Alternatives", The Canadian Journal of
Economics / Revue canadienne d'Economique, Vol. 18, No. 3 (Aug., 1985),
pp. 499-517. Stable URL: https://www.jstor.org/stable/135016
However, I'm not sure if that is what Allin was asking for.
Thanks, Peter and Sven. But my focus is not so much on judging
whether using y or log(y) is better; at this point I'm convinced
that log(y) gives a substantially better fit.
Let me try to explain my problem more clearly. My dependent variable
y can be decomposed as y = y1 + y2, and I have data on the
two components. I'm looking for a formal test between these
alternatives:
Restricted: y = Xb + u
Unrestricted: y = y1 + y2 = Xb1 + v + Xb2 + w
where in the second case b1 and b2 are to be estimated by separate
regressions of y1 on X and y2 on X. (One might think of SUR, but X
is the same in the two cases so it's equivalent to per-equation
OLS.)
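The SUR-equals-OLS point follows from OLS being linear in the dependent variable: with a common X, the coefficients from regressing y = y1 + y2 on X are exactly the sum of the per-equation coefficients, so the fitted values add up too. A quick numpy check (the toy data here is mine, just to illustrate the algebra):

```python
import numpy as np

# With a common regressor matrix X, OLS is linear in the dependent
# variable: b_hat(y1 + y2) = b_hat(y1) + b_hat(y2). This is why SUR
# with identical regressors collapses to per-equation OLS.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y1 = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
y2 = X @ np.array([0.5, -1.0]) + rng.standard_normal(n)

def ols(X, y):
    """OLS coefficient vector via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_sum = ols(X, y1 + y2)      # restricted regression of y on X
b1 = ols(X, y1)              # per-equation regressions
b2 = ols(X, y2)
print(np.allclose(b_sum, b1 + b2))  # True
```

Of course, this equivalence is exactly what breaks once the variables enter in logs, since log(y1 + y2) is not log(y1) + log(y2).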
I think this would be relatively straightforward if y were just in
levels, but coming up with a valid test is complicated by the fact
that I actually want to use log(y), log(y1) and log(y2) as the
dependent variables in the respective regressions.
Allin