On Mon, 15 Jun 2020, Sven Schreiber wrote:
> On 15.06.2020 at 03:44, Allin Cottrell wrote:
> > On Sun, 14 Jun 2020, Sven Schreiber wrote:
> >> So maybe power turns around for non-Gaussian scenarios.
> >
> > Certainly relevant; thanks, Sven. I hadn't twigged that "Test2" is
> > equivalent to Koenker's robust B-P version. I re-ran my test script with
> > uniform errors (should probably try some other cases) and found:
> >
> > * Under H0 the original B-P test is "under-sized": it rejects at much
> > less than 5 percent frequency using alpha = 0.05. The size of the robust
> > version is roughly right.
> >
> > * Under my H1, error = 0.2*x*uniform(), the original B-P test still
> > rejects with much higher frequency than the Koenker variant.
>
> Obviously, if the original test were always correctly sized or
> conservative _and_ had more power, it would be superior. My suspicion is
> that for other kinds of violations of the assumptions it might be
> oversized, however. But I don't know this specific literature, so far as
> it exists.
I tried another run of my Monte Carlo, with t(12) errors. Original
B-P was somewhat oversized (around 0.075 or 0.08 for nominal 0.05)
but reasonably powerful. The Koenker robust variant was correctly
sized at 0.05 (as claimed) but had about 1/2 to 5/8 the power
against my H1 (with error multiplied by 0.2*x).
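For anyone who wants to replicate this, here is a minimal sketch of that kind of Monte Carlo, in Python/NumPy rather than hansl, and with details of the design (n = 200, x uniform on [1, 5], intercept and slope both 1, 2000 replications) that are my own arbitrary choices, not necessarily those of my actual script. It computes the original B-P statistic (ESS/2 from the auxiliary regression of u^2/sigma2_hat on x) and Koenker's nR^2 variant, with t(12) errors under both H0 and the H1 above:

```python
import numpy as np

rng = np.random.default_rng(1)
CRIT = 3.8415  # chi-square(1) critical value at alpha = 0.05

def bp_stats(y, X):
    """Return (original B-P LM, Koenker nR^2) for a regression of y on X."""
    n = len(y)
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]  # OLS residuals
    u2 = u ** 2
    # original Breusch-Pagan: regress u^2 / sigma2_hat on X, LM = ESS / 2
    g = u2 / u2.mean()
    fit_g = X @ np.linalg.lstsq(X, g, rcond=None)[0]
    lm_orig = np.sum((fit_g - g.mean()) ** 2) / 2.0
    # Koenker's robust variant: n * R^2 from regressing u^2 on X
    fit_u2 = X @ np.linalg.lstsq(X, u2, rcond=None)[0]
    r2 = np.sum((fit_u2 - u2.mean()) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    return lm_orig, n * r2

n, reps = 200, 2000
rej = {"orig": [0, 0], "koenker": [0, 0]}  # rejection counts: [H0, H1]
for _ in range(reps):
    x = rng.uniform(1, 5, n)
    X = np.column_stack([np.ones(n), x])
    e = rng.standard_t(12, n)
    for h, y in enumerate([1 + x + e,              # H0: homoskedastic
                           1 + x + 0.2 * x * e]):  # H1: error = 0.2*x*e
        s = bp_stats(y, X)
        rej["orig"][h] += s[0] > CRIT
        rej["koenker"][h] += s[1] > CRIT

for name, (h0, h1) in rej.items():
    print(f"{name:8s} size = {h0 / reps:.3f}  power = {h1 / reps:.3f}")
```

The exact rejection frequencies will of course depend on the sample size, the design of x and the strength of the heteroskedasticity under H1.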
It seems the subsequent literature has kind of skated over Koenker's
1981 caveat, that the statistic is "not entirely satisfactory" since
"the power of the resulting test may be quite poor except under
idealized Gaussian conditions". I would add, _even_ under Gaussian
conditions.
Allin