On 05.12.2016 at 19:19, Riccardo (Jack) Lucchetti wrote:
> On Sun, 4 Dec 2016, Allin Cottrell wrote:
>> It seems that under the null the p-value ought to be distributed
>> uniformly on (0,1). That appears to be the case for the chi-square
>> test, but not at all for the two tests that employ the inverse normal
>> transformation.
I don't know this lottery, and my grasp of statistics for discrete-valued
data is probably a bit rusty, but why would the normal distribution play a
role here when you distribute 605 draws randomly over 69 bins?
(Non-negativity / bounded support being only one of the issues, perhaps?)
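
Just so I'm sure I understand the setup, here's a quick sketch in Python
(my own reconstruction of the chi-square side of things, not your hansl
code): 605 draws spread over 69 equally likely bins, with the
goodness-of-fit p-value collected over many replications. Under the null
it does come out roughly uniform, as you report.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_draws, n_bins, n_reps = 605, 69, 5000

pvals = np.empty(n_reps)
for r in range(n_reps):
    # uniform "lottery" draws over 69 bins
    draws = rng.integers(0, n_bins, size=n_draws)
    counts = np.bincount(draws, minlength=n_bins)
    # chi-square GOF test against equal expected frequencies (605/69 per bin)
    pvals[r] = stats.chisquare(counts).pvalue

# under the null each decile should hold roughly 10% of the p-values
print(np.histogram(pvals, bins=10, range=(0, 1))[0] / n_reps)
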
> The way I see it, the series z you're generating in the "cdftest"
> function is not really normally distributed. Rather, it is constructed
> in a way such that its frequency distribution resembles a Gaussian
> density, which wouldn't be guaranteed if the data were truly normal.
> In other words, your normals are "too good to be true"; hence, your
> p-values are mostly very close to 1.
Jack, I know you must mean something other than what you've written --
the data's density being "too" Gaussian to be Gaussian??
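
For what it's worth, here is roughly how I read your point, sketched in
Python (my own stand-in, not cdftest or the actual gretl normality tests):
a series constructed so that its histogram matches the Gaussian density
almost perfectly sails through a distributional test with a p-value of
essentially 1, whereas genuinely normal draws of the same length give
p-values spread over (0,1).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 605

# "too good to be true": z built from evenly spaced normal quantiles, so
# its empirical distribution tracks the N(0,1) cdf as closely as possible
z = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)
print("constructed z: KS p-value =", stats.kstest(z, "norm").pvalue)

# genuinely normal samples: the same test gives p-values spread over (0,1)
pvals = np.array([stats.kstest(rng.standard_normal(n), "norm").pvalue
                  for _ in range(2000)])
print("real normals: share of KS p-values above 0.9 =", (pvals > 0.9).mean())
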
cheers,
sven