Lema,
The short answer is: don't bother too much with Hausman tests.
The long answer is:
(1) Re-parameterize your model as a multilevel model;
(2) Decompose your variables into 'fixed' (within) and 'random' (between) components;
(3) Run it in R (through gretl, if you wish), and then;
(4) Do a joint F test of the equality of coefficients.
In R, it would look something like this:
<R>
library(car)   # for linearHypothesis()
library(lme4)  # for lmer()
# assuming you have these packages installed in your R
mydata <- read.csv("/.../mydata.csv", header = TRUE, sep = ",", fill = TRUE)
attach(mydata) # R purists like to lecture you about not using -attach()-; for now, ignore them
# decompose each predictor into its group mean (the 'between' part) and
# the deviation from that mean (the 'within' part); ave() does the
# bookkeeping and, unlike rep(tapply(...)), does not require the data
# to be sorted by groupvar
m_x1 <- ave(x1, groupvar)  # between component of x1
d_x1 <- x1 - m_x1          # within component of x1
m_x2 <- ave(x2, groupvar)  # between component of x2
d_x2 <- x2 - m_x2          # within component of x2
fit1 <- lmer(y ~ (1 | groupvar) + (1 | timevar), REML = TRUE)  # null model: intercepts only
summary(fit1)
fit2 <- lmer(y ~ d_x1 + m_x1 + d_x2 + m_x2 + (1 | groupvar) + (1 | timevar), REML = TRUE)
summary(fit2)
anova(fit1, fit2)  # lme4 refits both models with ML before comparing
# joint test that the within and between coefficients are equal;
# this is the same question a Hausman test asks
linearHypothesis(fit2, c("d_x1 = m_x1", "d_x2 = m_x2"))
</R>
(NB: -timevar- should be a time counter starting at 1 *within* each of your -groupvar- panels.)
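If you need to build such a counter yourself, a minimal sketch (assuming your data frame is called -mydata- with the columns used above, and that rows are already in chronological order within each group):
<R>
# hypothetical sketch: create a within-group time counter
mydata <- mydata[order(mydata$groupvar), ]  # keep each group's rows together
mydata$timevar <- ave(integer(nrow(mydata)), mydata$groupvar,
                      FUN = seq_along)      # restart 1, 2, 3, ... per group
</R>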
If you have run the above routine successfully, you have now tested whether your within ('fixed') and between ('random') coefficients are equal, which is precisely the question a Hausman test asks, thus obviating the need for one. (A rejection tells you the two sets of effects differ, in which case you should lean on the within, i.e. d_, coefficients.)
I hope that helps and good luck.