On Thu, 9 Feb 2017, Sven Schreiber wrote:
 Hi,
 I think it is not currently possible to do restricted ML estimation with the 
 built-in estimators (apart from the special case OLS), otherwise I would have 
 formulated this as a question and sent to gretl-users instead. Before someone 
 answers "do your own mle block", I'm aware of that possibility but
that's not 
 the point.
 For example in the ordered probit model I have checked that one can test 
 restrictions on the thresholds (cut points) with a standard "restrict ... end 
 restrict" command block. This raised the question of why actually it isn't 
 possible (easily...) to impose that restriction in ML estimation.
 Note that the probit is just an example, in principle I guess this would 
 apply to many (all?) estimators that are built on ML and which already allow 
 testing a restriction on an element of $coeff.
 I could imagine that in some model contexts arbitrarily restricting some 
 parameter might induce ill-behaved likelihood function shapes. But in 
 principle it should be OK, no? 
In principle, for linear restrictions a general solution to what you 
propose could be achieved by resorting to the following apparatus: call g 
the score vector and express the linear restrictions as Rb = d, where b 
is the vector of unrestricted parameters.
By working on the null space of R, you can turn the constraints from 
implicit to explicit form as b = S \theta + s (like we do in the SVAR 
package), so that by the chain rule the score with respect to \theta 
becomes g'S (and the Hessian S'HS, if needed). Then you run BFGS or 
Newton on \theta and map the solution back to b via \hat{b} = 
S\hat{\theta} + s, plus all the delta-method adjustments to $vcv. Clearly, 
as you say, there's no guarantee that the resulting problem will be well 
behaved, but I guess that's unavoidable.
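Just to fix ideas, here is a minimal hansl sketch of that construction (not 
built-in functionality, only the algebra; it assumes the restriction 
b2 - b3 = 0 used in the example further down, so R = {0, 1, -1} and d = {0}; 
variable names are purely illustrative):
<hansl>
# sketch of the implicit-to-explicit mapping for R*b = d
matrix R = {0, 1, -1}           # restriction b2 - b3 = 0
matrix d = {0}
matrix S = nullspace(R)         # columns span the null space of R
matrix s = R' * inv(R*R') * d   # one particular solution of R*s = d
# any b = S*theta + s satisfies R*b = d, whatever theta is
matrix theta = muniform(cols(S), 1)
matrix chk = R*(S*theta + s)    # equals d
print chk
</hansl>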
The problem is that the possibility of going through the above would have 
to be provided for in the C code, and at present it isn't (and believe me, 
doing so for all the ML estimators we have would be a major piece of work). 
Moreover, in the special case of index models, the same effect can be 
achieved more simply, as linear restrictions on coefficients can often be 
re-expressed by a redefinition of the explanatory variables; see example 
below:
<hansl>
set verbose off
set seed 110217
nulldata 1024
x1 = uniform()
x2 = uniform()
y = 1 + x1 + x2 + normal() > 0
probit y 0 x1 x2 --p-values
pnames = varname($xlist) # regressor names, used later for modprint
L0 = $lnl                # unrestricted log-likelihood, used later
# test b2 = b3 via a Wald test
restrict
     b2 - b3 = 0
end restrict
# impose the same restriction by redefining the regressors, then use an LR test
z = x1 + x2
probit y 0 z --quiet
L1 = $lnl
LR = 2 * (L0 - L1)
S = {1,0; 0, 1; 0, 1}    # maps (const, z) back to (const, x1, x2)
btilde = S*$coeff
v = qform(S, $vcv)
cs = btilde ~ sqrt(diag(v))
modprint cs pnames     # restricted model
printf "LR test = %g (pvalue = %g)\n", LR, pvalue(x, 1, LR)
</hansl>
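And just to connect the two approaches: the "do your own mle block" route, 
written with the explicit reparameterization above, could look roughly like 
this (a sketch only, reusing y, x1, x2, S and pnames from the script, with 
s = 0 here since d = 0):
<hansl>
# sketch: restricted probit by hand, with b = S*theta (s = 0 since d = 0)
matrix theta = zeros(cols(S), 1)
list X = const x1 x2
mle ll = ln(cnorm((2*y - 1) * ndx))
    matrix b = S*theta          # map the free parameters to the full vector
    series ndx = lincomb(X, b)
    params theta
end mle
matrix bhat = S*$coeff          # restricted estimates of b
matrix V = qform(S, $vcv)       # vcv of b: linear map of $vcv (delta method)
matrix cs2 = bhat ~ sqrt(diag(V))
modprint cs2 pnames
</hansl>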
As for nonlinear restrictions, things would be even more complex, and I 
don't think there's anything that can be done automatically.
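By hand, of course, you can always substitute a nonlinear restriction 
directly into the index inside an mle block; a toy sketch on the same data 
(imposing, say, b3 = b2^2, purely for illustration):
<hansl>
# toy sketch: impose the nonlinear restriction b3 = b2^2 by substitution
scalar b1 = 0
scalar b2 = 0
mle ll = ln(cnorm((2*y - 1) * ndx))
    series ndx = b1 + b2*x1 + b2^2 * x2
    params b1 b2
end mle
</hansl>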
-------------------------------------------------------
   Riccardo (Jack) Lucchetti
   Dipartimento di Scienze Economiche e Sociali (DiSES)
   Università Politecnica delle Marche
   (formerly known as Università di Ancona)
   r.lucchetti(a)univpm.it
   
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------