On Wed, 30 Apr 2014, Riccardo (Jack) Lucchetti wrote:
You have to use the delta method for that, which is arguably another good reason for computing analytical derivatives.
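(For the record: if p = g(b) and the estimate of b has covariance matrix V, the delta method gives Var(p) ≈ J V J', where J is the Jacobian of g evaluated at the estimate. That is what qform(J, $vcv) computes in the code below.)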
I wrote a few functions to compute a regression in which the coefficients are constrained to be strictly positive and to sum to one. This was quite fun, but not very much tested. It would be VERY nice if someone did some more work on this and turned it into a function package (hint, hint!)
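Briefly, the idea is a reparameterisation: write the k coefficients as softmax shares, p_i = exp(b_i) / (1 + sum_j exp(b_j)) for i < k and p_k = 1 / (1 + sum_j exp(b_j)), where b is an unconstrained (k-1)-vector. The b's are estimated by NLS with analytical derivatives, and the covariance matrix of the shares is then recovered via the delta method.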
<hansl>
set echo off
set messages off
function matrix shares(matrix b)
    # softmax: map an unconstrained (k-1)-vector b into k
    # strictly positive shares summing to 1 (the last share
    # acts as the baseline)
    matrix e = exp(b | 0)
    scalar den = sumc(e)
    return e ./ den
end function
function matrix dshares(matrix b)
    # Jacobian of shares() wrt b: with p = shares(b) and m = rows(b),
    # d p_i / d b_j = p_i * ((i==j) - p_j), an (m+1) x m matrix
    matrix p = shares(b)
    scalar k = rows(b)
    matrix ret = -p .* p[1:k]'
    ret[diag] += p[1:k]
    return ret
end function
function bundle OLS_shares(series y, list X)
    scalar k = nelem(X)
    # initialisation: start from OLS, with the coefficients
    # floored at 0.01 so that the logs below are well defined
    ols y X --quiet
    matrix b = ($coeff .> 0.01) ? $coeff : 0.01
    b = ln(b[1:k-1]) - ln(b[k])
    # NLS on the unconstrained scale, with analytical
    # derivatives obtained via the chain rule
    nls y = lincomb(X, p)
        p = shares(b)
        J = dshares(b)
        deriv b = {X} * J
    end nls --quiet
    bundle mdl
    mdl["Xnames"] = varname(X)
    mdl["coeff"] = p
    # delta method: Var(p) = J * Var(b) * J'
    mdl["vcv"] = qform(J, $vcv)
    return mdl
end function
function void printout(bundle model)
    # print the estimated shares with delta-method standard errors
    string parnames = model.Xnames
    matrix cs = model.coeff ~ sqrt(diag(model.vcv))
    modprint cs parnames
end function
# ----------------------------------------------------
nulldata 1000
x1 = normal()
x2 = normal()
x3 = normal()
y = 0.3*x1 + 0.01*x2 + 0.69*x3 + normal()
list X = x1 x2 x3
b = OLS_shares(y, X)
printout(b)
</hansl>
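As a quick sanity check (just a sketch, reusing the bundle b returned by the demo above), one can verify that the estimated shares satisfy the constraints:

<hansl>
# the estimated shares should be strictly positive and sum to 1
eval minc(b.coeff) > 0   # should print 1
eval sumc(b.coeff)       # should print 1
</hansl>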
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
r.lucchetti(a)univpm.it
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------