I will leave the DEA analysis to someone else. Personally, I have
never dealt with a dataset for which DEA provided a plausible set of
results, let alone improved on what one gets from SFA. Normally,
the set of comparators turns out to be empty or too small to be useful.
I will modify the packages to include the description strings and
then upload them to the server again.
Finally, I have been thinking about the panel data SFA. I will start
with the time-invariant version, which is little more than a set of
restrictions on the conventional model - e.g. u[i,t] = u[i] for all
t. That should mean that it is possible to produce equivalent
versions of all of my cross-section models once I understand how
panel data can be manipulated conveniently.
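To make the restriction concrete, here is a minimal sketch (in Python
rather than hansl, with made-up dimensions and parameter values, and
assuming the usual normal/half-normal specification) of what
u[i,t] = u[i] means for a simulated panel:

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 50, 8                              # units and periods (arbitrary)
    beta0, beta1 = 1.0, 0.5                   # illustrative coefficients only
    x = rng.normal(size=(N, T))               # a single regressor, for brevity
    u = np.abs(rng.normal(0.0, 0.3, size=N))  # one half-normal draw per unit
    v = rng.normal(0.0, 0.2, size=(N, T))     # noise varies over i and t
    y = beta0 + beta1 * x + v - u[:, None]    # u[i] repeated across all t

The only panel-specific step is broadcasting the single draw u[i]
across all T observations of unit i; everything else is the
conventional cross-section model.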
There is one approach that would facilitate the implementation of the
estimator. The log-likelihood function has the form
  logL = sum over cross-section units i { sum over time periods t [ ... ] }
Thus, it would be easiest to write the log-likelihood and its
derivatives using this structure. However, if I understand the logic of
mle correctly, every observation is treated identically, so that
what mle hands to the maximiser is that observation's contribution
to the log-likelihood and its gradients. If that is correct, the
problem is manageable but less efficient than if one could take
advantage of the structure of the log-likelihood to avoid repeating
various calculations.
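As a rough illustration of that efficiency point, here is a sketch (in
Python/numpy rather than hansl, and using the simpler normal-normal
random-effects density in place of the actual SFA likelihood, purely to
keep the algebra short) of a grouped evaluation in which the per-unit
quantities are computed only once per cross-section unit:

    import numpy as np

    def loglik_by_unit(e, sv2, su2):
        # e: (N, T) matrix of residuals; sv2, su2: variance components.
        # NB: normal-normal (random effects) density, used here only to
        # show the grouping, not the actual SFA likelihood.
        N, T = e.shape
        ebar = e.mean(axis=1)                            # unit means, computed once
        within = ((e - ebar[:, None]) ** 2).sum(axis=1)  # within sums of squares
        s2 = sv2 + T * su2
        li = (-0.5 * T * np.log(2 * np.pi)
              - 0.5 * ((T - 1) * np.log(sv2) + np.log(s2))
              - 0.5 * within / sv2
              - 0.5 * T * ebar ** 2 / s2)                # one term per unit
        return li.sum()

Evaluated observation by observation instead, the unit mean and the
within sum of squares would have to be recomputed T times for every
unit, which is the kind of repeated calculation referred to above.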
Gordon Hughes