Estimating fully observed recursive mixed-process models with cmp

David Roodman


Probit model: link function (g) induces likelihoods for each possible outcome

y = g(y*) = 1{y* > 0}

Relabeling on the ε scale: the “error link” function (h) induces likelihoods for each possible outcome

y = h(ε) = 1{ε > −xi'β}


(h(ε) = g(x'β + ε))
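For the probit case this yields the familiar pair of likelihoods (standard probit algebra, not specific to cmp):

```latex
L_i = \begin{cases}
\Phi(x_i'\beta) = \Pr(\epsilon_i > -x_i'\beta) & \text{if } y_i = 1 \\
\Phi(-x_i'\beta) = \Pr(\epsilon_i \le -x_i'\beta) & \text{if } y_i = 0
\end{cases}
```

The y = 1 case integrates the density of ε over the ray ε > −xi'β; the y = 0 case integrates over its complement.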

Tobit (censored)

Ordered probit

Just change g() to get new models

With generalization, embraces multinomial and rank-ordered probit, truncated regression…

Given yi, determine feasible value(s) for ε

– If just one, Li = normal density at that point

– If a range, Li = cumulative probability over the range

For models that censor some observations (Tobit), L=Π Li combines cumulative and point densities.
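For a Tobit censored from below at 0, for example, the combined likelihood takes the standard form (σ is the error standard deviation):

```latex
L = \prod_{i:\,y_i > 0} \frac{1}{\sigma}\,\phi\!\left(\frac{y_i - x_i'\beta}{\sigma}\right)
    \times \prod_{i:\,y_i = 0} \Phi\!\left(\frac{-x_i'\beta}{\sigma}\right)
```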

Amemiya (1973): maximizing L is consistent

Multiple equations (SUR)

Compute likelihood the same way

For each obs, likelihood is reached as before: given y, determine the feasible set for ε and integrate the normal density over it

Feasible set can be a point, ray, square, half-plane… a Cartesian product of points, line segments, rays, and lines

Bivariate probit

Suppose for obs i, yi1 = yi2 = 0

Feasible range for ε is the quadrant ε1 ≤ −xi1'β1, ε2 ≤ −xi2'β2

Integral of fε(ε) = φ(ε; Σ) over this: Φ2(−xi1'β1, −xi2'β2; ρ)

Can use built-in binormal().

Similar for y=(0,1)′, (1,0)′, (1,1)′.
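All four cells can be written compactly with the bivariate normal cdf Φ2 (i.e., binormal()); this is standard bivariate-probit algebra, not from the slides:

```latex
L_i = \Phi_2\!\left(q_{i1}\,x_{i1}'\beta_1,\; q_{i2}\,x_{i2}'\beta_2;\; q_{i1}q_{i2}\,\rho\right),
\qquad q_{ij} = 2y_{ij} - 1
```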

Mixed uncensored-probit

Suppose for obs i, we observe y = (yi1, 0)′, with equation 1 uncensored (continuous) and equation 2 probit

Feasible range for ε is a ray: ε1 = yi1 − xi1'β1, ε2 ≤ −xi2'β2

Integral of fε(ε) = φ(ε; Σ) over this: the density of ε1 at that point, times the probability that ε2 lies below −xi2'β2 given ε1

Integral of 2-D normal distribution over a ray.

Hard with built-in functions

Requires additional math
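The additional math is the usual conditional-normal factorization of the ray integral (a standard result; σ1, σ2, ρ parameterize Σ):

```latex
L_i = \frac{1}{\sigma_1}\,\phi\!\left(\frac{\hat\epsilon_{i1}}{\sigma_1}\right)\,
\Phi\!\left(\frac{-x_{i2}'\beta_2 - \rho\,(\sigma_2/\sigma_1)\,\hat\epsilon_{i1}}{\sigma_2\sqrt{1-\rho^2}}\right),
\qquad \hat\epsilon_{i1} = y_{i1} - x_{i1}'\beta_1
```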

Conditional modeling—“c” in cmp

Model can vary by observation—depend on data
– Worker retraining evaluation
  • Model employment for all subjects
  • Model program uptake only for those in cities where offered
– Classical Heckman selection modeling
  • Model selection (probit) for every observation
  • Model outcome (linear) for complete observations
  • Likelihood for incomplete obs is one-equation probit
  • Likelihood for complete obs is the mixed uncensored–probit one above
– Myriad possibilities
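As a sketch of how conditional modeling looks in cmp syntax (variable names hypothetical; the indicators() idiom follows the cmp help file, where $cmp_cont marks a continuous equation, $cmp_probit a probit one, and a 0 indicator drops an equation for that observation):

```stata
* Sketch: classical Heckman selection in cmp (hypothetical variable names).
* The wage equation is modeled only where selected==1: its indicator expression
* evaluates to $cmp_cont there and to 0 (equation dropped) elsewhere.
cmp (wage = educ exper) (selected = educ exper married), ///
    indicators("selected * $cmp_cont" $cmp_probit)
```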

Recursive systems

y’s can appear on the RHS in each other’s equations

Matrix of y coefficients must be upper triangular

I.e., system must have clearly defined stages. E.g.:
– SUR (several equations, one stage)

– 2SLS

If the system is fully modeled and truly recursive, then estimation is FIML

If the system has simultaneity and the early stages merely instrument, then estimation is LIML

If system is
– Recursive
– Fully observed (y’s appear on the RHS but never y*’s)

then the likelihoods developed for SUR still work: can treat y’s on the RHS just like x’s

sureg and biprobit can be IV estimators!
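For instance (hypothetical variables: binary d is endogenous in the y equation, z is the excluded instrument), a recursive bivariate probit can be fit directly in cmp:

```stata
* Sketch: recursive, fully observed two-probit system as an IV-style estimator.
* Because the system is recursive and fully observed, d enters like any x.
cmp (y = d x) (d = z x), indicators($cmp_probit $cmp_probit)
```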

Rarely understood; not proved in general in the literature
– Greene (1998): “surprisingly”… “seem not to be widely known”
– Wooldridge (e-mail, 2009): “I came to this realization somewhat late, although I’ve known it for a couple of years now.”

I prove it, perhaps not rigorously. Maybe too simple for great econometricians to bother publishing

Fact: this holds for the general recursive, fully observed system

cmp can fit: conditional recursive mixed-process systems

Processes: Linear, probit, tobit, ordered probit, multinomial probit, interval regression, truncated regression

Can emulate:
– Built-in: probit, ivprobit, treatreg, biprobit, oprobit, mprobit, asmprobit, tobit, ivtobit, cnreg, intreg, truncreg, heckman, heckprob
– User-written: triprobit, mvprobit, bitobit, mvtobit, oheckman, (partly) bioprobit

The indicators() option is required: one expression for each equation, telling cmp the model type of each equation; the type can vary by observation

Emulation examples
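A few emulations in sketch form (hypothetical variable names; the exact indicators() idioms follow the cmp help file, so check it before relying on these):

```stata
cmp setup    // defines the $cmp_* indicator globals
* ~ probit y x
cmp (y = x), indicators($cmp_probit)
* ~ tobit y x, ll(0): indicator varies by observation
cmp (y = x), indicators(cond(y > 0, $cmp_cont, $cmp_left))
* ~ biprobit y1 y2 x1 x2
cmp (y1 = x1) (y2 = x2), indicators($cmp_probit $cmp_probit)
```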

Heteroskedasticity can make censored models not just inefficient but inconsistent

[Figure: Tobit example in which the error variance rises with x. Plot shows y*, censored values, the true model, and the fitted model.]

Implementation innovation: ghk2()

Mata implementation of the Geweke–Hajivassiliou–Keane algorithm for estimating cumulative normal probabilities above dimension 2.

Differs from built-in ghkfast():
– Accepts lower as well as upper bounds. E.g., integrates over the cube [a1,b1] × [a2,b2] × [a3,b3] in one call (otherwise requires 2³ = 8 calls instead of 1)
– Optimized for many observations and few simulation draws per observation
– Does not “pivot” coordinates. Pivoting can improve precision, but creates discontinuities when draws are few. (ghkfast() now lets you turn off pivoting.)
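The call count for a one-sided GHK routine comes from the usual inclusion–exclusion expansion of a box probability in terms of one-sided cdfs (a standard identity, here in 3 dimensions):

```latex
P(a \le X \le b) = \sum_{s \in \{0,1\}^3} (-1)^{s_1+s_2+s_3}\,
\Phi_3\bigl(c_1(s_1), c_2(s_2), c_3(s_3); \Sigma\bigr),
\qquad c_j(0) = b_j,\; c_j(1) = a_j
```

With 2³ = 8 terms, each needing its own simulation, two-sided support collapses the work to a single call.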

Implementation innovation: “lfd1”

In Stata ML, using an lf likelihood evaluator assumes that (A1) the likelihood meets the linear-form restriction: for each equation, it depends on the parameters only through the linear index x'β. ml computes derivatives with respect to each index numerically with 2 calls per equation, then applies the chain rule analytically. For the Hessian, # of calls is quadratic in # of equations.

Using a d1 evaluator, ml does not assume A1, but does (A2) require the evaluator to provide scores. For the Hessian, # of calls is linear in # of parameters.

Two unrelated changes create an unnecessary trade-off: ml is missing an “lfd1” type that assumes both A1 and A2, which would make the Hessian # of calls linear in # of equations.

Solution: pseudo-d2. A d2 routine efficiently takes over the (numerical) computation of the Hessian. Good for score-computing evaluators for which the linear-form restriction (A1) holds.

Possible extensions

– Marginal effects that reflect interactions between equations
– (Multi-level) random effects
– Dropping full observability (y*’s on the right)
– Rank-ordered multinomial probit

References

Roodman, David. 2009. Estimating fully observed recursive mixed-process models with cmp. Working Paper 168. Washington, DC: Center for Global Development.

Roodman, David, and Jonathan Morduch. 2009. The Impact of Microcredit on the Poor in Bangladesh: Revisiting the Evidence. Working Paper 174. Washington, DC: Center for Global Development.
