This course is now full. Please complete the online booking form to join the waiting list.
This one-day workshop features presentations of recent theoretical and applied research on programme evaluation and treatment effects.
Speakers include Charles Manski, Aleksey Tetenov, Toru Kitagawa, Zahra Siddique and Debopam Bhattacharya.
Charles Manski
Diagnostic Testing and Treatment Under Ambiguity: Using Decision Analysis to Inform Clinical Practice
Partial knowledge of patient health status and treatment response is a pervasive concern in medical decision making. Clinical practice guidelines make recommendations intended to optimize patient care, but optimization typically is infeasible with partial knowledge. To demonstrate, this paper studies a common scenario regarding diagnostic testing and treatment. A patient presents to a clinician, who obtains initial evidence on health status. The clinician can prescribe a treatment immediately, or can order a test yielding further evidence that may be useful in predicting treatment response; in the latter case, the clinician prescribes a treatment after observing the test result.
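The treat-now versus test-then-treat comparison described above can be sketched as a toy expected-utility calculation. All numbers below (prevalence, test sensitivity and specificity, utilities) are invented for illustration and are not from the paper, which is concerned precisely with the harder case where such quantities are only partially known:

```python
# Toy decision analysis: treat immediately vs. order a diagnostic test first.
# All parameter values are hypothetical, chosen only to illustrate the comparison.
prev = 0.3            # assumed probability the patient is ill
sens, spec = 0.9, 0.8  # assumed test sensitivity and specificity

# Utility of each (health state, action) pair, on an arbitrary 0-1 scale.
u = {("ill", "treat"): 0.8, ("ill", "no"): 0.2,
     ("well", "treat"): 0.6, ("well", "no"): 1.0}

# Option 1: treat everyone immediately.
treat_all = prev * u[("ill", "treat")] + (1 - prev) * u[("well", "treat")]
# Option 2: treat no one.
treat_none = prev * u[("ill", "no")] + (1 - prev) * u[("well", "no")]
# Option 3: test first, treat only on a positive result.
test = (prev * (sens * u[("ill", "treat")] + (1 - sens) * u[("ill", "no")])
        + (1 - prev) * ((1 - spec) * u[("well", "treat")] + spec * u[("well", "no")]))
```

Under these particular numbers testing dominates both immediate actions; the paper's point is how to choose among such options when the inputs themselves are ambiguous rather than known.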
Toru Kitagawa
Covariate Selection and Model Averaging for Semiparametric Estimation of Treatment Effects (with Chris Muris)
This paper develops a data-driven way of selecting a propensity score specification when one is interested in estimating the average treatment effect on the treated (ATT) using a propensity score weighting method. Building on the idea of the focussed information criterion (FIC), our approach selects the specification of the propensity score that minimizes the asymptotic mean squared error of the ATT estimator, obtained via a local asymptotic approximation. As a smoothed version of FIC-based model selection, this paper also considers an optimal way of averaging the ATT estimators over the candidate specifications. This model averaging problem is formulated as a statistical decision problem in a limit normal experiment, and the Bayes decision corresponding to an improper flat prior on the localisation parameters is proposed as an optimal averaging scheme.
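For readers unfamiliar with the estimator being selected over, the following is a minimal sketch of ATT estimation by propensity score weighting on simulated data. It uses the true propensity score rather than a fitted specification, so it illustrates only the weighting step, not the paper's FIC-based selection or averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                 # a single covariate
p = 1 / (1 + np.exp(-0.5 * x))         # true propensity score P(D=1 | X)
d = rng.binomial(1, p)                 # treatment indicator
y0 = x + rng.normal(size=n)            # untreated potential outcome
y1 = y0 + 1.0                          # constant effect, so the true ATT is 1
y = np.where(d == 1, y1, y0)

# Hajek-style IPW estimator of the ATT: compare treated outcomes with
# control outcomes reweighted by the odds p/(1-p) of treatment.
w = p / (1 - p)
att = y[d == 1].mean() - np.average(y[d == 0], weights=w[d == 0])
```

In practice `p` must itself be estimated, and the choice among candidate propensity score specifications is exactly the problem the paper addresses.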
Debopam Bhattacharya
Testing Academic Fairness of University Admissions under Selection on Unobservables
We develop an empirical method of testing whether a university admits applicants with the highest academic potential, and of identifying the extent of "admission bias" when it does not. We assume that applicants who are better qualified along every observable academic performance indicator are stochastically better in characteristics that are unobservable to us but weighed positively by admissions tutors. This assumption yields informative bounds on the differences in admission thresholds faced by different demographic groups. Applying our methods to admissions data at Oxford, with blindly marked exam performance as the outcome, we find that admission thresholds are significantly higher for males and slightly higher for private-school applicants. In contrast, average admission rates are equal across gender and school type.
Zahra Siddique
Partially Identified Treatment Effects Under Imperfect Compliance: The Case of Domestic Violence
The Minneapolis Domestic Violence Experiment (MDVE) is a randomized social experiment with imperfect compliance that has been extremely influential in how police officers respond to misdemeanor domestic violence. This paper re-examines data from the MDVE, using the recent literature on partial identification to estimate the recidivism associated with a policy of arresting misdemeanor domestic violence suspects rather than not arresting them. Using partially identified bounds on the average treatment effect, I find that arresting rather than not arresting suspects can potentially reduce recidivism by more than two-and-a-half times the corresponding intent-to-treat estimate, and by more than two times the corresponding local average treatment effect, even under minimal assumptions on counterfactuals.
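As background on the bounds being used, the following sketch computes worst-case (no-assumption) partial-identification bounds on the average treatment effect for a binary outcome such as recidivism. The data are simulated and the helper name `manski_bounds` is our own; this is not the MDVE data or the paper's full analysis, which tightens these bounds with further assumptions:

```python
import numpy as np

def manski_bounds(y, d):
    """Worst-case bounds on the ATE when the outcome y lies in {0, 1}.

    E[Y(1)] is bounded by filling in the missing untreated units' Y(1)
    with 0 (lower) or 1 (upper), and symmetrically for E[Y(0)].
    """
    p1 = d.mean()                          # share treated
    ey1_lo = y[d == 1].mean() * p1         # missing Y(1) set to 0
    ey1_hi = ey1_lo + (1 - p1)             # missing Y(1) set to 1
    ey0_lo = y[d == 0].mean() * (1 - p1)   # missing Y(0) set to 0
    ey0_hi = ey0_lo + p1                   # missing Y(0) set to 1
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

# Purely illustrative simulated data (not the MDVE).
rng = np.random.default_rng(1)
d = rng.binomial(1, 0.5, size=1_000)
y = rng.binomial(1, np.where(d == 1, 0.1, 0.2))
lo, hi = manski_bounds(y, d)
```

A known feature of these no-assumption bounds is that their width is always exactly one for a binary outcome, which is why the literature combines them with additional restrictions (such as monotonicity or instrument assumptions) to obtain informative intervals.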
Aleksey Tetenov
Statistical Hypothesis Testing and Private Information
Null hypothesis testing is a conventional practice in policy evaluation, but it is not well supported by economic reasoning. I show how the hypothesis testing criterion can be rationally motivated as a screening device in settings where public policies are proposed by self-interested parties, and how the test level should be set depending on the payoffs of policy proponents.
Download full programme