Dr Toru Kitagawa: all content



Inference on winners

Working Paper

Many empirical questions can be cast as inference on a parameter selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data. Such settings give rise to a winner’s curse, where conventional estimates are biased and conventional confidence intervals are unreliable.
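The winner's curse described here is easy to see in a small simulation. The Python sketch below (all parameters hypothetical, and not the paper's procedure) selects the best of K truly identical policies and shows that the conventional estimate for the winner is biased upward:

```python
import numpy as np

# Illustrative simulation of the winner's curse: K policies with identical
# true effects; reporting the estimated effect of the best performer
# overstates it. All parameters are hypothetical.
rng = np.random.default_rng(0)
K, n, reps = 10, 100, 5_000   # 10 policies, 100 observations each
true_effect = 0.0             # every policy is truly ineffective

winner_estimates = []
for _ in range(reps):
    sample_means = rng.normal(true_effect, 1.0, size=(K, n)).mean(axis=1)
    winner_estimates.append(sample_means.max())  # naive estimate for the winner

print(f"true effect of every policy:  {true_effect:.3f}")
print(f"mean estimate for the winner: {np.mean(winner_estimates):.3f}")
# The winner's average estimate sits well above zero: selection on the
# maximum biases the conventional estimate upward, and a confidence
# interval centred on it undercovers.
```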

31 December 2018


Equality-minded treatment choice

Working Paper

This paper develops a method to estimate the optimal treatment assignment policy based on observable individual covariates when the policy objective is to maximize an equality-minded rank-dependent social welfare function, which puts higher weight on individuals with lower-ranked outcomes.
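As a rough illustration of the kind of objective involved, the sketch below evaluates one common rank-dependent welfare function, with Gini-type weights; the weighting scheme and data are hypothetical, and this is not the paper's estimation method:

```python
import numpy as np

def rank_dependent_welfare(outcomes: np.ndarray) -> float:
    """Gini-type rank-dependent social welfare: the i-th lowest outcome
    gets weight (2(n - i) + 1) / n^2, so worse-off individuals count for
    more. One common member of this family of objectives; illustrative
    only."""
    y = np.sort(outcomes)                # y[0] = worst-off individual
    n = len(y)
    i = np.arange(1, n + 1)              # ranks 1..n (1 = lowest outcome)
    weights = (2 * (n - i) + 1) / n**2   # decreasing in rank, sums to 1
    return float(weights @ y)

# Two hypothetical outcome distributions with the same mean:
equal = np.array([2.0, 2.0, 2.0, 2.0])
unequal = np.array([0.5, 1.0, 2.5, 4.0])
print(rank_dependent_welfare(equal))    # 2.0
print(rank_dependent_welfare(unequal))  # 1.25: inequality is penalized
```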

12 December 2018


Mostly harmless simulations? On the internal validity of empirical Monte Carlo studies

Working Paper

Currently there is little practical advice on which treatment effect estimator to use when trying to adjust for observable differences. A recent suggestion is to compare the performance of estimators in simulations that somehow mimic the empirical context. Two ways to run such ‘empirical Monte Carlo studies’ (EMCS) have been proposed.

27 September 2018


Testing identifying assumptions in fuzzy regression discontinuity designs

Working Paper

We propose a new specification test for assessing the validity of fuzzy regression discontinuity designs (FRD-validity). We derive a new set of testable implications, characterized by a set of inequality restrictions on the joint distribution of observed outcomes and treatment status at the cut-off. We show that this new characterization exploits all the information in the data useful for detecting violations of FRD-validity.

20 August 2018


Non-Bayesian updating in a social learning experiment

Working Paper

In our laboratory experiment, subjects, in sequence, have to predict the value of a good. We elicit the second subject’s belief twice: first (“first belief”), after he observes his predecessor’s action; second (“posterior belief”), after he observes his private signal. Our main result is that the second subjects weigh the private signal as a Bayesian agent would do when the signal confirms their first belief; they overweight the signal when it contradicts their first belief.
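A minimal sketch of the Bayesian benchmark for this task, under an assumed binary-state model with signal precision 0.7 (a hypothetical value, not the experiment's actual design):

```python
# Bayesian benchmark for the updating task described above (illustrative):
# binary state in {A, B}, flat prior, private signals that match the state
# with probability Q. The precision Q = 0.7 is an assumption.
Q = 0.7  # signal precision (hypothetical)

def bayes_update(prior_a: float, signal_says_a: bool) -> float:
    """Posterior probability of state A after one signal of precision Q."""
    like_a = Q if signal_says_a else 1 - Q
    like_b = (1 - Q) if signal_says_a else Q
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

# "First belief": the predecessor's action reveals his signal, so seeing
# an A-action moves the flat prior 0.5 up to Q = 0.7.
first_belief = bayes_update(0.5, signal_says_a=True)

# "Posterior belief": confirming vs. contradicting private signal.
print(bayes_update(first_belief, signal_says_a=True))   # ~0.845
print(bayes_update(first_belief, signal_says_a=False))  # exactly 0.5
```

Against this benchmark, the abstract's finding is that subjects match the Bayesian number in the confirming case but overweight a contradicting signal, moving their posterior past the Bayesian value.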

4 July 2018


Inference on winners

Working Paper

Many questions in econometrics can be cast as inference on a parameter selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data.

10 May 2018


Measurement error and rank correlations

Working Paper

This paper characterizes and proposes a method to correct for errors-in-variables biases in the estimation of rank correlation coefficients (Spearman's ρ and Kendall's τ).
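A quick simulation makes the bias concrete. The sketch below (hypothetical parameters; not the paper's correction) shows classical measurement error pulling both rank correlations toward zero:

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# Illustrative simulation of the errors-in-variables bias: classical
# measurement error in a covariate attenuates Spearman's rho and
# Kendall's tau toward zero. All parameters are hypothetical.
rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
y = x + rng.normal(scale=0.5, size=n)        # true positive dependence
x_noisy = x + rng.normal(scale=1.0, size=n)  # x observed with error

rho_clean, _ = spearmanr(x, y)
rho_noisy, _ = spearmanr(x_noisy, y)
tau_clean, _ = kendalltau(x, y)
tau_noisy, _ = kendalltau(x_noisy, y)
print(f"Spearman rho: {rho_clean:.3f} (clean) vs {rho_noisy:.3f} (noisy)")
print(f"Kendall tau:  {tau_clean:.3f} (clean) vs {tau_noisy:.3f} (noisy)")
# Both coefficients shrink toward zero under measurement error; the
# paper's method corrects for exactly this kind of bias.
```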

12 April 2018


Posterior distribution of nondifferentiable functions

Working Paper

This paper examines the asymptotic behavior of the posterior distribution of a possibly nondifferentiable function g(θ), where θ is a finite-dimensional parameter of either a parametric or semiparametric model. The main assumption is that the distribution of a suitable estimator θ̂ₙ, its bootstrap approximation, and the Bayesian posterior for θ all agree asymptotically.
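A minimal sketch of the object in question, assuming a normal posterior for θ and the kinked transformation g(θ) = max(θ, 0) (both choices hypothetical):

```python
import numpy as np

# Minimal sketch: the posterior of the kinked transformation
# g(theta) = max(theta, 0), obtained by pushing posterior draws of theta
# through g. The normal posterior and its location/scale are hypothetical.
rng = np.random.default_rng(0)
theta_draws = rng.normal(loc=0.05, scale=0.1, size=100_000)  # posterior of theta
g_draws = np.maximum(theta_draws, 0.0)                       # posterior of g(theta)

print("posterior mean of g(theta):", round(float(g_draws.mean()), 4))
print("95% credible interval:", np.round(np.quantile(g_draws, [0.025, 0.975]), 4))
# With theta near the kink, the posterior of g(theta) piles up mass at 0,
# so naive delta-method or bootstrap approximations can disagree with it -
# the kind of asymptotic (dis)agreement the paper characterizes.
```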

3 October 2017


Who should be treated? Empirical welfare maximization methods for treatment choice

Working Paper

One of the main objectives of empirical analysis of experiments and quasi-experiments is to inform policy decisions that determine the allocation of treatments to individuals with different observable covariates. We study the properties and implementation of the Empirical Welfare Maximization (EWM) method, which estimates a treatment assignment policy by maximizing the sample analog of average social welfare over a class of candidate treatment policies. The EWM approach is attractive in terms of both statistical performance and practical implementation in realistic settings of policy design. Common features of these settings include: (i) feasible treatment assignment rules are constrained exogenously for ethical, legislative, or political reasons, (ii) a policy maker wants a simple treatment assignment rule based on one or more eligibility scores in order to reduce the dimensionality of individual observable characteristics, and/or (iii) the proportion of individuals who can receive the treatment is a priori limited due to a budget or a capacity constraint. We show that when the propensity score is known, the average social welfare attained by EWM rules converges at least at the n^{-1/2} rate to the maximum obtainable welfare uniformly over a minimally constrained class of data distributions, and this uniform convergence rate is minimax optimal. We examine how the uniform convergence rate depends on the richness of the class of candidate decision rules, the distribution of conditional treatment effects, and the lack of knowledge of the propensity score. We offer easily implementable algorithms for computing the EWM rule and an application using experimental data from the National JTPA Study.
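A minimal sketch of the EWM idea over the simplest candidate class, threshold rules on a single eligibility score, with a known propensity score of 0.5; the data-generating process is hypothetical and the paper's setting is far more general:

```python
import numpy as np

# Minimal EWM sketch: estimate a rule "treat if x >= c" by maximizing the
# sample analog of average welfare over a grid of thresholds c, using
# inverse-propensity weighting with the known score 0.5.
rng = np.random.default_rng(0)
n = 2_000
x = rng.uniform(0, 1, size=n)                    # eligibility score
d = rng.binomial(1, 0.5, size=n)                 # randomized treatment
tau = 2 * (x - 0.4)                              # treatment helps when x > 0.4
y = x + d * tau + rng.normal(scale=0.5, size=n)  # observed outcome

def empirical_welfare(c: float) -> float:
    """Sample analog of average welfare under the rule 1{x >= c}."""
    treat = (x >= c).astype(float)
    ipw = d / 0.5 * treat + (1 - d) / 0.5 * (1 - treat)
    return float(np.mean(ipw * y))

grid = np.linspace(0.0, 1.0, 101)
c_hat = grid[np.argmax([empirical_welfare(c) for c in grid])]
print("EWM threshold:", round(float(c_hat), 2))  # should land near 0.4
```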

19 May 2017


Uncertain identification

Working Paper

Uncertainty about the choice of identifying assumptions is common in causal studies, but is often ignored in empirical practice. This paper considers uncertainty over models that impose different identifying assumptions, which, in general, leads to a mix of point- and set-identified models. We propose performing inference in the presence of such uncertainty by generalizing Bayesian model averaging. The method considers multiple posteriors for the set-identified models and combines them with a single posterior for models that are either point-identified or that impose non-dogmatic assumptions. The output is a set of posteriors (post-averaging ambiguous belief) that are mixtures of the single posterior and any element of the class of multiple posteriors, with weights equal to the posterior model probabilities. We suggest reporting the range of posterior means and the associated credible region in practice, and provide a simple algorithm to compute them. We establish that the prior model probabilities are updated when the models are "distinguishable" and/or they specify different priors for reduced-form parameters, and characterize the asymptotic behavior of the posterior model probabilities. The method provides a formal framework for conducting sensitivity analysis of empirical findings to the choice of identifying assumptions. In a standard monetary model, for example, we show that, in order to support a negative response of output to a contractionary monetary policy shock, one would need to attach a prior probability greater than 0.32 to the validity of the assumption that prices do not react contemporaneously to such a shock. The method is general and allows for dogmatic and non-dogmatic identifying assumptions, multiple point-identified models, multiple set-identified models, and nested or non-nested models.

18 April 2017


A Test for Instrument Validity

Journal article

This paper develops a specification test for instrument validity in the heterogeneous treatment effect model with a binary treatment and a discrete instrument.
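For intuition, the sketch below checks one well-known type of testable implication for a binary instrument on simulated (hypothetical) data; it illustrates the inequalities involved, not the paper's test statistic:

```python
import numpy as np

# Illustrative check of a testable implication of instrument validity:
# with a binary instrument z and binary treatment d, validity implies
#   P(y in B, d=1 | z=1) >= P(y in B, d=1 | z=0)  and
#   P(y in B, d=0 | z=0) >= P(y in B, d=0 | z=1)  for every outcome set B.
# The sets B below are histogram bins; the data are simulated.
rng = np.random.default_rng(0)
n = 10_000
z = rng.binomial(1, 0.5, size=n)                   # valid binary instrument
d = rng.binomial(1, 0.2 + 0.5 * z)                 # z shifts treatment up
y = np.where(d == 1, rng.normal(1, 1, n), rng.normal(0, 1, n))

bins = np.linspace(y.min(), y.max(), 11)

def subdensity(d_val: int, z_val: int) -> np.ndarray:
    """Estimate P(y in bin, d = d_val | z = z_val), bin by bin."""
    in_z = z == z_val
    counts, _ = np.histogram(y[in_z & (d == d_val)], bins=bins)
    return counts / in_z.sum()

viol_treated = subdensity(1, 0) - subdensity(1, 1)  # should be <= 0
viol_control = subdensity(0, 1) - subdensity(0, 0)  # should be <= 0
print("largest violation:", max(viol_treated.max(), viol_control.max()))
# Up to sampling noise, no bin violates the inequalities for a valid
# instrument; systematic positive violations indicate invalidity.
```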

1 September 2015


Model averaging in semiparametric estimation of treatment effects

Working Paper

This paper proposes a data-driven way of averaging the estimators over the candidate specifications in order to resolve the issue of specification uncertainty in the propensity score weighting estimation of the average treatment effect for the treated (ATT).

13 August 2015