Harvard Kennedy School

API-208: Program Evaluation - Estimating Program Effectiveness with Empirical Analysis

Description: Program evaluation comprises a set of statistical tools for assessing the impact of public interventions. This methodological course will develop students' skills in quantitative program evaluation. Students will study a variety of evaluation designs (from random assignment to quasi-experimental evaluation methods) and analyze data from actual evaluations, such as the National Job Training Partnership Act Study. The course evaluates the strengths and weaknesses of alternative evaluation methods. This course meets the PhD requirement for empirical methods.

Source: http://www.hks.harvard.edu/degrees/teaching-courses/course-listing/api-208 (accessed 25 January 2013).

Additional course description from the syllabus

Evaluating the effectiveness of public programs is important, since it can help us decide which programs are working, which are not, and why. The goal of the course is to prepare students to design, conduct, and critique empirical evaluations of public programs. We will study how to use statistical techniques to evaluate the effects of public programs, focusing on experimental and quasi-experimental (observational) methods.
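
To make the quoted description concrete, here is a minimal Python sketch that is not drawn from the course or its syllabus: it estimates a program's average effect in a randomized study by comparing mean outcomes across treatment and control groups. The data, sample size, outcome scale, and true effect below are entirely synthetic, illustrative assumptions only.

# A minimal sketch (not from the course materials): difference-in-means
# estimate of an average treatment effect under random assignment.
# All numbers here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomized study: 1,000 participants, half assigned to the program.
n = 1000
treated = rng.permutation(np.repeat([1, 0], n // 2))

# Hypothetical outcome (e.g., annual earnings) with an assumed true effect of 1,500.
baseline = rng.normal(20_000, 5_000, n)
outcome = baseline + 1_500 * treated + rng.normal(0, 2_000, n)

# Difference-in-means estimator of the average treatment effect.
ate_hat = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Large-sample standard error for the difference in means.
se = np.sqrt(outcome[treated == 1].var(ddof=1) / (n // 2) +
             outcome[treated == 0].var(ddof=1) / (n // 2))

print(f"Estimated effect: {ate_hat:,.0f} (SE {se:,.0f})")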

Commentary by the Atlas editors: The class titles provide an excellent list of teaching topics for the Evaluation and Performance Measurement subject.

Evaluation Research for Public Policy
Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection
Measures of Location and Dispersion
Conditional Mean Function
Randomized Studies
Threats to Internal and External Validity
Asymptotic Distributions
Fisher's Exact Test
Pre-estimation Diagnostics
Comparison of Experimental and Observational Studies
Approximating Experiments with Observational Data
Study Design
Simpson's Paradox
Matching Estimators
Regression
Assessing the Confounding Effects of Unobserved Factors
Sensitivity Testing
Difference-in-Difference Estimators
Synthetic Control Methods
Instrumental Variables
Local Average Treatment Effects
Distributional Effects
Regression Discontinuity Design
Nonparametric Bounds

Page created by: Ian Clark, last updated 22 February 2013. The content presented on this page, except in the Commentary, is drawn directly from the source(s) cited above, and consists of direct quotations or close paraphrases.

Syllabus

API-208 Syllabus Spring 2013, Alexis Diamond.pdf
