Normed Course Outlines

PPM-114M: Evaluation and Performance Management

Description: This normed course outline covers the methodologies and tools commonly used to evaluate public programs and familiarizes students with the concepts, methods, and applications of evaluation research. It also equips students with a critical lens for reading evaluation research, the ability to anticipate problems with and improve a given program on the basis of evaluation results, and an understanding of performance management systems, including their implementation and use in managing an organization.

Learning Outcomes: On successful completion of this course, students will have the skills and knowledge to appropriately apply the following theories and principles, and to interpret their results, taking account of the concepts noted below, in the analysis of public policy and management problems.

  • Evaluation Purposes, Types and Questions
  • Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection
  • Assessing the Confounding Effects of Unobserved Factors
  • Sensitivity Analysis
  • Data Collection Strategies
  • Performance Measurement and Performance Management

Concepts to be learned: Sensitivity Analysis; Intangible Benefits of Programs; Result; Results Chain; Beneficiaries; Best Practices; Departmental Evaluation Plan; Departmental Performance Report; Economy (in evaluation); Evaluation Criteria; Evaluation Products; Formative Evaluation; Gaming (in evaluation); Gender-based Analysis (GBA); Impact Evaluation; Open Systems Approach; Program Evaluation; Project/Program Objective; Terms of Reference (in evaluation); Attribution; Baseline Information; Efficiency; Epistemology; Evaluation Assessment; Needs Assessment; Policy Outputs vs. Outcomes; Program Logic; With-versus-Without; Causal Chain; Causal Images; Causal Relationship; Evaluability; Single Difference (in impact evaluation); Diagnostic Procedures; Logic Model; Case Studies; Elite Interviews; Literature Review; Performance; Performance Expectations; Performance Monitoring; Productivity in the Public Sector; Results-Based Management; Results-Based Reporting; Performance Reporting; Performance Story; Benchmark; Expected Result; Intermediate Outcome; Lessons Learned; Neutrality; Objectivity; Outcome; Outputs; Performance Audit; Performance Criteria; Performance Indicator; Performance Measure; Performance Measurement; Performance Measurement Strategy.

Normed Topics in this Normed Course Outline

Like other normed topics on the Atlas, each of these has a topic description, links to core concepts relevant to the topic, learning outcomes, a reading list drawn from available course syllabi, and a series of assessment questions.

Course Syllabi Sources for this Normed Course Outline: University of Toronto: PPG-1008 & PPG-2021; Carleton University: PADM-5420; Harvard Kennedy School: API-208 & MLD-101B; NYU Wagner School: GP-2170 & GP-2171; American University: PUAD-604; Rutgers: 34:833:632; Maryland: PUAF-689Xl; University of Southern California: PPD-560; North Carolina State University: PA-511

Recommended Readings:

Week 1: Evaluation Purposes, Types and Questions

Carol H. Weiss (1998), Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 1-3.

Mertens, Donna M., and Wilson, Amy T., Program Evaluation Theory and Practice: A Comprehensive Guide. New York: The Guilford Press, 2012. Chapter 8.

W.K. Kellogg Foundation. “Logic Model Development Guide.” Battle Creek, Michigan: W.K. Kellogg Foundation, 2004. Chapters 3 and 4.
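
As a concrete companion to the Kellogg Foundation guide, the sketch below lays out a logic model for a hypothetical job-training program as a simple Python data structure. The program and all of its components are invented for illustration; a real logic model would be built from the program's own theory of change.

```python
# A minimal logic-model sketch for a hypothetical job-training program.
# A logic model traces the intended causal chain from resources to results.
logic_model = {
    "inputs": ["funding", "trainers", "classroom space"],
    "activities": ["recruit participants", "deliver a 12-week skills course"],
    "outputs": ["participants trained", "certificates issued"],
    "short-term outcomes": ["improved job-search and technical skills"],
    "long-term outcomes": ["higher employment rates and earnings"],
}

for stage, items in logic_model.items():
    print(f"{stage:>20}: {', '.join(items)}")
```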

Week 2: Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection

Carol H. Weiss (1998), Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 8-9.

Bornstein D. (2012). “The Dawn of the Evidence-Based Budget.” NY Times, May 30, 2012. Available at: http://opinionator.blogs.nytimes.com/2012/05/30/worthy-of-government-funding-prove-it/

Angrist, J.D. and A.B. Krueger (2000), "Empirical Strategies in Labor Economics," in O. Ashenfelter and D. Card, eds., Handbook of Labor Economics, vol. 3. New York: Elsevier Science. Sections 1 and 2.

Imbens, G.W. and J.M. Wooldridge (2009) "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, vol. 47(1), 5-86.

Khandker, S.R., Koolwal, G.B., & Samad, H.A. (2010). Basic issues of evaluation (pp. 18-29). Excerpt of chapter 2 in Handbook on Impact Evaluation: Quantitative Methods and Practices. Washington, DC: The World Bank.
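
In the potential-outcomes notation running through these readings (Angrist and Krueger; Imbens and Wooldridge; Khandker et al.), the identification problem can be stated in a single decomposition. Each unit has two potential outcomes but reveals only one, so the observed single difference between treated and untreated groups mixes the effect of interest with selection bias:

```latex
% Y_i(1), Y_i(0): unit i's outcomes with and without the program;
% D_i \in \{0,1\}: treatment status. Only Y_i = D_i Y_i(1) + (1-D_i) Y_i(0)
% is ever observed -- the fundamental identification problem.
\begin{aligned}
\underbrace{\mathbb{E}[Y_i \mid D_i=1]-\mathbb{E}[Y_i \mid D_i=0]}_{\text{observed single difference}}
  ={}& \underbrace{\mathbb{E}[Y_i(1)-Y_i(0)\mid D_i=1]}_{\text{effect on the treated}} \\
  &+\underbrace{\mathbb{E}[Y_i(0)\mid D_i=1]-\mathbb{E}[Y_i(0)\mid D_i=0]}_{\text{selection bias}}
\end{aligned}
```

Random assignment makes treatment status independent of the potential outcomes, driving the selection-bias term to zero in expectation; observational designs must instead argue or model it away, which motivates Weeks 3 and 4.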

Week 3: Assessing the Confounding Effects of Unobserved Factors

Rosenbaum, P.R. (2005), "Sensitivity Analysis in Observational Studies," Encyclopedia of Statistics in Behavioral Science, vol. 4, 1809-1814.

Imbens, G.W. (2003), "Sensitivity to Exogeneity Assumptions in Program Evaluation," American Economic Review (Papers & Proceedings), vol. 93(2), 126-132.

Rosenbaum, P.R. (2002), Observational Studies. New York: Springer-Verlag. Chapter 4.

Rosenbaum, P.R. and D.B. Rubin (1983), "Assessing Sensitivity to an Unobserved Binary Covariate in an Observational Study with Binary Outcome," Journal of the Royal Statistical Society. Series B, vol. 45(2), 212-218.
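
To make the threat of unobserved confounding concrete, the simulation below is a hedged sketch in the spirit of Rosenbaum (2002) and Imbens (2003): the data-generating process is invented, with an unobserved factor U that raises both program take-up and the outcome, so the naive single difference overstates the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved confounder U raises both treatment take-up and the outcome.
u = rng.binomial(1, 0.5, n)
d = rng.binomial(1, np.where(u == 1, 0.7, 0.3))   # selection into treatment

true_effect = 2.0
y = true_effect * d + 3.0 * u + rng.normal(0, 1, n)

naive = y[d == 1].mean() - y[d == 0].mean()

# Differencing within levels of U recovers the effect; this is feasible here
# only because the simulation lets us "observe" U. (An unweighted average is
# valid because U is split 50/50 by construction.)
adjusted = np.mean([y[(d == 1) & (u == k)].mean() - y[(d == 0) & (u == k)].mean()
                    for k in (0, 1)])

print(f"true effect:                      {true_effect:.2f}")
print(f"naive single difference (biased): {naive:.2f}")
print(f"difference within levels of U:    {adjusted:.2f}")
```

Sensitivity analysis in the Rosenbaum sense asks the reverse question: how strong would such an unobserved factor have to be to overturn a study's conclusion?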

Week 4: Sensitivity Analysis

Anderson, David, et al. Quantitative Methods for Business. Cengage Learning, 2012. Chapter 4.

McKenzie, Richard B., and Gordon Tullock. "Anything Worth Doing Is Not Necessarily Worth Doing Well." The New World of Economics. Springer Berlin Heidelberg, 2012. pp. 25-42.

Merrifield, J. (1997). Sensitivity analysis in benefit-cost analysis: A key to increased use and acceptance. Contemporary Economic Policy, 15, pp. 82-92.

John Graham, “Risk and Precaution,” transcript of remarks delivered at AEI-Brookings Joint Center conference, “Risk, Science, and Public Policy: Setting Social and Environmental Priorities,” October 12, 2004. http://georgewbush-whitehouse.archives.gov/omb/inforeg/speeches/101204_risk.html
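
As a brief illustration of sensitivity analysis in benefit-cost work (cf. Merrifield 1997), the sketch below recomputes a hypothetical program's net present value under alternative discount rates. All figures are invented; the point is that the sign of the recommendation can flip within a plausible range of a single assumption.

```python
# Sensitivity of a hypothetical program's net present value (NPV) to the
# discount rate. All cash-flow figures are invented for illustration.

def npv(cash_flows, rate):
    """NPV of year-indexed cash flows (year 0 first) at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: upfront cost; years 1-5: annual net benefits.
cash_flows = [-1_000_000] + [260_000] * 5

for rate in (0.03, 0.05, 0.07, 0.10):
    print(f"discount rate {rate:.0%}: NPV = {npv(cash_flows, rate):>12,.0f}")
# The NPV is positive at 3-7% but turns negative at 10%, so the
# recommendation hinges on the discount-rate assumption.
```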

Week 5: Data Collection Strategies

Carol H. Weiss (1998), Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 6 and 8.

Mertens, Donna M., and Wilson, Amy T., Program Evaluation Theory and Practice: A Comprehensive Guide. New York: The Guilford Press, 2012. Chapter 10. 

Wholey, Joseph S., Harry Hatry, Kathryn Newcomer. Handbook of Practical Program Evaluation, 2nd Edition. San Francisco: Jossey-Bass, 2010. Chapter 14 (11-13 and 15-18 optional).

Week 6: Performance Measurement and Performance Management

Dave Ulrich, Delivering Results: A New Mandate for Human Resource Professionals, Boston: Harvard Business School Press, 1998. (Introduction and Chapter 1).

Ebrahim, A., & Rangan, V. K. (2010). The limits of nonprofit impact: A contingency framework for measuring social performance. Boston, MA: Harvard Business School Working Paper. http://www.hbs.edu/research/pdf/10-099.pdf

Kotter, J. P. (1990). What leaders really do. Harvard Business Review, 68(3), 103-111.

Donald Moynihan et al., "Performance Regimes Amidst Governance Complexity," Journal of Public Administration Research and Theory (JPART), January 2011, vol. 21, pp. 141-155.

Laurence J. O’Toole, Jr., "Treating Networks Seriously: Practical and Research-Based Agendas in Public Administration," Public Administration Review, Jan/Feb 1997, vol. 57, no. 1, pp. 45-52.

Lester M. Salamon, "The New Governance and the Tools of Public Action: An Introduction," Chapter 1 (pp. 1-47) in The Tools of Government: A Guide to the New Governance, edited by Lester M. Salamon, Oxford University Press, 2002.
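
To ground the measurement side of this week's topic, the sketch below computes three simple indicators (an efficiency measure, an output rate, and an outcome rate) against benchmarks for a hypothetical employment program. The indicators, figures, and benchmarks are all invented for illustration.

```python
# Performance measurement sketch for a hypothetical employment program.
# Figures and benchmarks are invented for illustration.
data = {"budget": 2_000_000, "clients_served": 5_000,
        "placed_in_jobs": 1_800, "still_employed_6mo": 1_350}

indicators = {
    "cost per client served ($)": data["budget"] / data["clients_served"],
    "job placement rate (%)": 100 * data["placed_in_jobs"] / data["clients_served"],
    "6-month retention rate (%)": 100 * data["still_employed_6mo"] / data["placed_in_jobs"],
}
benchmarks = {"cost per client served ($)": 450.0,
              "job placement rate (%)": 35.0,
              "6-month retention rate (%)": 70.0}

for name, value in indicators.items():
    target = benchmarks[name]
    # Lower is better for cost indicators; higher is better for rates.
    met = value <= target if "cost" in name else value >= target
    print(f"{name:>28}: {value:8.1f}  (benchmark {target:6.1f}, "
          f"{'meets' if met else 'misses'} benchmark)")
```

Such indicators report on performance but do not by themselves establish attribution; that distinction between performance measurement and evaluation is the focus of assessment question 6b below.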

Sample Assessment Questions:

1a) Define the following terms: Sensitivity Analysis; Intangible Benefits of Programs; Result; Results Chain; Beneficiaries; Best Practices; Departmental Evaluation Plan; Departmental Performance Report; Economy (in evaluation); Evaluation Criteria; Evaluation Products; Formative Evaluation; Gaming (in evaluation); Gender-based Analysis (GBA); Impact Evaluation; Open Systems Approach; Program Evaluation; Project/Program Objective; Terms of Reference (in evaluation).
1b) What is a summative evaluation? How does a summative evaluation differ from a formative evaluation?
1c) What are policy outcomes? Why is it preferable for evaluation strategies to measure policy outcomes as opposed to inputs or outputs?
1d) What is gaming? Please provide a (real or hypothetical) example.
1e) What is the role of the “terms of reference” in the evaluation process?
1f) What is meant by the term “intangible benefits of programs”? Why is this an important concept for program evaluators to understand?

2a) Define the following terms: Attribution; Baseline Information; Efficiency; Epistemology; Evaluation Assessment; Needs Assessment; Policy Outputs vs. Outcomes; Program Logic; With-versus-Without; Causal Chain; Causal Images; Causal Relationship.
2b) What is a confounding variable? Why is this an important concept for policy/program evaluators to understand?
2c) What does the term "counterfactual" mean? Why is this an important concept for policy/program evaluators to understand?

3a) Define the following terms: Evaluability; Single Difference (in impact evaluation).
3b) Identify one program, policy or government activity that is particularly difficult to evaluate in terms of efficiency and effectiveness. In a 3-5 page short paper, describe the evaluation challenges involved and identify some possible strategies to overcome those challenges and evaluate the program/policy/activity in question.

4a) Define the following terms: Diagnostic Procedures; Logic Model.
4b) What is sensitivity analysis? Why is this an important topic for students of public administration to study?
4c) What is a logic model? Draw a mock logic model for any public policy/program of your choice.

5a) Define the following terms: Case Studies; Elite Interviews; Literature Review.
5b) What are case studies? What are some of the advantages and disadvantages of case studies as a tool for gathering information about the effectiveness of specific policy choices? Discuss in a short 2-3 page paper.

6a) Define the following terms: Performance; Performance Expectations; Performance Monitoring; Productivity in the Public Sector; Results-Based Management; Results-Based Reporting; Performance Reporting; Performance Story; Benchmark; Expected Result; Intermediate Outcome; Lessons Learned; Neutrality; Objectivity; Outcome; Outputs; Performance Audit; Performance Criteria; Performance Indicator; Performance Measure; Performance Measurement; Performance Measurement Strategy.
6b) What is the difference between performance measurement and evaluation?
6c) What is the role of performance measurement in the policy cycle?
6d) What are three characteristics of useful performance indicators? For the policy or program of your choice, identify two potential performance indicators that would be useful for performance measurement purposes and describe in one paragraph why they are potentially valuable measures of program effectiveness or efficiency.

Page created by: James Ban on 8 July 2015

