Learning Agendas Can Produce Performance and Evaluation Evidence

February 15, 2018

The Commission on Evidence-Based Policymaking strongly recommends that agencies use learning agendas as evaluation plans to build the base of evidence that can be used to make program and government decisions. Learning agendas provide a structured approach for setting priorities for rigorous empirical research about “what works” and what works “best.” They can also include plans for needed research and analysis across a whole range of evidence activities, including statistical performance analysis, and in doing so help promote a culture of evidence.

Performance Management vs. Program Evaluation

Program evaluation and performance management are closely linked, but each serves a unique purpose. Their distinct roles typically require the two sets of activities to be organizationally separate; their overlapping roles, however, require that they coordinate to build evidence that can be used to improve program results.

Performance analysis and program evaluation provide different types of evidence about the results of programs and services. Evaluations are typically conducted by outside, third-party researchers; use a variety of data sources chosen to fit the study question; and often estimate net impacts by comparing observed outcomes to a counterfactual, an estimate of what would have happened without the intervention. Performance management, by contrast, is done in-house, uses administrative program data, and tracks progress toward management goals and objectives that managers and staff can use, ideally in real time, but without a counterfactual comparison.

Performance Management in Learning Agendas

Since performance management and program evaluation each produce information about the results of programs, including some performance-relevant analysis in an agency’s learning agenda can be very useful, complementing the regular and ongoing analysis conducted by the performance management staff. The U.S. Department of Labor provides an example of the types of performance-related studies that can be incorporated into an agency’s learning agenda.

The Department of Labor’s Chief Evaluation Officer coordinates the learning agenda process. In collaboration with evaluation specialists in the Chief Evaluation Office (CEO), each operating agency in the department prepares an annual learning agenda that reflects its priority research topics and questions, including studies related to performance measures and outcomes that could provide evidence about how to improve performance.

The Performance Management Center (PMC) leads the Labor Department’s performance management activities, and the CEO coordinates with PMC in several ways. Rigorous evaluations help policymakers and administrators understand why public programs may or may not be meeting their goals, the relative effectiveness of different strategies for achieving those goals, and what needs to change to improve results. In this way, evaluations contribute evidence that feeds into the performance management process. Examples of CEO studies that address performance measures and measurement include:

  • Collaborative logic model projects with agency staff to develop or refine formal performance measures, particularly to support the development of outcome measures rather than focusing only on outputs.
  • Analysis of factors (e.g., activities or outputs) associated with outcome measures to consider definitional refinements or new measures to more fully capture performance. In one study, management data from workers’ compensation programs were analyzed to identify factors associated with the rate at which individuals return to work after receiving compensation payments because of a work-related injury. Another CEO study analyzed performance metrics capturing the extent to which local programs are providing statutorily required priority services to veterans and their spouses.
  • Statistical analysis of the outcomes of employment-related services to subgroups such as women, ethnic minorities, and veterans returning from active duty.
  • Statistical analysis and program assessments to inform the development of potential new measures, such as an assessment of alternative metrics for employer services performance measures, as required under the Workforce Innovation and Opportunity Act.

Culture of Evidence

Moving from a strict compliance culture to an evidence culture requires collecting, maintaining, and analyzing data and information to address evaluation and performance questions of interest. A strong evidence culture is promoted by a systematic, active, empirically based, and comprehensive program of research and evaluation, consciously coordinated with management and operations through strategic planning and performance management.

Demetra Nightingale is an Institute Fellow at the Urban Institute and Professorial Lecturer at the Trachtenberg School of Public Policy and Public Administration, George Washington University. She formerly served as Chief Evaluation Officer at the U.S. Department of Labor.