2017 Federal Index


Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and research/learning agenda(s) and did it publicly release the findings of all completed evaluations in FY17?

Score
9
Administration for Children and Families (HHS)
  • ACF’s evaluation policy addresses the principles of rigor, relevance, transparency, independence, and ethics, and requires ACF program, evaluation, and research staff to collaborate. For example, the policy states, “ACF program offices will consult with OPRE in developing evaluation activities,” and, “There must be strong partnerships among evaluation staff, program staff, policy-makers and service providers.” ACF established its Evaluation Policy in November 2012 and published it in the Federal Register in August 2014.
  • ACF’s Office of Planning, Research, and Evaluation (OPRE) proposes an evaluation plan to the Assistant Secretary each year in areas in which Congress has provided authority and funding to conduct research and evaluation.
  • ACF’s annual portfolio reviews describe recent work and ongoing learning agendas in the areas of family self-sufficiency, child and family development, and family strengthening, including work related to child welfare, child care, Head Start, Early Head Start, strengthening families, teen pregnancy prevention and youth development, home visiting, self-sufficiency, welfare and employment. Examples include findings from Head Start CARES; the BIAS project; multiple reports from the first nationally representative study of early care and education in over 20 years; early findings on the Maternal, Infant and Early Childhood Home Visiting program; and a report on challenges and opportunities in using administrative data for evaluation.
  • ACF’s evaluation policy requires that “ACF will release evaluation results regardless of findings…Evaluation reports will present comprehensive findings, including favorable, unfavorable, and null findings. ACF will release evaluation results timely – usually within two months of a report’s completion.” ACF has publicly released the findings of all completed evaluations to date. In 2016, OPRE released nearly 100 publications.
Score
8
Corporation for National and Community Service
  • CNCS has an evaluation policy that presents five key principles that govern the agency’s planning, conduct, and use of program evaluations: rigor, relevance, transparency, independence, and ethics.
  • CNCS has an evaluation plan/learning agenda that is updated annually based on input from agency leadership as well as emerging evidence from completed studies. This agenda is reflected in the CNCS Congressional Budget Justifications each year (see Fiscal Year 2016 (pp. 55-56) and Fiscal Year 2017 (pp. 5-6, 55-56)).
  • The CNCS Office of Research and Evaluation has built a portfolio of evidence around the agency’s mission and its programs through research studies conducted by university-based scholars, program evaluations conducted by independent third parties, agency performance metrics, and analyses of nationally representative statistics. A report synthesizing findings from FY16 and early FY17 may be found here. In terms of the agency’s research and learning agenda for Fiscal Year 2017 and beyond, a few examples are worth noting. Two projects, Building Evidence for Service Solutions and Scaling Evidence Based Models, each have project periods of five years (a base year with up to four option years) and reflect the goal of learning across agency programs, systematically building evidence where there is little or none, and bringing to scale models shown to be effective through scientific evidence. Similarly, the agency’s second Research Grant competition builds on the first cohort of grantees (three-year study periods) and encourages knowledge building around the agency’s mission and its programs.
  • CNCS creates four types of reports for public release: research reports produced directly by research and evaluation staff, research conducted by third-party research firms and overseen by research and evaluation staff, reports produced by CNCS-funded research grantees, and evaluation reports submitted by CNCS-funded program grantees. All reports completed and cleared internally are posted to the Evidence Exchange, an electronic repository for reports launched in September 2015. Since it launched, a total of 79 research reports have been made available to the public (8 in FY15, 43 in FY16, and 28 in FY17 thus far).
  • In FY16, CNCS developed Evaluation Core Curriculum Courses, which are presented to its grantees through a webinar series and are available on the CNCS website along with other evaluation resources. The courses are designed to help grantees and other stakeholders easily access materials to aid in conducting or managing program evaluations. In addition to these courses, R&E staff hosted workshops at all four regional staff trainings in FY17 that focused on how to apply findings from research and evaluation studies to the daily operations of AmeriCorps and Senior Corps programs.
Score
9
Millennium Challenge Corporation
  • In March 2017, MCC published a revised Policy for Monitoring and Evaluation that further codifies MCC’s experience ensuring all programs develop and follow comprehensive Monitoring & Evaluation (M&E) plans that adhere to MCC’s standards. Further, this new policy ensures MCC alignment with the recently passed Foreign Aid Transparency and Accountability Act of 2016. The monitoring component of the M&E Plan lays out the methodology and process for assessing progress towards Compact (i.e., grant) objectives. It identifies indicators, establishes performance targets, and details the data collection and reporting plan to track progress against targets on a quarterly basis. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Pursuant to MCC’s M&E policy, every project must undergo an independent evaluation to assess MCC’s impact. Once evaluation reports are finalized, they are published on the MCC Evaluation Catalog. To date, 78 interim and final reports have been publicly released, with several additional evaluations expected to be completed and released in the coming months. MCC also produces periodic reports for internal and external consumption on results and learning, and holds agency-wide sessions that help to translate evaluation results into lessons learned for future compact development.
  • MCC extended its efforts to facilitate open access to its de-identified evaluation data by expanding the number of documented evaluations to 120 studies and by documenting the procedures it has established to ensure both confidentiality protections and the usefulness of these public data. These procedures were developed to ensure a consistent approach across all of MCC’s evaluation data to balancing the dual objectives of protecting survey respondents’ confidentiality and maintaining acceptable levels of comparability to the original data. MCC posted a public-access version of its Microdata Documentation and De-Identification Guidelines on its website in February 2017 to provide guidance to those preparing and using its public-access evaluation data. In March 2017, MCC also published a paper that details the processes MCC has established and discusses key lessons learned in seeking to achieve a consistent, optimal balance between benefits and risks when releasing evaluation data to the public.
  • For fiscal year 2017, MCC has pursued a robust research and learning agenda around better use of its data and evidence for programmatic impact. Broadly, the Department of Policy and Evaluation is focused on learning around MCC’s promotion of policy and institutional reforms (PIR). This includes analytical efforts around cost-benefit analysis of PIR, implementation modalities of PIR, and the sustainability of PIR as a result of MCC compacts. After a sustained learning agenda around its evaluations, this year the M&E division is focused on the use of its monitoring data for real-time learning within compacts. It is seeking to better understand how and when monitoring data are used and how those results can feed back into compact decisions. Finally, the newly launched Star Report is an agency-wide learning effort to systematically capture how and why compacts achieved certain results. The Star Report includes formal learning inflection points at each stage of the compact (development, implementation, and closure) to promote and disseminate learning and evidence for compacts in implementation and for future compacts.
Score
7
Substance Abuse and Mental Health Services Administration
  • SAMHSA’s Evaluation Policy and Procedure (P&P), revised and approved in May 2017, provides guidance across the agency regarding all program evaluations. Specifically, the Evaluation P&P describes the demand for rigor, compliance with ethical standards, and compliance with privacy requirements for all program evaluations conducted and funded by the agency. The Evaluation P&P serves as the agency’s formal evaluation plan and includes a new process for the public release of final evaluation reports, including findings from evaluations deemed significant. The Evaluation P&P sets the framework for planning, monitoring, and disseminating findings from significant evaluations.
  • Results from significant evaluations will be available on SAMHSA’s website, a new step SAMHSA is taking with its newly approved Evaluation P&P, starting in the fall of 2017. Significant evaluations include those that have been identified by the Center Director as providing compelling information and results that can be used to make data-driven, evidence-based, and informed decisions about behavioral health programs and policy. The following criteria are used to determine whether an evaluation is significant: 1) whether the evaluation was mandated by Congress; 2) whether there are high-priority needs in states and communities; 3) whether the evaluation is for a new or congressionally mandated program; 4) the extent to which the program is linked to key agency initiatives; 5) the level of funding; 6) the level of interest from internal and external stakeholders; and 7) the potential to inform practice, policy, and/or budgetary decision-making.
  • CBHSQ is currently leading agency-wide efforts to build SAMHSA’s learning agenda. Through this process, SAMHSA has developed agency-wide Learning Agenda templates in the critical topic areas of opioids, serious mental illness, serious emotional disturbance, suicide, health economics and financing, and marijuana; learning agendas focused on other key topic areas, such as alcohol, are underway as well. Other topics, such as cross-cutting issues related to vulnerable populations, are interwoven throughout these research plans. Through this multi-phased process, CBHSQ is systematically collecting information from across the agency regarding research and analytic activities and organizing it into a guiding framework to be used for decision-making related to priorities and resource allocation. SAMHSA began this process in early 2017 and plans to complete it in the winter of 2018. The template for opioid abuse, the first topic tackled in this effort and thus the most complete to date, has been used to determine research questions and to catalog the current activities underway across the agency that are relevant to these areas. The template follows the construct outlined by OMB in the publication entitled Analytical Perspectives, Budget of the U.S. Government, Fiscal Year 2018.
  • SAMHSA’s Data Integrity Statement outlines how CBHSQ adheres to federal guidelines designed to ensure the quality, integrity, and credibility of statistical activities.
  • SAMHSA’s National Behavioral Health Quality Framework, aligned with the U.S. Department of Health and Human Services’ National Quality Strategy, is a framework to help providers, facilities, payers, and communities better track and report the quality of behavioral health care. Through this framework, SAMHSA “proposes a set of core measures to be used in a variety of settings and programs, as well as in the evaluation and quality assurance efforts.” These metrics are focused primarily on high-rate behavioral health events such as depression, alcohol misuse, and tobacco cessation, all of which impact health and health care management and thus affect a large swath of the U.S. population.
Score
8
U.S. Agency for International Development
  • USAID has an agency-wide Evaluation Policy published in 2011, which was updated in October 2016 to reflect revisions made to USAID’s Automated Directives System (ADS) Chapter 201: Program Cycle Operational Policy, released in September 2016. The policy updates changed evaluation requirements to simplify implementation and increase the breadth of evaluation coverage. The updates also seek to strengthen evaluation dissemination and utilization. The agency released a report in 2016 to mark the five-year anniversary of the policy.
  • USAID field missions are required to have an evaluation plan, and all USAID missions and offices provide an internal report on an annual basis on completed, ongoing and planned evaluations, including evaluations planned to start anytime in the next three fiscal years.
  • All Washington Bureaus may develop annual evaluation action plans that review evaluation quality and use within the Bureau and identify challenges and priorities for the year ahead, including support to Missions. USAID’s Office of Learning, Evaluation and Research (LER) works with bureau M&E points of contact to review implementation of these action plans on a quarterly basis and provides support as appropriate and feasible. LER uses the evaluation action plans as a source for Agency-wide sharing of successes and challenges in improving evaluation quality and use.
  • Given USAID’s decentralized structure, individual programs, offices, bureaus, and missions may develop learning agendas to guide their research and evaluation efforts. USAID’s current learning agenda efforts are decentralized and vary in focus, centering on regions, technical areas (e.g., democracy and governance, health systems, and food security), or cross-cutting efforts. In March 2017, LER published a report titled “Learning Agenda Landscape Analysis,” which provides a summary of 19 learning agendas across USAID and compiles promising practices for developing and using learning agendas. Learning agendas enable USAID to identify knowledge gaps and determine how monitoring, evaluation, field research, and other learning activities can be designed to fill those gaps, thereby generating evidence that, when coupled with adaptive management practices, improves decision-making and facilitates continuous organizational improvement.
  • LER is in the process of developing a learning agenda to answer a few priority questions on how Program Cycle policy requirements are being perceived and implemented across the Agency. The answers to those questions will help USAID better target capacity building support to staff and partners for more effective programs and may inform future updates to the policy.
  • All final USAID evaluation reports are available on the Development Experience Clearinghouse except for approximately five percent of evaluations completed each year that are not public due to principled exceptions to the presumption in favor of openness guided by OMB Bulletin 12-01 Guidance on Collection of U.S. Foreign Assistance Data. For FY2015 and FY2016, USAID began to visualize where evaluations took place and across which sectors. The graphic also includes short narratives that describe findings from selected evaluations and how that information informed decision-making.
  • Beginning in 2016, USAID’s Office of Policy, within PPL, began conducting assessments of the implementation of the Agency’s suite of development policies to understand how a policy has impacted Agency programming and processes. So far two assessments have been completed, examining implementation of the Gender Equality and Female Empowerment Policy and the Democracy, Human Rights, and Governance Strategy; and two more policies are undergoing assessment: the Development Response to Violent Extremism and Insurgency Policy and the Youth in Development Policy.
  • Since September 2016, USAID’s multi-year Country Development Cooperation Strategies have required a learning plan that outlines how missions will incorporate learning into their programming, including activities like regular portfolio reviews, evaluation recommendation tracking and dissemination plans, and other analytic processes to better understand the dynamics of their programs and their country contexts. In addition to mission strategic plans, all projects and activities are now also required to have integrated monitoring, evaluation, and learning plans.
Score
8
U.S. Department of Education
  • ED has a scientific integrity policy to ensure that all scientific activities (including research, development, testing, and evaluation) conducted and supported by ED are of the highest quality and integrity, and can be trusted by the public and contribute to sound decision-making. The policy may be accessed here.
  • The Institute of Education Sciences (IES) and the Policy and Program Studies Service (PPSS), in concert with the EPG, work with program offices and ED leadership on the development of ED’s annual evaluation plan. This plan is implemented through ED’s annual spending plan process.
  • In addition, IES prepares and submits to Congress a biennial, forward-looking evaluation plan covering all mandated and discretionary evaluations of education programs funded under the Elementary and Secondary Education Act, as amended by the Every Student Succeeds Act (P.L. 114-95) (ESSA). IES and PPSS work with programs to understand their priorities, design appropriate studies to answer the questions being posed, and share results from relevant evaluations to help with program improvement. This serves as a research and learning agenda for ED.
  • ED’s FY 2016 Annual Performance Report and FY 2018 Annual Performance Plan includes a list of ED’s current evaluations, organized by subject matter area. IES publicly releases findings from all of its completed, peer-reviewed evaluations on the IES website and also in the Education Resources Information Center (ERIC). IES announces all new evaluation findings to the public via a Newsflash and through social media. IES also regularly conducts briefings on its evaluations for ED, the Office of Management and Budget, Congressional staff, and the public.
  • Finally, IES manages the Regional Educational Laboratory (REL) program, which supports districts, states, and boards of education throughout the United States in using research and evaluation in decision-making. The research priorities are determined locally, but IES approves the studies and reviews the final products. All REL studies are made publicly available on the IES website.
Score
8
U.S. Dept. of Housing & Urban Development
  • HUD’s Office of Policy Development and Research (PD&R) has published an evaluation policy that establishes core principles and practices of PD&R’s evaluation and research activities. The six core principles are rigor, relevance, transparency, independence, ethics, and technical innovation.
  • PD&R’s evaluation policy guides HUD’s research planning efforts, known as research roadmapping. Key features of research roadmapping include reaching out to internal and external stakeholders through a participatory approach; making research planning systematic, iterative, and transparent; driving a learning agenda by focusing on research questions that are timely, forward-looking, and policy-relevant and that leverage HUD’s comparative advantages and partnership opportunities; and aligning research with HUD’s strategic goals and areas of special focus. HUD also employs its role as convener to help establish frameworks for evidence, metrics, and future research.
  • HUD’s original “Research Roadmap FY14-FY18” and “Research Roadmap: 2017 Update” constitute the core of HUD’s learning agenda. The roadmaps are strategic, five-year plans for priority program evaluations and research to be pursued given a sufficiently robust level of funding. PD&R also integrated its evaluation plan into HUD’s FY14-FY18 Strategic Plan (see pp. 57-63) to strengthen the alignment between evaluation and performance management. During FY16, PD&R used similar principles and methods to refresh the Roadmap to address emerging research topics. PD&R’s fiscal year budget requests include annual research plans drawn from the Roadmap. Actual research activities are substantially determined by Congressional funding and guidance.
  • The Research Roadmap serves as a long-term evaluation plan and the core of HUD’s learning agenda. HUD also develops annual evaluation plans, consisting of a list of specific research priorities, as requested by Congress.
  • PD&R’s policy (p. 87950) is to publish and disseminate all evaluations that meet standards of methodological rigor in a timely fashion. Additionally, PD&R includes language in research and evaluation contracts that allows researchers to independently publish results, even without HUD approval, after not more than six months. PD&R has occasionally declined to publish reports that fell short of standards for methodological rigor. Completed evaluations and research are summarized in HUD’s Annual Performance Report (see pp. 123–131) at the end of each fiscal year, and reports are posted on PD&R’s website, HUDUSER.gov.
Score
9
U.S. Department of Labor
  • DOL has an Evaluation Policy Statement that formalizes the principles that govern all program evaluations in the Department, including methodological rigor, independence, transparency, ethics, and relevance. In addition, the Chief Evaluation Office publicly communicates the standards and methods expected in DOL evaluations in formal procurement statements of work.
  • DOL also develops, implements, and publicly releases an annual Evaluation Plan (i.e., a Department-level learning agenda) that includes planned projects with each of DOL’s operating agencies. Agency learning agendas, developed by CEO in partnership with each operating agency, form the basis for DOL’s Evaluation Plan. The 2016 Evaluation Plan was posted in the Federal Register; the 2017 plan will be posted on the CEO website once finalized, before the end of the fiscal year.
  • Once contracts are awarded for new evaluation studies, the studies are posted on the Current Studies page of CEO’s website so the public can see everything currently underway, as well as timelines for study completion and publication of results.
  • All DOL reports and findings are publicly released and posted in the complete reports section of the CEO website. The Chief Evaluation Officer has the “authority to approve, release, and disseminate evaluation reports” (per the DOL Evaluation Policy). DOL agencies also post and release their own research and evaluation reports.
