2016 Federal Index
Common Evidence Standards / What Works Designations
Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY17? (Example: What Works Clearinghouses)
Score
9
Administration for Children and Families (HHS)
- ACF has established a common evidence framework adapted for the human services context from the framework for education research developed by the U.S. Department of Education and the National Science Foundation. The ACF framework, which includes the six types of studies delineated in the ED/NSF framework, aims to (1) inform ACF’s investments in research and evaluation, and (2) clarify for potential grantees and others the expectations for different types of studies.
- ACF maintains an online clearinghouse of evidence reviews of human services interventions. These reviews rate the quality of evaluation studies using objective standards that are vetted by technical experts, applied by trained, independent reviewers, and similar to those used by other agencies, such as the U.S. Department of Education’s What Works Clearinghouse and the U.S. Department of Labor’s CLEAR. The clearinghouse presents the results of the reviews in a searchable format, along with comprehensive details about the review standards and process. Reviews to date have covered teen pregnancy prevention; home visiting; marriage education and responsible fatherhood; and employment and training, and include both ACF-sponsored and other studies.
Score
8
Corporation for National and Community Service
- CNCS also adapted the evidence framework used by its Social Innovation Fund and the Investing in Innovation Fund at ED and included it in the AmeriCorps State and National program’s FY16 and FY17 grant competitions. The evidence framework used in the FY16 and FY17 AmeriCorps competitions was revised from the FY15 version to make it more consistent with the frameworks used in other federal agencies.
- In March 2015, CNCS released Phase 1 of the CNCS Evidence Exchange, a virtual repository of reports intended to help CNCS grantees and other interested stakeholders find information about evidence- and research-based national service and social innovation programs. Phase 1 includes a database of single-study reports with additional descriptive information about each study, as well as a systematic review of the national service evidence base. Phase 2, in FY16, added studies as grantees completed their independent evaluations and submitted reports to CNCS. In FY17, CNCS focused on disseminating final reports as studies were completed and on ensuring that the site’s functionality made the information as accessible as possible.
Score
8
Millennium Challenge Corporation
- MCC uses common, rigorous, evidence-based selection criteria to ensure objectivity in country selection for grant awards. To be eligible for selection, countries must first pass the MCC 2017 Scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. The criteria for passing the 2017 Scorecard are applied universally to all low- and lower-middle-income candidate countries. MCC’s Board of Directors then considers 3 key factors for selecting countries: 1) a country’s performance on the 2017 Scorecard; 2) the opportunity to reduce poverty and generate economic growth; and 3) availability of funds. An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report.
- MCC’s model is based on a set of core principles deemed essential for development assistance to be effective – good governance, country ownership, focus on results, and transparency. In pursuing these, MCC has created a Principles into Practice series which describes how to make these principles operational. All of MCC’s evaluations are published on the MCC Evaluation Catalog. Associated data, upon which evaluations are based, are published when confidentiality concerns are adequately addressed.
- MCC is undertaking internal research and analysis to understand where and how its published evaluations and datasets are utilized. This effort underscores MCC’s commitment to transparency and learning as MCC seeks to widen the understanding and use of the evidence it produces. The results of this analysis will guide future efforts on evidence-based learning. MCC will finalize baseline metrics around learning and evaluation utilization in September 2017 and then continue tracking global use of MCC knowledge products on a quarterly basis.
- In FY17, MCC sought to strengthen its outreach and dissemination of results in more innovative ways. In July 2016 MCC held its first evidence workshop in El Salvador with more than 180 policymakers, practitioners, and researchers. The workshop – Closing the Gap: Strengthening the Ties between Evaluation and Policy – was an opportunity for MCC, the Government of El Salvador, and other partners to share evaluation results from education and investment climate projects and find ways to incorporate that knowledge into new programs and policies. As part of the workshop, participants committed to use the lessons learned to improve education, gender, and legal and regulatory policy to make the business climate more competitive and help ensure that better educated students can find higher paying jobs in El Salvador. MCC has worked closely with the implementing entity in El Salvador and the President’s Technical Secretariat to organize follow-up M&E trainings, scheduled for summer 2017.
- To further bring attention to MCC’s evaluation and evidence focus, MCC launched an evaluation newsletter called Statistically Speaking in January 2017. This newsletter highlights recent evidence and learning from MCC’s programs with a special emphasis on how MCC’s evidence can offer practical policy insights for policymakers and development practitioners in the United States and in partner countries. It also seeks to familiarize a wider audience with the evidence and results of MCC’s investments.
- Finally, MCC is developing an enhanced results framework that will better communicate the full picture of the impact of its programs and enrich programmatic learning. Currently in draft form, the framework will help MCC consolidate impacts across projects, compacts, and sectors to assess overall impact at an organizational level.
Score
7
Substance Abuse and Mental Health Services Administration
- There is great diversity across SAMHSA programming, ranging from community-level prevention activities to residential programs for pregnant and post-partum women with substance misuse issues. While this diversity allows SAMHSA to be responsive to a wide set of vulnerable populations, it limits the utility of a common evidence framework for the entire agency. Within Centers (the Center for Substance Abuse Prevention, the Center for Substance Abuse Treatment, and the Center for Mental Health Services), consistent evidence frameworks are in use and help to shape the process of grant-making (e.g., Center staff are familiar with the pertinent evidence base for their particular portfolios). At the programmatic level, staff review the state-of-the-art for a particular topic area to facilitate grantee adoption and implementation of evidence-based practices (EBPs). While staff awareness of EBPs varies, a systematic approach to evidence classification remains to be developed. Most Center staff rely on the National Registry of Evidence-based Programs and Practices to identify evidence-based programs for grantee implementation.
- SAMHSA includes standard language on using evidence-based practices (EBPs), in a section entitled Using Evidence-Based Practices (EBPs), in its Funding Opportunity Announcements (FOAs). This language acknowledges that “EBPs have not been developed for all populations and/or service settings,” thus encouraging applicants to “provide other forms of evidence” that a proposed practice is appropriate for the intended population. Specifically, the language states that applicants should: 1) document that the EBPs chosen are appropriate for intended outcomes; 2) explain how the practice meets SAMHSA’s goals for the grant program; 3) describe any modifications or adaptations needed for the practice to meet the goals of the project; 4) explain why the EBP was selected; 5) justify the use of multiple EBPs, if applicable; and 6) discuss training needs or plans to ensure successful implementation. Lastly, the language lists resources the applicant can use to understand EBPs. Federal grants officers work in collaboration with the SAMHSA Office of Financial Resources to ensure that grantee funding announcements clearly describe the evidence standard necessary to meet funding requirements.
- In 2011, based on the model of the National Quality Strategy, SAMHSA developed the National Behavioral Health Quality Framework (NBHQF). Through the NBHQF, SAMHSA proposes a set of core measures to be used in a variety of settings and programs, as well as in evaluation and quality assurance efforts. The proposed measures are not intended to be a complete set of measures that a payer, system, practitioner, or program may want to use to monitor the quality of its overall system or of the care and activities it provides. SAMHSA encourages such entities to use these basic measures, as appropriate, as a consistent set of indicators of quality in behavioral health prevention, promotion, treatment, and recovery support efforts across the nation.
- SAMHSA’s National Registry of Evidence-Based Programs and Practices (NREPP) is an online registry of more than 450 substance use and mental health interventions that meet minimum review requirements. Its purpose is to “help people learn more about available evidence-based programs and practices and determine which of these may best meet their needs.” By providing a clearinghouse for evidence-based practices, SAMHSA is working to provide timely and relevant scientific knowledge for practical application. In 2015-2016, NREPP underwent a significant set of revisions to ensure a more rigorous review process prior to posting to the registry. NREPP previously accepted only voluntarily submitted programs, which left key gaps when programs were not submitted for a variety of reasons; the new approach allows SAMHSA to fill those gaps in the registry.
- Before being posted to NREPP, interventions undergo a review process that ensures reliability by “tak[ing] into account the methodological rigor of evaluation studies, the size of the program’s impact on an outcome, the degree to which a program was implemented as designed, and the strength of the program’s conceptual framework.” The review process results in an “outcome evidence rating” of Effective, Promising, Ineffective, or Inconclusive. Additionally, NREPP provides tools to help decision makers use the information in the best way possible.
- In addition, the NREPP Learning Center was revamped and launched in June 2017 as a companion site to the more traditional registry, focused on emerging practices and on the uptake and adoption of evidence-based programs and practices. Input from a diverse set of stakeholders guided the development of the new Learning Center. With its June launch, the Learning Center now provides a home for emerging practices and programs of critical interest to vulnerable populations that may not have the resources to engage in rigorous evaluation or for whom traditional evaluation techniques are not culturally appropriate. The Learning Center also engages stakeholders interested in selecting, adopting, and implementing a new program, offering a variety of learning tools, including videos that describe the activities from implementer and developer perspectives.
Score
8
U.S. Agency for International Development
- USAID has a scientific research policy that sets out quality standards for research. USAID’s Program Cycle policy includes specific evidence standards for strategic planning, project design, monitoring, and evaluation. For example, USAID policy requires evidence and data to assess the development context, challenges, and opportunities in all of USAID’s country strategies. Similarly, all USAID projects must include a detailed analytical phase in the Project Appraisal Document.
- USAID conducts most of its Agency-wide engagement around evidence and frameworks for “what works” through its board membership in, and funding of (along with other donors), the International Initiative for Impact Evaluation (3ie), which funds impact evaluations and systematic reviews that generate evidence on what works in development programs and why. Rather than creating a separate “what works” clearinghouse, USAID has chosen to work with 3ie and other development partners to support 3ie’s database of impact evaluations relevant to development topics (over 2,500 entries to date), as well as evidence gap maps and systematic reviews that pull the most rigorous evidence and data from across donors. 3ie also houses a collection of policy briefs that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence. Various USAID bureaus and operating units have funded 3ie to produce evidence gap maps on topics such as science, technology, innovation, and partnership; state-society relations; and productive safety nets.
- USAID technical bureaus provide guidance, based on evidence of “what works” by sector, that applies to all relevant Agency programs. USAID’s Bureau for Democracy, Conflict and Humanitarian Assistance (DCHA), for example, includes the Center of Excellence on Democracy, Human Rights, and Governance (the DRG Center), which publishes evidence-based standards for what works in this field. The DRG Center convenes leading scholars from a range of fields to work with USAID to study, analyze, and assess the effectiveness of its initiatives and programs in DRG, using this data to shape programming. In addition, USAID established the Evaluating Democracy and Governance Effectiveness (EDGE) Initiative to supply and apply sophisticated tools to measure the impact of democracy, human rights, and governance work and to infuse evidence-based programmatic decision-making throughout the DRG portfolio. In another example, USAID’s Global Health Bureau has a strategic framework that presents, in Annex 1, specific evidence-based strategies, targets, and approaches for achieving goals within each technical area under the health priorities.
- Several USAID bureaus also synthesize all the evaluations relevant to a specific sector to summarize key findings and identify gaps in knowledge that then inform sector learning agendas. For example, in March 2016, the Bureau for Food Security (BFS) published a synthesis report summarizing findings from 196 evaluations of Feed the Future projects that focused on the six themes outlined in the Feed the Future Learning Agenda. Across the themes, the synthesis illuminated trends and patterns, which BFS shared with relevant staff and stakeholders engaged in designing new projects or updating sector strategies and policies. The synthesis also identified gaps where more evaluation research is needed, helping to inform the design of future evaluations that can address Learning Agenda questions, contribute to the body of knowledge on food security, and improve the design and management of interventions in the agriculture, resilience, and nutrition sectors. The synthesis helped to inform the Feed the Future Global Performance Evaluation and the Global Food Security Strategy.
Score
10
U.S. Department of Education
- ED’s evidence standards for its grant programs, as outlined in the Education Department General Administrative Regulations (EDGAR), build on ED’s What Works Clearinghouse™ (WWC) evidence standards. ED uses these same evidence standards in all of its discretionary grant competitions that use evidence to direct funds to applicants proposing to implement projects that have evidence of effectiveness and/or to build new evidence through evaluation (see Question #8 below for more detail). As noted above, EPG has coordinated the development of revised evidence definitions and related selection criteria for competitive programs that align with the Every Student Succeeds Act (ESSA) and streamline and clarify provisions for grantees. These revised definitions align with ED’s suggested criteria for States’ implementation of ESSA’s four evidence levels, included in ED’s non-regulatory guidance, Using Evidence to Strengthen Education Investments.
- Additionally, in 2013, IES and the National Science Foundation issued a joint report describing six types of research studies that can generate evidence about how to increase student learning. These principles are based, in part, on the research goal structure and expectations of IES’s National Center for Education Research (NCER) and National Center for Special Education Research (NCSER). NCER and NCSER communicate these expectations through their Requests for Applications and through webinars that are archived on the IES website and available to all applicants.
- ED’s What Works Clearinghouse™ (WWC) identifies studies that provide valid and statistically significant evidence of the effectiveness of a given practice, product, program, or policy (referred to as “interventions”) and disseminates summary information and reports on the WWC website. The WWC has reviewed more than 10,000 studies, which are available in a searchable database, and has committed to reviewing all publicly available evaluation reports generated under i3 grants. In FY 2016, 37 i3 grant evaluation reports, containing 48 studies, were reviewed and are included in the searchable database. In fall 2016, ED revised and enhanced the WWC website to make evidence easier to access, including through improved navigation and the “Find What Works” tool, which makes it easier to find relevant educational programs and interventions.
Score
7
U.S. Dept. of Housing & Urban Development
- HUD’s Office of Policy Development and Research (PD&R) provides evidence of “what works” primarily through HUD USER, a portal and web store for program evaluations, case studies, and policy analysis and research; the Regulatory Barriers Clearinghouse; and initiatives such as Innovation of the Day, Sustainable Construction Methods in Indian Country, and the Consumer’s Guide to Energy-Efficient and Healthy Homes. This content is designed to provide current policy information, elevate effective practices, and synthesize data and other evidence in accessible formats. Through these resources, researchers and practitioners can see the full breadth of work on a given topic (e.g., rigorous established evidence, case studies of what’s worked in the field, and new innovations currently being explored) to inform their work.
Score
9
U.S. Department of Labor
- DOL uses the Cross-agency Federal Evidence Framework for evaluation planning and dissemination.
- DOL’s Clearinghouse for Labor Evaluation and Research (CLEAR) is an internet-based evidence clearinghouse of evaluation reports that reviews designs, methodologies, and findings according to specific standards developed by technical work groups. Each study is scored and given a “causal evidence rating” according to the scoring rubric in the standards. CLEAR is a user-friendly, searchable website that includes academic quality reviews for each study in the system, appropriate for peer academic researchers, potential evaluation contractors submitting technical proposals, program practitioners seeking information on “what works,” policymakers, and the general public.
- DOL uses the CLEAR evidence guidelines and standards when discretionary program grants are awarded using evidence-informed or evidence-based criteria. The published guidelines and standards are thus used both in grants for evidence-based program demonstrations and in the structured evidence reviews of evaluations conducted in CLEAR. Requests for proposals also indicate that the CLEAR standards apply to all CEO evaluations. In addition, DOL has a “Department Evaluation Policy Statement” that formalizes the principles governing all program evaluations in the Department, including methodological rigor, independence, transparency, ethics, and relevance. CEO also publicly communicates the standards and methods expected in all DOL evaluations, and the standards are incorporated into formal procurement statements of work, with scoring for awards based on the standards.
- Additionally, DOL collaborates with other agencies (HHS, ED-IES, NSF, CNCS) on refining cross-agency evidence guidelines and developing technological procedures to link and share reviews across clearinghouses. The Interagency Evidence Framework conveys the categories of evaluations, the quality review of evaluation methodologies and results, and the use of evaluation findings. The framework is accepted Department-wide.