2016 Federal Index


Use of Evidence in 5 Largest Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY17? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)

Score
7
Administration for Children and Families (HHS)
  • In FY17, ACF’s 5 largest competitive grant programs are: 1) Head Start ($8.6 billion); 2) Unaccompanied Children Services ($1.4 billion); 3) Early Head Start-Child Care Partnerships ($0.6 billion); 4) Transitional and Medical Services ($0.5 billion); and 5) Preschool Development Grants ($0.25 billion).
  • ACF’s template (see p. 14 in Attachment C) for grant announcements includes two options, requiring grantees to either 1) collect performance management data that contributes to continuous quality improvement and is tied to the project’s logic model, or 2) conduct a rigorous evaluation for which applicants must propose an appropriate design specifying research questions, measurement, and analysis.
  • In FY12, ACF established the Head Start Designation Renewal System requiring Head Start ($8.6 billion in FY17) grantees to compete for grants moving forward if they failed to meet criteria related to service quality, licensing and operations, and fiscal and internal control.
  • ACF’s Personal Responsibility Education Program ($69.8 million in FY17) includes three individual discretionary grant programs that support evidence-based competitive grants that teach youth about abstinence and contraception to prevent pregnancy and sexually transmitted infections.
  • To receive funds through ACF’s Community Based Child Abuse Prevention (CBCAP) program ($39.6 million in FY17), states must “demonstrate an emphasis on promoting the increased use and high quality implementation of evidence-based and evidence-informed programs and practices.” CBCAP defines evidence-based and evidence-informed programs and practices along a continuum with four categories: Emerging and Evidence-Informed; Promising; Supported; and Well Supported. Programs that fall within these parameters are considered “evidence-informed” or “evidence-based” practices (EBPs), as opposed to programs that have not been evaluated against any set criteria. ACF monitors the percentage of program funds directed toward evidence-based and evidence-informed practices (most recently 61.1% in FY15).
Score
9
Corporation for National and Community Service
  • CNCS is operating three competitive grant programs in FY17: 1) the AmeriCorps State and National program (excluding State formula grant funds) ($232,372,470 in FY17); 2) the Senior Corps RSVP program ($49 million in FY17); and 3) the Social Innovation Fund (SIF) ($50 million in FY16). (SIF funding was zeroed out by Congress in FY17, but CNCS still spent FY16 SIF funds in FY17.)
  • CNCS’s AmeriCorps State and National Grants Program (excluding State formula grant funds) application (see pp. 10-14) allocated up to 31 points out of 100 to organizations that submitted applications supported by performance and evaluation data in FY17. Specifically, up to 19 points can be assigned to applications with theories of change supported by relevant research literature, program performance data, or program evaluation data; and up to 12 points can be assigned for an applicant’s incoming level of evidence, with the highest number of points awarded to the strongest levels of evidence (see the scoring sketch after this list). These categories of evidence are modeled closely on the levels of evidence defined in the Social Innovation Fund.
  • From FY10-FY16, SIF provided competitive grants to non-profit grant-making organizations to help them grow promising, evidence-based solutions that address pressing economic opportunity, healthy futures, and youth development issues in low-income communities. The FY14-16 Omnibus Appropriations Acts allowed CNCS to invest up to 20% of SIF funds each year in Pay for Success initiatives.
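To illustrate how this kind of point allocation works in practice, the following is a minimal sketch (in Python) of the evidence portion of an AmeriCorps-style application score. The 19-point and 12-point caps come from the FY17 application figures above; the tier names mirror the SIF-style levels of evidence, but the intermediate point values, function name, and sample applicant are hypothetical.

```python
# Minimal sketch of the evidence scoring described above: up to 19 points for
# a theory of change supported by research, performance, or evaluation data,
# plus up to 12 points for the applicant's incoming level of evidence.
# Intermediate tier values below are illustrative assumptions; only the
# 12-point maximum for "strong" evidence is taken from the text.

MAX_THEORY_OF_CHANGE_POINTS = 19

EVIDENCE_TIER_POINTS = {
    "strong": 12,       # highest number of points for strong evidence
    "moderate": 8,      # assumed intermediate value
    "preliminary": 4,   # assumed intermediate value
    "none": 0,
}

def evidence_score(theory_of_change_points: int, evidence_tier: str) -> int:
    """Return the evidence portion (max 31) of a 100-point application score."""
    toc = min(max(theory_of_change_points, 0), MAX_THEORY_OF_CHANGE_POINTS)
    return toc + EVIDENCE_TIER_POINTS[evidence_tier]

# Hypothetical applicant: a well-supported theory of change (17 of 19 points)
# plus moderate incoming evidence earns 25 of the 31 available evidence points.
print(evidence_score(17, "moderate"))  # -> 25
```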
Score
8
Millennium Challenge Corporation
  • MCC awards all of its agency funds through two competitive grant programs: the Compact and Threshold programs (whose FY17 budgets were $671.2 million and $29.9 million, respectively). Both types of grants require demonstrable, objective evidence supporting the likelihood of project success in order for funds to be awarded. For country partner selection, MCC uses twenty different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These indicators (see MCC’s FY2017 Guide to the Indicators) are collected by independent third parties. When considering a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact; (2) exhibited improved 2017 Scorecard policy performance during the partnership; and (3) exhibited a continued commitment to further their sector reform efforts in any subsequent partnership. As a result, the Board applies an even higher standard when selecting countries for subsequent compacts.
  • Following country selection, MCC conducts a constraints analysis to identify the most binding constraints to private investment and entrepreneurship that hold back economic growth. The results of this analysis enable the country, in partnership with MCC, to select compact or threshold activities most likely to contribute to sustainable, poverty-reducing growth. Due diligence, including feasibility studies where applicable, is conducted for each potential investment. MCC also performs a cost-benefit analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR). MCC uses a 10% ERR hurdle to prioritize and fund the projects with the greatest opportunity for maximizing impact (a minimal sketch of this calculation appears after this list). MCC then recalculates ERRs at compact closeout, drawing on information from MCC’s monitoring data inter alia, in order to test original assumptions and assess the cost-effectiveness of MCC programs. In connection with the ERR, MCC conducts a Beneficiary Analysis, which seeks to describe precisely which segments of society will realize the project benefits. It is most commonly used to assess the impact of projects on the poor, but it has broader applicability that allows for the estimation of impact on populations of particular interest, such as women, the aged, children, and regional or ethnic sub-populations.
  • In line with MCC’s M&E policy, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets. MCC also requires independent evaluations of every project to assess progress in achieving outputs and outcomes throughout the lifetime of the project and beyond.
  • In February 2017, MCC issued new Compact Development Guidance. This guidance codifies MCC’s commitment to using evidence to inform country and project selection by requiring that each project meet certain investment criteria, such as generating high economic returns, including clear metrics for results, and supporting the long-term sustainability of results.
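As a rough illustration of the ERR screen described above, here is a minimal sketch in Python: it finds the discount rate at which a hypothetical project’s net benefit stream has zero net present value and compares that rate to MCC’s 10% hurdle. The cash flows, function names, and bisection approach are illustrative assumptions, not MCC’s actual models or data.

```python
# Sketch of an ERR screen: the ERR is the discount rate at which the net
# present value (NPV) of a project's annual net benefits equals zero.
# The cash flows below are hypothetical, not actual MCC project data.

def npv(rate, cash_flows):
    """NPV of a stream of annual net benefits, with year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def err(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Locate the rate where NPV crosses zero, via bisection.

    Assumes NPV is positive at `lo` and negative at `hi`, which holds for
    the usual pattern of upfront costs followed by positive net benefits.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # still profitable at this rate; the ERR is higher
        else:
            hi = mid
    return (lo + hi) / 2

HURDLE_RATE = 0.10  # MCC's 10% ERR threshold

# Hypothetical compact project: $50M upfront cost, then $9M/year in net
# benefits for 15 years.
flows = [-50.0] + [9.0] * 15
rate = err(flows)
print(f"ERR = {rate:.1%} -> {'fund' if rate >= HURDLE_RATE else 'do not fund'}")
```

Recalculating the ERR at closeout, as MCC does, amounts to rerunning the same calculation with observed rather than projected benefit streams.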
Score
7
Substance Abuse and Mental Health Services Administration
  • SAMHSA’s 5 largest competitive grant programs for which funds were appropriated in FY17 are: 1) Opioid State Targeted Response ($500 million in FY17); 2) Children’s Mental Health Initiative ($119 million in FY17); 3) Strategic Prevention Framework ($119.5 million in FY17); 4) Substance Abuse Treatment Criminal Justice ($78 million in FY17); and 5) Targeted Capacity Expansion – General ($67.2 million in FY17).
  • The President’s Budget request for SAMHSA for FY18 stipulates “that up to 10% of amounts made available to carry out the Children’s Mental Health Initiative may be used to carry out demonstration grants or contracts for early interventions with persons not more than 25 years of age at clinical high risk of developing first episode of psychosis.” Specifically, funds from this set-aside should address whether community-based interventions during the prodrome phase can prevent further development of serious emotional disturbances and eventual serious mental illness, and the extent to which evidence-based early interventions can be used to delay the progression of mental illness, reduce disability, and/or maximize recovery.
  • SAMHSA includes universal language about using evidence-based practices (EBPs), entitled “Using Evidence-Based Practices (EBPs),” in its Funding Opportunity Announcements (FOAs). This language acknowledges that “EBPs have not been developed for all populations and/or service settings,” thus encouraging applicants to “provide other forms of evidence” that a proposed practice is appropriate for the intended population. Specifically, the language states that applicants should: 1) document that the EBPs chosen are appropriate for intended outcomes; 2) explain how the practice meets SAMHSA’s goals for the grant program; 3) describe any modifications or adaptations needed for the practice to meet the goals of the project; 4) explain why the EBP was selected; 5) justify the use of multiple EBPs, if applicable; and 6) discuss training needs or plans to ensure successful implementation. Lastly, the language includes resources applicants can use to understand EBPs. SAMHSA shares this evidence-based program and practice language with grantees as they compete for SAMHSA grants and describe the types of program/practice implementation they hope to engage in to address the needs of their particular target populations and communities. The review criteria contained in the FOA make clear that applicants proposing to use programs and practices with a more robust evidence base will receive higher scores and thus greater support for their funding application.
  • The President’s FY18 Budget for SAMHSA proposes a tiered-evidence approach in the Screening, Brief Intervention, and Referral to Treatment (SBIRT) program, which would allow funding allocations and awards based on the implementation of both innovative practices or programs and more standard programming. Grant funding would be tied to the particular approach taken by the grantee. At present, SAMHSA does not use preference points to link funds to evidence of effectiveness; however, the 10% set-aside includes language suggesting that the Coordinated Specialty Care model is a first-episode approach of importance to this work.
  • Among SAMHSA’s standard terms and conditions for all grant funding is the requirement that grantees collect and report evaluation data to ensure the effectiveness and efficiency of its programs under the Government Performance and Results Act (GPRA) Modernization Act of 2010 (P.L. 111-352). In addition, grantees must comply with performance goals and expected outcomes described in Funding Opportunity Announcements (FOAs), which may include participation in an evaluation and/or local performance assessment. While exemplar FOAs are not available to be shared publicly at this juncture, SAMHSA is developing the first tiered-evidence FOA, to be funded in FY2018, a key step to incentivize innovative practice/program models among grantees.
Score
8
U.S. Agency for International Development
  • USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the Agency’s work. USAID has rebuilt its planning, monitoring, and evaluation framework to produce and use evidence through the introduction of a new Program Cycle, which systematizes the use of evidence across all decision-making regarding grants and all of USAID’s work. The Program Cycle is USAID’s particular framing and terminology for a common set of processes intended to achieve more effective development interventions and maximize impacts, and its policy ensures that evidence from monitoring, evaluation, and other sources informs decisions at all levels, including during strategic planning, project and activity design, and implementation. The Program Cycle acknowledges that development is not static and is rarely linear, and therefore stresses the need to assess and reassess through regular monitoring, evaluation, and learning. The different components of the Program Cycle thus mutually reinforce each other, with learning and adapting integrated throughout, and the Program Cycle encourages planning and project management innovations to increase the cost-effectiveness and lasting impact of development cooperation.
  • In 2013, USAID reformed its policy for awarding new contracts, elevating past performance to comprise 20 to 30 percent of the non-cost evaluation criteria. For assistance awards, USAID conducts a “risk assessment” to review an organization’s ability to meet the goals and objectives outlined by the Agency (see ADS 303, section 303.3.9). Contractor performance is guided by USAID operational policy ADS 302, section 302.3.8.7. As required by FAR Subpart 42.15, USAID must evaluate contractor performance using the Contractor Performance Assessment Reporting System (CPARS). Information in CPARS, while not available to the public, is available to Contracting Officers across the Government for use in making determinations of future awards.
  • USAID has also instituted a policy called the Acquisition and Assistance Review and Approval Document (AARAD) process, under which all contracts, grants, and cooperative agreements over $100 million are reviewed by the Administrator prior to being awarded, and all awards over $50 million are reviewed by the relevant Assistant Administrators. The AARAD review covers several key factors: Policy Relevance, Commitment to Sustainable Results, Feasibility, and Value for Money. This policy ensures that results, evidence, and long-term strategies are incorporated into all of USAID’s major programs, and it ensures senior-level accountability for USAID’s biggest programs. The policy is outlined in ADS 300. USAID guidance for competitive grants is also available online.
Score
9
U.S. Department of Education
  • ED’s five largest competitive grant programs in FY17 are: 1) TRIO ($950 million in FY17); 2) GEAR UP ($340 million in FY17); 3) Charter Schools Grants ($342 million in FY17); 4) Teacher and School Leader Incentive Program (TSL) ($200 million in FY17); and 5) Comprehensive Literacy Development Grants ($190 million in FY17).
  • ED uses evidence of effectiveness when making awards in all 5 of these largest competitive grant programs. 1) ED awards competitive priorities for grant applicants in Upward Bound (over $300 million in FY17) and Upward Bound Math & Science (approximately $48 million in FY17) who demonstrate they will use strategies with moderate evidence of effectiveness. 2) ED awards competitive priorities for grant applicants in GEAR UP state and partnership grants (approximately $70 million in new awards in FY17) who demonstrate they will use strategies with moderate evidence of effectiveness. 3) Under the Charter Schools Grants, ED makes awards through the Replication and Expansion of High-Quality Charter Schools program (approximately $57 million in new awards in FY17) to applicants based on their demonstrated success in improving student outcomes. 4) The TSL statute requires applicants to provide a description of the rationale for their project and describe how the proposed activities are evidence-based, and grantees are held to these standards in the implementation of the program. 5) Under the Comprehensive Literacy Development program, ED included an absolute priority directing States to ensure that their subgrants to districts support projects that are based on moderate evidence when such evidence is available.
  • The Evidence Planning Group (EPG) advises program offices on ways to incorporate evidence in grant programs, including use of evidence as an entry requirement or priority to encourage the use of practices where there is evidence of effectiveness, and/or an exit requirement or priority to build new evidence. As noted in response to Question #4, for the past several years, ED has reported publicly on Performance.gov on its Agency Priority Goal (APG) focused on directing an increasing percentage of funds available for new competitive awards towards projects that are supported by evidence. Funding decisions are not yet final for FY17 grant competitions, but ED is on track to meet its target for that APG.
  • In FY16, ED conducted 12 competitions that included the use of evidence beyond a logic model. In FY17 not all decisions have been finalized yet, but so far ED has announced the following 12 competitions that include the use of evidence beyond a logic model: 1) Strengthening Institutions Program; 2) TRIO-Upward Bound; 3) TRIO-Upward Bound Math & Science; 4) GEAR-UP; 5) National Professional Development; 6) Education Innovation and Research; 7) Supporting Effective Educator Development (2 cohorts); 8) Professional Development for Arts Educators; 9) Jacob K. Javits Gifted and Talented Students Education; 10) Stepping Up Technology Implementation; 11) Model Demonstration Projects to Improve Algebraic Reasoning for Students with Disabilities in Middle and High School; and 12) Magnet Schools Assistance Program.
  • The Education Innovation and Research (EIR) program ($100 million in FY17) provides competitive grants to create, develop, implement, replicate, or take to scale entrepreneurial, evidence-based, field-initiated innovations to improve student achievement and attainment for high-need students, and to rigorously evaluate such innovations. ED’s FY18 budget prioritizes funding evidence-based activities; for example, it includes $370 million for the EIR program, an increase of $270 million over the FY17 enacted level.
  • Additionally, ESEA requires that ED give priority to applicants demonstrating strong, moderate, or promising levels of evidence within the following seven competitive grant programs: Literacy Education for All, Results for the Nation; Supporting Effective Educator Development; School Leader Recruitment and Support; Statewide Family Engagement Centers; Promise Neighborhoods; Full-Service Community Schools; and Supporting High-Ability Learners and Learning.
Score
7
U.S. Dept. of Housing & Urban Development
  • In FY17, HUD’s major competitive grant programs are: 1) Continuum of Care Homeless Assistance ($2.0 billion); 2) Disaster Assistance/National Disaster Resilience Competition ($300 million); 3) Choice Neighborhoods Grants program ($125 million); 4) Service Coordinators program ($75 million); 5) Family Self-Sufficiency Program ($75 million); 6) Indian Community Development Block Grant Program ($60 million); and 7) Housing Counseling Assistance ($50 million).
  • Decisions regarding the design, funding, and implementation of all HUD competitive grant programs are evidence-based, as specified in funding criteria in HUD’s FY17 Notice of Funding Availability (NOFA). The “Achieving Results and Program Evaluation” factor (see p. 13) provides funding priority for applicants that demonstrate effective use of evidence in identifying or selecting the practices, strategies, or programs proposed in the application, and requires all grantees to cooperate in HUD-funded research and evaluation studies (see p. 14). Another factor, “Past Performance,” provides: “In evaluating applications for funding HUD will take into account an applicant’s past performance in managing funds, including, but not limited to….meeting performance targets as established in the grant agreement….” (see p. 13). The “Achieving Results and Program Evaluation” and “Past Performance” factors are two of five factors that together total 100 points, and 2 additional preference points are available for evidence that activities will support other HUD initiatives.
  • Competitive grants in the Continuum of Care program account for most HUD grant resources in FY17 and serve homeless populations by providing permanent supportive housing and rapid rehousing services. The FY16 NOFA allocated $1.6 billion using a 200-point scale that provides 27 preference points for development and use of a standard Homeless Management Information System and Point-in-Time counts that support performance measurement (see pp. 37–38); 40 preference points for providing a panel of System Performance Measures for local homeless outcomes (see p. 40); and 15 points for each of four target populations based on the extent to which grantees achieve, or make progress toward, reductions in homelessness as demonstrated by the relevant homeless outcome metrics (see pp. 41–42).
  • The National Disaster Resilience Competition, which was HUD’s second-largest competitive grant program in FY16, used evidence about disaster resilience, including benefit/cost analysis, to ensure that disaster funding improves communities’ ability to withstand and recover more quickly from future disasters, hazards, and shocks rather than simply recreating the same vulnerabilities. The tiered funding approach awarded implementation grants after evaluating evidence from the FY15 framing grants.
  • HUD partnered with the U.S. Department of Justice in issuing demonstration grants that use Pay for Success financing to reduce homelessness and prisoner recidivism by providing permanent supportive housing through a demonstration using the “housing first” model.
Score
8
U.S. Department of Labor
  • In FY17, the five largest competitive grant programs awarded were: 1) Senior Community Service Employment Program (SCSEP) National Grants ($140 million), 2) America’s Promise Job Driven Grant Program ($111 million), 3) YouthBuild ($80 million), 4) Reentry Projects ($66 million), and 5) ApprenticeshipUSA State Expansion Grants ($50.5 million).
  • All of these programs have been or will be involved in evaluations designed by DOL’s Chief Evaluation Office (CEO) and the relevant agencies, and they require or encourage (through language in the funding announcement and proposal review criteria) grantees to use evidence-based models or strategies in grant interventions and/or to test, by participating in an evaluation, new interventions that theory or research suggest are promising.
  • DOL includes rigorous evaluation requirements in all competitive grant programs, involving either: 1) full participation in a national evaluation as a condition of grant receipt; 2) an independent third-party local or grantee evaluation with priority incentives for rigorous designs (e.g., tiered funding, scoring priorities, bonus scoring for evidence-based interventions or multi-site rigorous tests); or 3) full participation in an evaluation as well as rigorous grantee (or local) evaluations. For example, SCSEP includes language that states, “By accepting grant funds, grantees agree to participate in the evaluation, should they be selected, including making records on participants, employers, and funding available and providing access to program operating personnel and participants, as specified by the evaluator(s) under the direction of ETA, including after the period of operation.”
  • The America’s Promise grant program aims to build partnerships focused on key priorities, including the use of evidence-based design. In reviewing grant proposals, 54 points (out of 100) were given for “project design” (an additional 10 points were given for the expected outcomes and outputs section). The project design section of the funding announcement outlined evidence-based strategies, encouraged their use as well as evaluation of promising strategies to build evidence, and provided links to resources to make it easier for bidders to identify effective strategies and support improved outcomes. For instance, the funding announcement stated that, “sector strategies can increase the employability, employment, earnings, and outcomes of job seekers, and at the same time benefit employers through improved worker productivity, job retention, and enhanced profitability. For example, studies randomly assigning people to job training programs with sector partnerships found that participants were employed at a higher rate with higher earnings (an additional $4,500 over 24 months) than those who went through other employment and training programs. For applicants that already have sector strategies in place, we are interested in seeing data and demonstration of strong outcomes for job seekers and employers alike that are outlined later in this document…” Additionally, the funding announcement required participation in a national evaluation as a condition of award.
  • The YouthBuild funding announcement required applicants to demonstrate how their project design is informed by the existing evidence base on social programs serving disadvantaged youth, and in particular on workforce development programs for disadvantaged youth.
  • The ApprenticeshipUSA grant program strives to meet DOL’s goal to double and further diversify Registered Apprenticeships across the country. The grant funding announcement encourages applicants to “use program models with demonstrated evidence of success in serving the targeted population(s), especially models shown by rigorous program evaluations to have positive impacts on participants’ employment and earnings outcomes.” Further, allowable activities include program administration to improve program efficiency, program quality and outcome measurement such as project management, data collection and grant reporting, and grant monitoring and evaluation.
  • The Reentry Projects grant program requires applicants to propose evidence-based and evidence-informed interventions, new interventions that theory or research suggests are promising, or a combination of both, that lead to increased employment outcomes for their target populations, and applicants must frame their goals and objectives to address this issue; applicants are able to select and implement different program services and/or features of program models. The grant funding announcement includes examples of previous studies and evaluations that DOL has conducted on reentry programs, as well as other evidence-based and promising practices, and applicants were encouraged to review these resources prior to designing their interventions.
  • The TechHire Partnership grants rapidly train workers for, and connect them to, well-paying, middle- and high-skilled, and high-growth jobs across a diversity of H-1B industries. The program used a tiered-evidence framework in which evidence-based design was a requirement of grant award; for grantees that requested between $4 million and $5 million, applications were assessed by a panel of experts along a continuum of innovation and evidence, ranging from strong to moderate to preliminary. Grantees receiving more than $4 million must also plan to replicate, at multiple sites and/or with the targeted and other populations, strategies that have been shown by prior research to have evidence of positive impacts on education and/or employment outcomes (a sketch of this tier gate follows below).
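As a rough sketch of how such a tiered-evidence gate can be expressed, the fragment below encodes the rules described above: evidence-based design is required of every award, and requests above $4 million face an evidence-continuum review plus a replication requirement. The function and field names are hypothetical, and the encoding is a simplification of the actual review process.

```python
# Sketch of the TechHire-style tiered-evidence gate described above.
# This is a simplified illustration; names and structure are hypothetical.

EVIDENCE_CONTINUUM = {"strong", "moderate", "preliminary"}

def meets_tier_requirements(request_millions: float,
                            evidence_based_design: bool,
                            evidence_level: str,
                            plans_replication: bool) -> bool:
    """Check an application against the tiered-evidence rules sketched above."""
    if not evidence_based_design:
        return False  # evidence-based design is a requirement of any award
    if request_millions > 4.0:
        # Larger requests: expert-panel placement on the innovation/evidence
        # continuum, plus a plan to replicate previously evaluated strategies.
        return evidence_level in EVIDENCE_CONTINUUM and plans_replication
    return True

print(meets_tier_requirements(4.5, True, "moderate", True))      # True
print(meets_tier_requirements(4.5, True, "preliminary", False))  # False
```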
