2017 Federal Index


U.S. Agency for International Development

Score
8
Leadership

Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY17? (Example: Chief Evaluation Officer)

  • USAID’s Office of Learning, Evaluation and Research (LER) in the Bureau for Policy, Planning, and Learning (PPL) provides guidance, tools, and technical assistance to USAID staff and partners to support monitoring, evaluation, and learning practices; many of these resources are available online. The LER Director oversaw approximately 20 staff and an estimated $17 million budget in 2017.
  • LER, bureaus, and independent offices hold several contracts that USAID missions and offices can use for building staff capacity in monitoring, evaluation and learning, and for commissioning evaluations and monitoring services directly. For example, LER manages the Monitoring and Evaluation Services Indefinite Delivery Indefinite Quantity (EVAL-ME IDIQ) contract, which allows USAID missions and Washington Offices, using their own funds, to competitively bid statements of work among 14 pre-approved companies that have been selected for their monitoring and evaluation capabilities, shortening and simplifying the process for contracting an independent evaluation team. LER also manages a classroom training program in monitoring and evaluation for USAID staff.
  • The LER Director participates in the USAID Administrator’s Leadership Council (ALC), a senior-level, biweekly meeting chaired by the USAID Administrator and attended by Assistant Administrators and select Agency Senior Staff, when the agenda includes issues related to evaluation. The LER Director also informs policy decisions across the Agency by providing input into working groups and reviewing statements, draft memos, and other policy products.
  • One of LER’s primary objectives is to strengthen USAID’s capacity in the fields of program monitoring, evaluation, and learning. For example, through a contract commissioned by LER to expand monitoring and evaluation capacity at USAID, individual USAID Offices and Missions can access experts for short-term assistance or longer-term fellows with expertise in monitoring, evaluation, learning, and project design. Fellows work with a specific mission or office for six months to two years. Another contract managed by LER provides organizational learning support, including helping USAID offices develop learning agendas and use monitoring and evaluation for learning and program adaptation. To build staff capacity in designing or commissioning impact evaluations funded by missions or offices, LER has hosted impact evaluation clinics that provide USAID staff with tools, resources, and hands-on support to design an impact evaluation for a future program activity. In addition to providing general capacity-building services in the form of training, clinics, technical assistance, and fellowships, LER staff occasionally manage evaluations directly or participate on evaluation teams, whether the evaluations are funded by LER or by other parts of the Agency. LER also coordinates several cross-agency working groups organized to support learning champions and monitoring and evaluation specialists throughout the Agency.
Score
8
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and research/learning agenda(s) and did it publicly release the findings of all completed evaluations in FY17?

  • USAID has an agency-wide Evaluation Policy published in 2011, which was updated in October 2016 to reflect revisions made to USAID’s Automated Directives System (ADS) Chapter 201: Program Cycle Operational Policy, released in September 2016. The policy updates changed evaluation requirements to simplify implementation and increase the breadth of evaluation coverage. The updates also seek to strengthen evaluation dissemination and utilization. The agency released a report in 2016 to mark the five-year anniversary of the policy.
  • USAID field missions are required to have an evaluation plan, and all USAID missions and offices report internally each year on completed, ongoing, and planned evaluations, including evaluations planned to start anytime in the next three fiscal years.
  • All Washington Bureaus may develop annual evaluation action plans that review evaluation quality and use within the Bureau and identify challenges and priorities for the year ahead, including support to Missions. LER works with bureau M&E points of contact to review implementation of these action plans quarterly and provides support as appropriate and feasible. LER uses the evaluation action plans as a source for Agency-wide sharing of successes and challenges in improving evaluation quality and use.
  • Given USAID’s decentralized structure, individual programs, offices, bureaus, and missions may develop learning agendas to guide their research and evaluation efforts. USAID’s current learning agenda efforts are decentralized and vary in focus, centering on regions, technical areas (e.g., democracy and governance, health systems, and food security), or cross-cutting efforts. In March 2017, LER published a report titled “Learning Agenda Landscape Analysis,” which summarizes 19 learning agendas across USAID and compiles promising practices for developing and using learning agendas. Learning agendas enable USAID to identify knowledge gaps and determine how monitoring, evaluation, field research, and other learning activities can be designed to fill those gaps, thereby generating evidence that, when coupled with adaptive management practices, improves decision-making and facilitates continuous organizational improvement.
  • LER is in the process of developing a learning agenda to answer a few priority questions on how Program Cycle policy requirements are being perceived and implemented across the Agency. The answers to those questions will help USAID better target capacity building support to staff and partners for more effective programs and may inform future updates to the policy.
  • All final USAID evaluation reports are available on the Development Experience Clearinghouse, except for approximately five percent of evaluations completed each year that are not made public due to principled exceptions to the presumption in favor of openness, guided by OMB Bulletin 12-01, Guidance on Collection of U.S. Foreign Assistance Data. For FY2015 and FY2016, USAID created infographics showing where evaluations took place and in which sectors; these graphics also include short narratives that describe findings from selected evaluations and how that information informed decision-making.
  • Beginning in 2016, USAID’s Office of Policy, within PPL, began conducting assessments of the implementation of the Agency’s suite of development policies to understand how a policy has impacted Agency programming and processes. So far two assessments have been completed, examining implementation of the Gender Equality and Female Empowerment Policy and the Democracy, Human Rights, and Governance Strategy; and two more policies are undergoing assessment: the Development Response to Violent Extremism and Insurgency Policy and the Youth in Development Policy.
  • Since September 2016, USAID’s multi-year Country Development Cooperation Strategies have required a learning plan that outlines how missions will incorporate learning into their programming, including activities such as regular portfolio reviews, evaluation recommendation tracking and dissemination plans, and other analytic processes to better understand the dynamics of their programs and country contexts. In addition to mission strategic plans, all projects and activities are now also required to have integrated monitoring, evaluation, and learning plans.
Score
10
Resources

Did the agency invest at least 1% of program funds in evaluations in FY17? (Examples: Impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance; and capacity-building)

  • In FY 2016, USAID missions and offices reported completing 145 evaluations with resources totaling approximately $49 million, and managing another 323 ongoing evaluations, many of which span more than one year, with total ongoing budgets estimated to reach $201 million. Overall spending on evaluations completed or ongoing in FY16 ($250 million) represents about 1.1% of USAID’s $23 billion FY16 program budget (see the illustrative calculation after this list).
  • This amount does not include the budget for the Office of Learning, Evaluation and Research (LER), which primarily focuses on monitoring, evaluation, and learning capacity building and technical assistance ($16 million in FY 2016); the investment in the Demographic and Health Surveys (DHS) ($189 million total in FY13-FY18); or surveys funded by other sector programs that often make up some of the underlying data used in many evaluations.
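Illustrative calculation (a rough check using only the rounded totals cited in the bullets above, not official USAID budget figures):

\[
\frac{\$49\text{M completed} + \$201\text{M ongoing}}{\$23{,}000\text{M FY16 program budget}} = \frac{\$250\text{M}}{\$23{,}000\text{M}} \approx 0.011 \approx 1.1\%
\]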
Score
8
Performance Management / Continuous Improvement

Did the agency implement a performance management system with clear and prioritized outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY17? (Example: Performance stat systems)

  • USAID partners with the U.S. Department of State to jointly develop and implement clear strategic goals and objectives. Indicators measuring progress on strategic goals and objectives from across the Agency are collected, in part, through the Performance Plan and Report (PPR) and reported through the Annual Performance Report (APR). In FY2016, USAID and the U.S. Department of State redesigned their annual reporting system, FACTS Info, and related processes to improve data collection and make the data easier to use for continuous improvement.
  • Angelique M. Crumbly, USAID’s Performance Improvement Officer (PIO), leads Agency efforts to use data for decision-making and to improve performance and operational efficiency and effectiveness. The PIO coordinates tracking of progress on Cross-Agency Priority (CAP) goals and Agency Priority Goals (APGs); leverages performance management reviews to conduct deep dives into evidence; and oversees business process reviews and other assessments to ensure that the Agency achieves its mission and goals more efficiently and effectively.
  • USAID’s strategic plan, annual performance plan and report, and other performance reports are publicly available. USAID and the U.S. Department of State are developing the Joint Strategic Plan for FY18-FY22.
  • USAID is actively engaged in the President’s Management Council, which is developing the next generation of CAP goals. USAID reports progress on APG and CAP goals on www.performance.gov. These goals help the Agency improve performance, efficiency, and effectiveness while holding it accountable to the public. USAID assesses progress and challenges toward meeting the goals quarterly during data-driven reviews with Agency leadership. USAID will develop new APGs as part of the FY 2018-2022 Joint Strategic Plan.
  • USAID field missions develop Country Development Cooperation Strategies (CDCS) with clear goals and objectives and a performance management plan that identifies expected results, performance indicators to measure those results, plans for data collection and analysis, and periodic review of performance measures to use data and evidence to adapt programs for improved outcomes.
  • In addition to measuring program performance, USAID measures operational performance to ensure that the Agency achieves its development objectives and aligns resources with priorities.
Score
9
Data

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) federal, state, and local programs in FY17? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies)

  • USAID has an open data policy that, in addition to setting requirements for how USAID data are tagged, submitted, and updated, established the Development Data Library (DDL) as the Agency’s repository of USAID-funded, machine-readable data created or collected by the Agency and its implementing partners.
  • In FY16, USAID began developing the Development Information Solution (DIS), a suite of IT tools designed to harness the richness of USAID’s data across its offices and operating units, improve efficiencies across the entire Program Cycle, and support the ability to tell one cohesive story about how well USAID’s activities are achieving their goals.
  • USAID is committed to advancing the many efforts underway to increase the transparency of funding and programming, with the understanding that quality information and reporting can help stakeholders manage and monitor aid resources more effectively. In November 2011, the United States became a signatory to the International Aid Transparency Initiative (IATI), a voluntary, multi-stakeholder initiative that created a data standard for publishing foreign assistance spending data and enabling comparison across publishers. In July 2015, USAID produced a cost management plan (CMP) to increase the number of fields it reports to IATI and to institutionalize the process by which the Agency reports its data.
  • USAID also created and maintains the Foreign Aid Explorer, a site that reports comprehensively on U.S. government foreign assistance from 1946 to the present.
  • The USAID GeoCenter uses data and analytics to improve the effectiveness of USAID’s development programs by geographically assessing where resources will maximize impact. The GeoCenter team works directly with field missions and Washington-based bureaus to integrate geographic analysis into the strategic planning, design, monitoring, and evaluation of USAID’s development programs. The GeoCenter also provides important data-centered training to USAID staff.
  • USAID’s Economic Analysis and Data Services (EADS) unit maintains a public website that shares data and data analysis tools, including the aforementioned Foreign Aid Explorer, and provides USAID staff, partners, and the public with analytical products and a platform for querying data. EADS also offers a training course available to all USAID staff, Finding and Using Development Data, focused on helping staff make the best use of development data in their specific fields.
  • USAID uses data and evidence to inform policy formulation, strategic planning, project design, project management and adaptation, program monitoring and evaluation, and learning what works, through a framework called the Program Cycle.
  • USAID’s Monitoring Country Progress (MCP) system is an empirical, analytical system that tracks and analyzes country progress to facilitate country strategic planning, including country graduation from USG foreign assistance programs.
  • USAID also publishes spending data alongside program results on the Dollars to Results page of the USAID website. Dollars to Results provides illustrative information on USAID’s impact around the world by linking annual spending to results.
  • To help inform the U.S. Government’s aid transparency agenda, USAID conducted three aid transparency country pilot studies in Zambia (May 2014), Ghana (June 2014), and Bangladesh (September 2014). The final report summarizes findings from the three pilots and USAID is implementing many of the recommendations. For example, USAID tested the streamlining of direct IATI data reporting into Bangladesh’s Aid Information Management Systems (AIMS). This effort led to improved data quality, more comprehensive reporting and a decrease in the reporting burden for the mission.
Score
8
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY17? (Example: What Works Clearinghouses)

  • USAID has a scientific research policy that sets out quality standards for research. USAID’s Program Cycle policy includes specific evidence standards for strategic planning, project design, monitoring, and evaluation. For example, USAID policy requires evidence and data to assess the development context, challenges, and opportunities in all of USAID’s country strategies. Similarly, all USAID projects must include a detailed analytical phase in the Project Appraisal Document.
  • USAID does most of its Agency-wide engagement around evidence and frameworks for “what works” through its board membership and funding (along with other donors) of the International Initiative for Impact Evaluation (3ie), which funds impact evaluations and systematic reviews that generate evidence on what works in development programs and why. Rather than creating a separate “what works” clearinghouse, USAID has chosen to work with 3ie and other development partners to support 3ie’s database of impact evaluations relevant to development topics (over 2,500 entries to date), evidence gap maps, and systematic reviews that pull the most rigorous evidence and data from across donors. 3ie also houses a collection of policy briefs that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence. Various USAID bureaus and operating units have funded 3ie to produce evidence gap maps on topics such as science, technology, innovation, and partnership; state-society relations; and productive safety nets.
  • USAID technical bureaus provide guidance based on evidence of “what works” by sector that applies to all relevant Agency programs. USAID’s Bureau for Democracy, Conflict and Humanitarian Assistance (DCHA), for example, includes the Center of Excellence on Democracy, Human Rights, and Governance, which publishes evidence-based standards for what works in this field. The DRG Center convenes leading scholars from a range of fields to work with USAID to study, analyze, and assess the effectiveness of its initiatives and programs in DRG, using these data to shape programming. In addition, USAID established the Evaluating Democracy and Governance Effectiveness (EDGE) Initiative to supply and apply sophisticated tools to measure the impact of democracy, human rights, and governance work, and to infuse evidence-based programmatic decision-making throughout the DRG portfolio. In another example, USAID’s Global Health Bureau has a strategic framework whose Annex 1 details specific evidence-based strategies, targets, and approaches for achieving goals within each technical area under the health priorities.
  • Several USAID Bureaus also synthesize all the evaluations relevant to a specific sector to summarize key findings and identify gaps in knowledge, which then inform sector learning agendas. For example, in March 2016, the Bureau for Food Security (BFS) published a synthesis report summarizing findings from 196 evaluations of Feed the Future projects that focused on the six themes outlined in the Feed the Future Learning Agenda. Across the themes, the synthesis illuminated trends and patterns, which BFS shared with relevant staff and stakeholders engaged in designing new projects or updating sector strategies and policies. The synthesis also identified gaps where more evaluation research is needed, helping to inform the design of future evaluations that can contribute to the body of knowledge on food security and improve the design and management of interventions in the agriculture, resilience, and nutrition sectors by specifically addressing Learning Agenda questions. The synthesis helped inform the Feed the Future Global Performance Evaluation and the Global Food Security Strategy.
Score
9
Innovation

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY17? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with strong evaluation requirements)

  • USAID established the U.S. Global Development Lab (the Lab) in 2014 to increase the application of science, technology, innovation, and partnerships to extend the Agency’s development impact in helping to end extreme poverty. The Lab does this by working closely with colleagues across the Agency and by bringing together a diverse set of partners to discover, test, and scale breakthrough innovations that solve development challenges faster, more cheaply, and more sustainably. The Lab also houses the Monitoring, Evaluation, Research and Learning Innovations (MERLIN) program, which sources, co-designs, implements, and tests solutions that innovate on traditional approaches to monitoring, evaluation, research, and learning.
  • In the past six years, through the Global Development Lab, USAID and its partners have launched nine Grand Challenges for Development (GCDs): Saving Lives at Birth (2011), All Children Reading (2011), Powering Agriculture: An Energy Grand Challenge for Development (2012), Making All Voices Count (2012), Securing Water for Food (2013), Fighting Ebola (2015), Combating Zika and Future Threats (2016), Scaling Off-Grid Energy (2016), and Ensuring Effective Health Supply Chains (2017). GCDs are robust partnerships that leverage each partner’s strengths to engage new solvers through incentive prizes, challenge grant funding, and crowdsourcing; capture learning; provide acceleration support services; and generate awareness, all in order to identify the most promising solutions, test them, and scale those that are proven to work.
  • Development Innovation Ventures (DIV) is USAID’s tiered, evidence-based innovation fund. It awards grant financing to innovative solutions to development challenges on the basis of rigorous evidence of impact, cost-effectiveness, and potential to scale via the public and/or private sectors. It awards funding across three stages, ranging from under $100,000 for piloting early-stage innovations to as much as $15 million for scaling evidence-backed innovations. The DIV model is designed to find breakthrough solutions; minimize risk and maximize impact through staged financing; rigorously test impacts and cost-effectiveness; and scale proven solutions. To date, DIV-supported innovations have impacted 12.6 million beneficiaries and mobilized external financing of $446 million, a nearly 5:1 leverage ratio on DIV’s approximately $90 million in grants awarded. Of the more than 90 DIV grants that have been completed, over 90% of grantees collected evidence of impact, and more than 60% of innovations had an attributable, causal, positive impact on development outcomes. (In July 2017, USAID announced that, until further notice, it would no longer accept applications for new DIV awards.)
  • USAID also supports innovation through the external Global Innovation Fund (GIF), a private fund co-founded by USAID and based on the DIV model. Like DIV, GIF invests in social innovations to improve the lives of millions of people in the developing world, but, as a private fund, GIF is also able to provide debt and equity financing.
  • USAID is working with the Australian Department of Foreign Affairs and Trade (Australian Aid), the Korea International Cooperation Agency (KOICA), and the Bill & Melinda Gates Foundation to fund and promote the Global Innovation Exchange (the Exchange), a free, online platform that connects social entrepreneurs with the funding and other resources they need to be impactful. Already home to nearly $123.8 million in funding opportunities, more than 5,500 innovations, and more than 18,500 users, the Exchange is rapidly expanding, reaching nearly every corner of the world.
  • The Innovation Design and Advisory Team (iDesign) helps advance USAID’s culture of innovation and intrapreneurship through testing, application, and mainstreaming of innovative design and problem-solving processes. iDesign offers customized advisory services and training programs to help Agency offices determine when, how, and with whom to innovate in programs to achieve more cost-effective impact. It applies the latest models for open and collaborative program design, human-centered design and systems thinking, pay-for-results tools, venture fund models, and lean and adaptive management principles.
  • USAID’s Applied Innovation team works with programs and implementing partners, including contractors and grantees, to capture learning and accelerate innovations supported by USAID. The Applied Innovation team is working to expand innovation adoption across USAID’s programming, and test the theory that innovations can enhance development impact, save time and resources, and improve programmatic efficiencies.
  • USAID’s Higher Education Solutions Network (HESN) program is a partnership with seven competitively awarded universities working with partners worldwide. Leveraging nearly equal investments from each institution, the universities have established eight Development Labs that have built a global research network of 685 partners from 69 countries. Through HESN, USAID has been able to harness the ingenuity of students and faculty to create or test over 300 innovations, which have helped USAID missions reach their development goals and improved the lives of 2.3 million beneficiaries.
  • USAID’s Research and Development (R&D) Hub for Monitoring and Evaluation (M&E) helps agency staff discover when emerging M&E approaches may be appropriate for their particular learning needs, working within the space of complexity-aware M&E, context monitoring, monitoring without indicators, and M&E for adaptive management. While champions in USAID and among implementing partners have been experimenting with emerging approaches, evidence-based, practical resources on how to apply these approaches have not been systematically created and shared. The R&D Hub for M&E, housed in LER, plays the role of connector by linking champions and by conducting research on and documenting emerging M&E approaches that have been helpful in various circumstances. The R&D Hub collects data on the use of innovative approaches to M&E within the Agency; the U.S. Global Development Lab’s MERLIN program is one example of a partner in this learning.
Score
8
Use of Evidence in 5 Largest Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY17? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)

  • USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the Agency’s work. USAID has rebuilt its planning, monitoring, and evaluation framework to produce and use evidence through the introduction of a new Program Cycle, which systematizes the use of evidence across decision-making for grants and all of USAID’s work. The Program Cycle is USAID’s particular framing and terminology for a common set of processes intended to achieve more effective development interventions and maximize impacts; its policy ensures that evidence from monitoring, evaluation, and other sources informs decisions at all levels, including during strategic planning and project and activity design and implementation. The Program Cycle acknowledges that development is not static and is rarely linear, and therefore stresses the need to assess and reassess through regular monitoring, evaluation, and learning. Thus the different components of the Program Cycle mutually reinforce each other, with learning and adapting integrated throughout. The Program Cycle encourages planning and project management innovations to increase the cost-effectiveness and lasting impact of development cooperation.
  • In 2013, USAID reformed its policy for awarding new contracts to elevate past performance to comprise 20 to 30 percent of the non-cost evaluation criteria. For assistance, USAID does a “risk assessment” to review an organization’s ability to meet the goals and objectives outlined by the Agency. This can be found in ADS 303, section 303.3.9. Contractor performance is guided by USAID operational policy ADS 302, section 302.3.8.7. As required in FAR Subpart 42.15, USAID must evaluate contractor performance using the Contractor Performance Assessment Reporting System (CPARS). Information in CPARS, while not available to the public, is available for Contracting Officers across the Government to use in making determinations of future awards.
  • USAID has also instituted a policy called the Acquisition and Assistance Review and Approval Document (AARAD) process, under which all contracts, grants, and cooperative agreements over $100 million are reviewed by the Administrator prior to award and all awards over $50 million are reviewed by the relevant Assistant Administrators. The AARAD review considers several key factors, including Policy Relevance, Commitment to Sustainable Results, Feasibility, and Value for Money. This policy ensures that results, evidence, and long-term strategies are incorporated into all of USAID’s major programs. In addition, it ensures senior-level accountability for USAID’s biggest programs. This policy is outlined in ADS 300. USAID guidance for competitive grants is also available online.
Score
N/A
Use of Evidence in 5 Largest Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest non-competitive grant programs in FY17? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)

  • USAID does not administer non-competitive grant programs.
  • USAID does contribute funding to multilateral institutions known as Public International Organizations (PIOs): international organizations composed principally of countries, including the World Bank, the UN, and multi-donor funds such as the Global Fund. In these cases, USAID funds are part of overall U.S. Government funding for the partner institutions and become subject to the monitoring and evaluation requirements of the organization that receives them. For example, the Global Fund has a performance-based funding system, which bases funding decisions on a transparent assessment of results against time-bound targets. USAID’s ADS Chapter 308 provides more information on how PIOs are defined and includes guidance on the due diligence required prior to awarding grants to PIOs.
Score
7
Repurpose for Results

In FY17, did the agency shift funds away from or within any practice, program, or policy that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; proposing the elimination of ineffective programs through annual budget requests)

  • USAID’s updated operational policy for planning and implementing country programs has incorporated a set of tools and practices called Collaborating, Learning, and Adapting (CLA), which include designing adaptable activities that build in feedback loops; using flexible implementing mechanisms; and adopting a management approach that includes consulting with partners about how implementation is evolving and what changes need to be made. Through the Program Cycle, USAID encourages managing projects and activities adaptively, responding to rigorous data and evidence and shifting design and/or implementation accordingly.
  • USAID uses rigorous evaluations to maximize its investments. A recent independent study found that 71 percent of USAID evaluations have been used to modify and/or design USAID projects. Below are a few recent examples in which USAID shifted funds and/or adjusted programming based on performance:
    • Tunisia: A USAID enterprise reform project in Tunisia used adaptive management to build feedback loops and flexibility into its project design, learning from monitoring data and adapting programming based on the evidence. The project’s components regularly collaborate in order to follow leads and identify opportunities. As a result, the project more than doubled its initial targets and accounted for approximately 10% of the total net new jobs created in the Tunisian economy.
    • Cambodia: Findings from a mid-term evaluation of USAID activities under a public health project are informing the design of a new project to more efficiently integrate activities and enable donors and implementing partners to collaborate more easily. Findings are also contributing to the phasing out of some poorly performing project components.
    • Peru: In response to a 2016 evaluation of a conflict mitigation activity, which found that the intervention did not address the root causes of conflict, USAID incorporated key changes into the design and award process for future programming, requiring new activities to address the root causes of conflict in their particular contexts.
    • Vietnam: Based on the findings of a 2016 mid-term evaluation and subsequent stakeholder consultations, USAID’s Vietnam Forest and Delta Program reduced its scope and focused more on local needs, resulting in more efficient use of resources and positive feedback from government officials and implementing partners.
    • Ethiopia: A mid-term impact evaluation of the Feed the Future (FtF) portfolio in Ethiopia found that more than half of the communities in the project’s zone of influence had not been sufficiently reached by programming. Based on these findings, USAID is already in the process of reducing the number of interventions and their geographic coverage for the final years of the project in Ethiopia. USAID plans to assess and learn from the re-focusing effort to inform programming in other countries.
  • USAID’s Securing Water for Food: A Grand Challenge for Development (SWFF) selected the highest potential water-for-food innovations and is providing grant funds and ongoing assistance to support business development. SWFF starts as a competition, but the winners must continually show results to receive a new tranche of funding. To move forward, grantees must achieve technical and financial milestones, such as increased crop yields and total product sales. Of the first 15 awardees, nine received Year 2 funding; six did not, because they did not meet the target number of end-users/customers in a cost-effective way and because their model was not deemed sustainable without USAID funding. By using milestone-based funding, SWFF has helped over one million farmers and other customers grow more than 3,000 tons of food and save almost 2 billion liters of water. In addition, SWFF innovators have formed more than 125 partnerships and secured more than $10 million in leveraged funding.
  • DIV, another example of USAID directing funds based on evidence of effectiveness, is mentioned earlier in the Innovation section.
