2017 Federal Index

Millennium Challenge Corporation


Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY17? (Example: Chief Evaluation Officer)

  • MCC’s Monitoring and Evaluation (M&E) Division, which falls within the Department of Policy and Evaluation (DPE), has a staff of 23 and an estimated FY17 budget of $21.2 million in due diligence (DD) funds. These resources are used to directly measure high-level outcomes and impacts in order to assess the attributable effects of MCC’s programs and activities. Departments throughout the agency have a total of $71.9 million in DD funds in FY17. The M&E Managing Director and the Vice President for the Department of Policy and Evaluation have the authority to execute M&E’s budget and inform policy decisions affecting independent evaluations. The M&E Managing Director participates in technical reviews of proposed investments and in regular monitoring meetings in order to inform policy and investment decisions. The Vice President sits on the agency’s Investment Management Committee, which examines the evidence base for each investment before it is approved by the MCC Board and exercises regular oversight of the compact (i.e., grant program) development process. MCC also recently appointed a new Chief Economist in DPE to oversee and strengthen the economic evidence base used for program development, including economic growth diagnostics, beneficiary analyses, and cost-benefit analyses.
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and research/learning agenda(s) and did it publicly release the findings of all completed evaluations in FY17?

  • In March 2017, MCC published a revised Policy for Monitoring and Evaluation that further codifies MCC’s practice of ensuring that all programs develop and follow comprehensive Monitoring & Evaluation (M&E) plans that adhere to MCC’s standards. The new policy also ensures MCC’s alignment with the recently passed Foreign Aid Transparency and Accountability Act of 2016. The monitoring component of each M&E Plan lays out the methodology and process for assessing progress towards Compact (i.e., grant) objectives. It identifies indicators, establishes performance targets, and details the data collection and reporting plan used to track progress against targets on a quarterly basis. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Pursuant to MCC’s M&E policy, every project must undergo an independent evaluation to assess MCC’s impact. Once evaluation reports are finalized, they are published on the MCC Evaluation Catalog. To date, 78 interim and final reports have been publicly released, with several additional evaluations expected to be completed and released in the coming months. MCC also produces periodic reports for internal and external consumption on results and learning, and holds agency-wide sessions that help translate evaluation results into lessons learned for future compact development.
  • MCC extended its efforts to facilitate open access to its de-identified evaluation data by expanding the number of documented evaluations to 120 studies and by documenting the procedures it has established to ensure both confidentiality protections and the usefulness of these public data. These procedures ensure a consistent approach, across all of MCC’s evaluation data, to balancing the dual objectives of protecting survey respondents’ confidentiality and maintaining acceptable levels of comparability to the original data. MCC posted a public access version of its Microdata Documentation and De-Identification Guidelines on its website in February 2017 to provide guidance to those preparing and using its public access evaluation data. In March 2017, MCC published a paper that details these processes and discusses key lessons learned in seeking a consistent, optimal balance between benefits and risks when releasing evaluation data to the public.
  • For fiscal year 2017, MCC has pursued a robust research and learning agenda around better use of its data and evidence for programmatic impact. Broadly, the Department of Policy and Evaluation is focused on learning around MCC’s promotion of policy and institutional reforms (PIR). This includes analytical efforts around cost-benefit analysis of PIR, implementation modalities of PIR, and the sustainability of PIR as a result of MCC compacts. After a sustained learning agenda around its evaluations, this year the M&E Division is focused on the use of its monitoring data for real-time learning within compacts. The division is seeking to better understand how and when monitoring data are used and how those data can feed back into compact decisions. Finally, the newly launched Star Report is an agency-wide learning effort to systematically capture how and why compacts achieved certain results. The Star Report includes formal learning inflection points at each stage of the compact – development, implementation, and closure – to promote and disseminate learning and evidence for compacts in implementation and for future compacts.
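The confidentiality-versus-usefulness tradeoff in releasing de-identified evaluation data can be illustrated with a minimal sketch. Everything below — the field names, the top-code threshold, and the age-banding rule — is hypothetical, and is not drawn from MCC’s actual Microdata Documentation and De-Identification Guidelines.

```python
# Hypothetical de-identification sketch: drop direct identifiers, then
# reduce the re-identification risk of quasi-identifiers while keeping
# the record analytically useful. Fields and rules are illustrative only.

DIRECT_IDENTIFIERS = {"name", "phone", "gps_lat", "gps_lon"}
INCOME_TOP_CODE = 50_000  # hypothetical top-code for an extreme-valued variable

def deidentify(record):
    """Return a public-use copy of one survey record."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Top-code a sensitive continuous variable so outliers cannot be singled out.
    if out.get("income", 0) > INCOME_TOP_CODE:
        out["income"] = INCOME_TOP_CODE
    # Coarsen a quasi-identifier: exact age becomes a 5-year band.
    if "age" in out:
        lo = out["age"] // 5 * 5
        out["age_band"] = f"{lo}-{lo + 4}"
        del out["age"]
    return out

sample = {"name": "A. Respondent", "phone": "555-0100",
          "age": 37, "income": 72_000, "district": "D-12"}
print(deidentify(sample))
```

Real public-use releases typically involve further steps, such as suppressing small geographic cells and reviewing combinations of indirect identifiers, which is why a dedicated disclosure review process is needed.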

Did the agency invest at least 1% of program funds in evaluations in FY17? (Examples: Impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance, and capacity-building)

  • MCC’s FY17 investment in monitoring and evaluation to date is $15.1 million, or 5% of Compact spending for FY17 ($302.5 million). This exceeds FY16, when MCC invested over $14.4 million in M&E, roughly 3.4% of Compact spending for that year ($428.2 million).
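As a quick sanity check, the percentages cited above follow directly from the dollar figures given (a rough sketch in Python; figures in $ millions, taken from the text):

```python
# Verify the M&E share of Compact spending for FY17 and FY16 ($ millions).
me_fy17, compact_fy17 = 15.1, 302.5
me_fy16, compact_fy16 = 14.4, 428.2

share_fy17 = me_fy17 / compact_fy17
share_fy16 = me_fy16 / compact_fy16

print(f"FY17 M&E share: {share_fy17:.1%}")  # ~5.0%
print(f"FY16 M&E share: {share_fy16:.1%}")  # ~3.4%
```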
Performance Management / Continuous Improvement

Did the agency implement a performance management system with clear and prioritized outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY17? (Example: Performance stat systems)

  • MCC monitors progress towards compact results on a quarterly basis using performance indicators that are specified in each Compact’s M&E Plan. The M&E Plan specifies indicators at all levels (process, output, and outcome) so that progress towards final results can be tracked. Every quarter, each partner country submits an Indicator Tracking Table (ITT) that shows the actual performance of each indicator relative to the baseline level established before the activity began and the performance targets established in the M&E Plan. Key performance indicators and their accompanying data are updated by country every quarter and published online. MCC reviews these data every quarter to assess whether results are being achieved and integrates this information into project management decisions.
  • MCC also supports the creation of multidisciplinary “compact development teams” to manage the development and implementation of each Compact program. Teams usually include a coordinator; an economist; a private sector development specialist; a social inclusion and gender integration specialist; project-specific technical specialists; an M&E specialist; an environmental and social performance specialist; and legal, financial management, and procurement specialists. From the earliest stages, these teams develop project logics and M&E frameworks supported by data and evidence, and use them to inform the development of the projects within each Compact program. Teams meet frequently to gather evidence, discuss progress, make project design decisions, and solve problems. They are encouraged to use the lessons from completed evaluations to inform their work going forward.
  • MCC is in the process of implementing a new reporting system that will enhance MCC’s credibility around results, transparency, and accountability. The “Star Report” captures key information across the compact lifecycle and provides a single framework for results reporting. For each compact, systematic evidence will be collected on performance indicators, evaluation results, partnerships, sustainability efforts, and learning, among other elements; critically, this information will all be available in one report. Through this new reporting system, MCC will be able to provide better reporting of compact performance to key audiences, such as Congress, other development agencies, and the academic community. Each compact will have a Star Report published roughly three months after completion. The next MCC compact to close under this new system will be Cabo Verde in November 2017.
  • MCC hosts regular “colleges” in which MCC counterparts from partner countries are invited to a weeklong set of meetings and workshops to discuss best practices, strengthen collaboration, and improve strategies for effectively implementing projects. MCC held such an event in March 2017 focused on Monitoring & Evaluation and Economic Analysis. Thirty-nine M&E Directors and Economists were in attendance from 23 partner countries, in addition to MCC staff and colleagues representing other USG agencies. MCC also hosts “colleges” for MCC and MCA staff in areas of sector expertise at which M&E staff present results of recent evaluations in the sector as a learning opportunity.
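The quarterly indicator tracking described above amounts to comparing each indicator’s actual value against its baseline and target. The sketch below illustrates that comparison; the indicator names and figures are hypothetical, not drawn from any actual Indicator Tracking Table.

```python
# Hypothetical sketch of an Indicator Tracking Table (ITT) check:
# how far along the baseline-to-target distance has each indicator moved?

def percent_complete(baseline, target, actual):
    """Share of the baseline-to-target distance achieved so far."""
    if target == baseline:
        return 1.0 if actual >= target else 0.0
    return (actual - baseline) / (target - baseline)

itt = [
    # (indicator, baseline, target, latest actual) -- illustrative figures only
    ("Km of roads rehabilitated", 0, 120, 78),
    ("Farmers trained", 500, 5_000, 4_100),
]

for name, base, target, actual in itt:
    pct = percent_complete(base, target, actual)
    print(f"{name}: {pct:.0%} of target distance achieved")
```

A real ITT also tracks quarter-by-quarter targets, disaggregations, and data-quality notes; the point here is only the actual-versus-target comparison that drives quarterly review.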

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) federal, state, and local programs in FY17? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies)

  • MCC’s M&E Division oversees the publication of anonymized evaluation data to MCC’s public Evaluation Catalog. In the Catalog, partner countries, as well as the general public, can access the results of independent evaluations of MCC-funded projects and public use versions of the microdata underlying those evaluations. The M&E plans and tables of key performance indicators are available online by compact and by sector. All evaluation data are meticulously reviewed by MCC’s internal Disclosure Review Board prior to posting to ensure that respondents’ privacy is protected.
  • MCC’s Economic Analysis division publishes constraints analysis reports and interactive, downloadable Economic Rate of Return (ERR) spreadsheets that include the description of the project, including its economic rationale; the expected project impacts, including detailed cost and benefit estimates; and a tool allowing users to modify key assumptions and study the effects of those modifications on the project’s returns.
  • As part of its Data2x commitment, MCC and other donors are increasing the amount of gender data released and helping to improve international data transparency standards.
  • MCC is a founding partner of the Governance Data Alliance, a collaborative effort by governance data producers, consumers, and funders to improve the quality, availability, breadth, and use of governance data.
  • MCC has a partnership with the President’s Emergency Plan for AIDS Relief (PEPFAR) which is helping to increase the availability and quality of development-related data in selected countries. The Data Collaboratives for Local Impact program supports innovative and country-led approaches that promote evidence-based decision-making for programs and policies that address HIV/AIDS, global health, gender equality, and economic growth in sub-Saharan Africa. Data Collaboratives projects are strengthening the availability and use of data to improve lives and empower citizens to hold governments and donors more accountable for results. The program aligns with broader U.S. government efforts to maximize the effectiveness of U.S. foreign assistance and with the Global Data Partnership’s efforts to promote data collaboration to achieve the Sustainable Development Goals (SDGs).
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY17? (Example: What Works Clearinghouses)

  • MCC uses common, rigorous, evidence-based selection criteria to ensure objectivity in country selection for grant awards. To be eligible for selection, countries must first pass the MCC 2017 Scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. The criteria for passing the 2017 Scorecard are applied universally to all low- and lower-middle-income candidate countries. MCC’s Board of Directors then considers three key factors for selecting countries: 1) a country’s performance on the 2017 Scorecard; 2) the opportunity to reduce poverty and generate economic growth; and 3) availability of funds. An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report.
  • MCC’s model is based on a set of core principles deemed essential for development assistance to be effective – good governance, country ownership, focus on results, and transparency. In pursuing these, MCC has created a Principles into Practice series which describes how to make these principles operational. All of MCC’s evaluations are published on the MCC Evaluation Catalog. Associated data, upon which evaluations are based, are published when confidentiality concerns are adequately addressed.
  • MCC is undertaking internal research and analysis to understand where and how its published evaluations and datasets are utilized. This effort underscores MCC’s commitment to transparency and learning as MCC seeks to widen the understanding and use of the evidence it produces. The results of this analysis will guide future efforts on evidence-based learning. MCC will finalize baseline metrics around learning and evaluation utilization in September 2017 and then continue tracking global use of MCC knowledge products on a quarterly basis.
  • In FY17, MCC sought to strengthen its outreach and dissemination of results in more innovative ways. In July 2016, MCC held its first evidence workshop in El Salvador with more than 180 policymakers, practitioners, and researchers. The workshop – Closing the Gap: Strengthening the Ties between Evaluation and Policy – was an opportunity for MCC, the Government of El Salvador, and other partners to share evaluation results from education and investment climate projects and find ways to incorporate that knowledge into new programs and policies. As part of the workshop, participants committed to using the lessons learned to improve education, gender, and legal and regulatory policy to make the business climate more competitive and help ensure that better-educated students can find higher-paying jobs in El Salvador. MCC has worked closely with the implementing entity in El Salvador and the President’s Technical Secretariat to organize follow-up M&E trainings, scheduled for summer 2017.
  • To further bring attention to MCC’s evaluation and evidence focus, MCC launched an evaluation newsletter called Statistically Speaking in January 2017. This newsletter highlights recent evidence and learning from MCC’s programs with a special emphasis on how MCC’s evidence can offer practical policy insights for policymakers and development practitioners in the United States and in partner countries. It also seeks to familiarize a wider audience with the evidence and results of MCC’s investments.
  • Finally, MCC is developing an enhanced results framework that will better communicate the full picture of the impact of its programs and enrich programmatic learning. Currently in draft form, the framework will help MCC consolidate impacts across projects, compacts, and sectors to assess overall impact at an organizational level.

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY17? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with strong evaluation requirements)

  • MCC recently launched a new Millennium Efficiency Challenge to encourage innovation specifically in the compact development phase. The challenge is designed to tap into the extensive knowledge of MCC’s staff to identify efficiencies and innovative solutions that can shorten the compact development timeline while maintaining MCC’s rigorous quality standards and investment criteria. Winning ideas will be integrated into the compact development process starting in 2018 and could have lasting impact on MCC for years to come.
  • In September 2014, MCC’s Monitoring and Evaluation Division launched the agency’s first Open Data Challenge, a call to action for master’s and PhD students in economics, public policy, international development, and related fields interested in exploring how to use publicly available, MCC-financed primary data for policy-relevant analysis. The Challenge was intended to facilitate broader use of MCC’s U.S. taxpayer-funded data. Due to the success of the first Open Data Challenge, a second was launched in February 2016 to encourage innovative ideas and maximize the use of the data MCC finances for its independent evaluations. For the second Open Data Challenge, MCC announced $1,000 prizes for the best papers from students around the world whose work contributed significantly to agency learning and innovation. By August 31, 2017, MCC will announce up to six selected papers and begin planning for their public presentation at MCC.
  • In 2014, MCC launched an internal “Solutions Lab” that was designed to encourage innovation by engaging staff to come up with creative solutions to some of the biggest challenges MCC faces. To further encourage staff who pursue innovative ideas throughout the compact lifecycle, MCC launched the annual MCC Innovation Award. (MCC promotes agency-wide participation in its Solutions Lab through an internal intranet portal. MCC’s new Innovation Award is a part of the Agency’s Annual Awards Ceremony held each summer.) The Innovation Award recognizes individuals who demonstrate “exemplary” leadership integrating innovation in project design, project implementation, and/or systems functionality and efficiency. Selections for the Innovation Award are based on a demonstrated ability to lead and implement innovative strategies from project conception that foster sustained learning and collaboration and add value to MCC and/or country partnerships. Award nominees must meet at least one of the following criteria:
    • Integrating innovation in projects which rigorously assess and scale up pilot projects or new technologies tested by other development institutions, universities, research institutions, businesses, or non-profit organizations;
    • Implementing projects or initiatives which enhance MCC’s ability to effectively carry out its mission, to include introduction and adoption of new technologies in MCA countries, implementation of transformational IT projects, and/or improvement to needed system controls for enhanced functionality or compliance; or
    • Leading new types of collaborative partnerships to leverage MCC resources and bolster the development impact and sustainability of projects in MCA countries.
  • The PEPFAR/MCC Partnership also runs an innovation challenge. The Data Collaboratives for Local Impact (DCLI) Innovation Challenge identifies, supports, and involves country-based youth, developers, programmers, and solution providers through targeted challenge competitions that address specific programmatic challenges with data — for example, engaging with communities to lower their risk of contracting HIV/AIDS. In December 2016, the Data Lab also held a mapathon focused on Tanzanian districts where vulnerable adolescent girls and young women are susceptible to HIV infection. By improving maps, volunteers contributed directly to valuable datasets that can inform the understanding of HIV/AIDS prevalence in priority districts, as well as health facility and antiretroviral drug accessibility. DCLI also hosted a mapathon in Washington in July 2017, bringing together more than 100 people from the U.S. government, the Data Lab, and the private sector to map over 2,000 kilometers of roads and nearly 4,000 buildings in Côte d’Ivoire and Togo, two of MCC’s partner countries. The information is now public and can be used by governments, donors, and investors to make better data-driven decisions when planning for critical services like health clinics, schools, and utilities.
  • MCC is conducting an “Innovation Grant Program” in Zambia in order to encourage local innovation in pro-poor service delivery in the water sector through grants to community-based organizations, civil society, and/or private sector entities.
  • MCC recently launched an evaluation of an innovative infrastructure grant facility in Cabo Verde to understand whether funding opportunities can incentivize utilities to reform and improve the sustainability of infrastructure investments. This evaluation reflects MCC’s commitment to innovating with new approaches and using evidence to test whether those approaches produce results.
  • MCC regularly engages in implementing pilot projects as part of its overall Compact programs. A few examples include: 1) in Morocco, an innovative pay for results (PFR) mechanism to replicate or expand proven programs that provide integrated support including short-term (one to six months) job readiness skills training, technical training, job matching, follow-up to ensure longevity, and other services; 2) a “call-for-ideas” in Benin in 2015 that extended an invitation to interested companies and organizations from around the world to submit information regarding potential projects that would expand access to renewable off-grid electrical power in Benin; and 3) a regulatory strengthening project in Sierra Leone that includes funding for a results-based financing system designed to strengthen the regulator’s role, incentivize performance by the utilities, and enhance accountability.
  • MCC has signed a five-year, $450 million grant with the Kingdom of Morocco, called the Morocco Employability and Land Compact. The Compact focuses on improving land productivity and employability to create new economic opportunities, improve workforce skills, and strengthen the business environment. The Labor Market Impact Evaluation Lab is a key component of the Compact aimed at improving labor market outcomes through rigorous quantitative research. The Lab will finance rigorous impact evaluations, other empirical studies, and policy-research engagements to build the capacity of the Moroccan government to commission and generate such studies. This is the first time MCC has pursued a country-led lab focused on impact evaluations and policy research.
Use of Evidence in 5 Largest Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY17? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)

  • MCC awards all of its agency funds through two competitive grant programs: Compact and Threshold programs, whose FY17 budgets were $671.2 million and $29.9 million, respectively. Both types of grants require demonstrable, objective evidence supporting the likelihood of project success in order for funds to be awarded. For partner country selection, MCC uses twenty indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These indicators (see MCC’s FY2017 Guide to the Indicators) are collected by independent third parties. When considering granting a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact; (2) exhibited improved 2017 Scorecard policy performance during the partnership; and (3) exhibited a continued commitment to further sector reform efforts in any subsequent partnership. As a result, the Board applies an even higher standard when selecting countries for subsequent compacts.
  • Following country selection, MCC conducts a constraints analysis to identify the most binding constraints to private investment and entrepreneurship that hold back economic growth. The results of this analysis enable the country, in partnership with MCC, to select compact or threshold activities most likely to contribute to sustainable, poverty-reducing growth. Due diligence, including feasibility studies where applicable, is conducted for each potential investment. MCC also performs cost-benefit analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR). MCC uses a 10% ERR hurdle to prioritize and fund projects with the greatest opportunity for maximizing impact. MCC then recalculates ERRs at compact closeout, drawing on information from MCC’s monitoring data, among other sources, to test original assumptions and assess the cost-effectiveness of MCC programs. In connection with the ERR, MCC conducts a Beneficiary Analysis, which seeks to describe precisely which segments of society will realize the project benefits. It is most commonly used to assess the impact of projects on the poor, but its broader applicability allows for estimating impacts on populations of particular interest, such as women, the elderly, children, and regional or ethnic sub-populations.
  • In line with MCC’s M&E policy, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets. MCC also requires independent evaluations of every project to assess progress in achieving outputs and outcomes throughout the lifetime of the project and beyond.
  • In February 2017, MCC issued new Compact Development Guidance. This guidance codifies MCC’s commitment to using evidence to inform country and project selection by requiring that each project meet certain investment criteria, such as generating high economic returns, including clear metrics for results, and supporting the long-term sustainability of results.
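An Economic Rate of Return is, in essence, the discount rate at which a project’s estimated benefit stream just offsets its costs (i.e., net present value equals zero). The sketch below illustrates the idea with hypothetical cash flows and a simple bisection solver; it is not MCC’s actual ERR model, whose interactive spreadsheets MCC publishes separately.

```python
# Illustrative ERR sketch: find the discount rate where NPV crosses zero.
# The cash flows are hypothetical, not taken from any MCC project.

def npv(rate, cash_flows):
    """Net present value of cash flows (year 0 first) at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def err(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection solver: NPV is decreasing in the rate for cost-then-benefit flows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: $100M cost up front, $18M in annual benefits for 10 years.
flows = [-100.0] + [18.0] * 10
rate = err(flows)
print(f"ERR = {rate:.1%}")
print("Passes 10% hurdle:", rate >= 0.10)
```

Under these assumed flows the computed rate clears the 10% hurdle described above; in practice the hard part is estimating the benefit stream itself, which is why MCC re-runs ERRs at closeout against monitoring data.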
Use of Evidence in 5 Largest Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest non-competitive grant programs in FY17? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)

  • MCC does not administer non-competitive grant programs.
Repurpose for Results

In FY17, did the agency shift funds away from or within any practice, program, or policy that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; proposing the elimination of ineffective programs through annual budget requests)

  • MCC has an established Policy on Suspension and Termination that lays out the reasons for which MCC may suspend or terminate assistance to partner countries. Assistance may be suspended or terminated, in whole or in part, if a country (1) engages in activities contrary to the national security interests of the United States, (2) engages in a pattern of actions inconsistent with MCC’s eligibility criteria, or (3) fails to adhere to its responsibilities under a compact, threshold grant, or related agreement. MCC has terminated a compact partnership, in part or in full, seven times out of 33 compacts approved to date, and has suspended partner country eligibility (both compact and threshold) four times, most recently with the suspension of Tanzania in March 2016 due to democratic rights concerns. MCC’s Policy on Suspension and Termination also allows MCC to reinstate eligibility when countries demonstrate a clear policy reversal, a remediation of MCC’s concerns, and a demonstrated commitment to MCC’s eligibility indicators. For example, in early 2012, MCC suspended Malawi’s Compact prior to entry into force after determining that the Government of Malawi had engaged in a pattern of actions inconsistent with MCC’s eligibility criteria, specifically around democratic governance. Thereafter, the new Government of Malawi took a number of decisive steps to improve the democratic rights environment and reverse the negative economic policy trends of concern to MCC, which led to a reinstatement of eligibility for assistance in mid-2012.
  • MCC also consistently monitors the progress of Compact programs and their evaluations, using the learning from this evidence to make changes to MCC’s portfolio. For example, MCC recently undertook a review of its portfolio investments in roads to better design, implement, and evaluate road investments. Through evidence collected across 16 compacts with road projects, MCC uncovered seven key lessons, including the need to prioritize and select projects based on a road network analysis, to standardize the content and quality of road data collection across road projects, and to consider cost and the potential for learning when determining how road projects are evaluated. This body of evidence and analysis will be published in September 2017 as MCC’s next Principles into Practice paper. Critically, the lessons from this analysis are already being applied to road projects in compacts under development in Côte d’Ivoire and Nepal.