2017 Federal Index


Leadership

Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY17? (Example: Chief Evaluation Officer)

Score
8
Administration for Children and Families (HHS)
  • ACF’s Deputy Assistant Secretary for Planning, Research, and Evaluation oversees its Office of Planning, Research, and Evaluation (OPRE) and supports evaluation and other learning activities across the agency. ACF’s budget for research and evaluation in FY17 is approximately $165 million. ACF’s evaluation policy gives the OPRE Deputy Assistant Secretary “authority to approve the design of evaluation projects and analysis plans; and…authority to approve, release and disseminate evaluation reports.” OPRE’s staff of 44 includes experts in research and evaluation methods as well as in ACF programs and policies and the populations they serve. OPRE engages in ongoing collaboration with program office staff and leadership to interpret research and evaluation findings and to identify their implications for programmatic and policy decisions. OPRE also provides written summaries of emerging findings and discusses their implications with agency and program leadership.
  • While OPRE oversees most of ACF’s evaluation activity and provides overall coordination, some ACF program offices also sponsor evaluations. ACF’s evaluation policy states, “In order to promote quality, coordination and usefulness in ACF’s evaluation activities, ACF program offices will consult with OPRE in developing evaluation activities. Program offices will discuss evaluation projects with OPRE in early stages to clarify evaluation questions and methodological options for addressing them, and as activities progress OPRE will review designs, plans, and reports. Program offices may also ask OPRE to design and oversee evaluation projects on their behalf or in collaboration with program office staff.”
Score
8
Corporation for National and Community Service
  • The Director of CNCS’s Office of Research and Evaluation (R&E) oversees the development of social science research designed to measure the impact of CNCS programs and shape policy decisions; encourage a culture of performance and accountability in national and community service programs; provide information on volunteering, civic engagement, and volunteer management in nonprofit organizations; and assist in the development and assessment of new initiatives and demonstration projects. The R&E Director, who oversees R&E’s $4 million budget and a staff of 9 in FY17, reports directly to the CNCS Chief of Staff and is a member of CNCS’s Leadership Team and Policy Council. The R&E Director also meets regularly with CNCS Program Directors to identify areas where evidence can be generated and used for various decisions.
  • The R&E Director meets annually with all CNCS program offices to identify priorities and negotiate which pools of funds are needed to support the year’s priorities. The FY17 plan was developed through a series of formal and informal conversations.
Score
8
Millennium Challenge Corporation
  • MCC’s Monitoring and Evaluation (M&E) Division, which falls within the Department of Policy and Evaluation (DPE), has a staff of 23 and an estimated FY17 budget of $21.2 million in due diligence (DD) funds. These resources are used to directly measure high-level outcomes and impacts in order to assess the attributable effects of MCC’s programs and activities. Departments throughout the agency have a total of $71.9 million in DD funds in FY17. The M&E Managing Director and the Vice President for the Department of Policy and Evaluation have the authority to execute M&E’s budget and inform policy decisions affecting independent evaluations. The M&E Managing Director participates in technical reviews of proposed investments as well as in regular monitoring meetings in order to inform policy and investment decisions. The Vice President sits on the Agency’s Investment Management Committee, which examines the evidence base for each investment before it is approved by the MCC Board and conducts regular oversight of the compact (i.e., grant program) development process. MCC also recently appointed a new Chief Economist in DPE to oversee and strengthen the economic evidence base used for program development, including economic growth diagnostics, beneficiary analyses, and cost-benefit analyses.
Score
7
Substance Abuse and Mental Health Services Administration
  • The director of the Division of Evaluation, Analysis and Quality (DEAQ) within SAMHSA’s Center for Behavioral Health Statistics and Quality (CBHSQ) serves as the agency’s evaluation lead, with key evaluation staff housed in this division. In addition, the agency’s chief medical officer (CMO), as described in the 21st Century Cures Act, plays a key role in addressing evaluation approaches and the utilization of evidence-based programs and practices among grantees; at this time, a collaborative approach between CBHSQ and the Office of the CMO is being established to ensure broad agency evaluation oversight by senior staff. The Office of the CMO is housed within the agency’s emerging Mental Health Policy Lab (currently the Office of Policy, Planning and Innovation) and will influence evaluation policy decisions across the agency in a more systematic manner as the new Policy Lab is stood up in January 2018.
  • SAMHSA’s Office of Policy, Planning and Innovation (OPPI) provides policy perspectives and guidance to raise awareness around SAMHSA’s research and behavioral health agenda. OPPI also facilitates the adoption of data-driven practices among other federal agencies and partners such as the National Institutes of Health, the Centers for Disease Control and Prevention, and the Centers for Medicare & Medicaid Services.
  • At this time, evaluation authority, staff, and resources are decentralized and found throughout the agency. SAMHSA is composed of four Centers: the Center for Mental Health Services (CMHS), the Center for Substance Abuse Treatment (CSAT), the Center for Substance Abuse Prevention (CSAP), and the Center for Behavioral Health Statistics and Quality (CBHSQ). CMHS, CSAT, and CSAP oversee grantee portfolios and evaluations of those portfolios. Evaluation decisions within SAMHSA are made within each Center according to its program priorities and resources. Each of the three program Centers uses its program funds to conduct evaluations of varying types. CBHSQ, SAMHSA’s research arm, provides varying levels of oversight and guidance to the Centers for evaluation activities. CBHSQ also provides technical assistance related to data collection and analysis to assist in the development of evaluation tools and clearance packages. Within CBHSQ’s DEAQ, the Quality, Evaluation, Performance Branch (QEPB) builds internal capacity for “developing more rigorous evaluations conducted” internally and externally to assess the “impact of its behavioral health programs… and treatment measures,” and the Analysis and Services Research Branch (ASRB) focuses on effective delivery and financing of health care and services.
  • SAMHSA evaluations are funded from program funds that are used for service grants, technical assistance, and evaluation activities. Evaluations have also been funded from recycled funds from grants or other contract activities. Given the broad landscape of evaluation authority and funding, a variety of evaluation models have been implemented. These include recent evaluations funded and managed by the program Centers (e.g., First Episode Psychosis, FEP); evaluations funded by the Centers but directed outside of SAMHSA (e.g., Assisted Outpatient Treatment, AOT); and those that CBHSQ directly funds and executes (e.g., Primary and Behavioral Health Care Integration, PBHCI, and the Opioid State Targeted Response program funded under the 21st Century Cures Act). Evaluations require different degrees of independence to ensure objectivity, and the models above afford SAMHSA the latitude to enhance evaluation rigor and independence on a customized basis.
  • In 2016, CBHSQ conducted a summer review of evaluation activities with the program Centers and presented its findings to the SAMHSA Executive Leadership Team (ELT). As a result, SAMHSA revised and finalized a new Evaluation Policy and Procedure (P&P), grounded in an earlier evaluation P&P, and is currently developing a Learning Agenda to prioritize activities that address gaps in data collection, data analysis, and the identification of evidence-based practices in high-profile areas (e.g., SMI, SED, opioids, marijuana, suicide, and health financing, among others). The new Evaluation P&P requires Centers to identify research questions and appropriately match the type of evaluation to the maturity of the program. A new workgroup, the Cross-Center Evaluation Review Board (CCERB), composed of Center evaluation experts, will now review significant evaluations at critical milestones in the planning and implementation process, providing specific recommendations to the Center Director with the lead for the evaluation. The CCERB works with SAMHSA’s four Centers (CSAP, CMHS, CSAT, and CBHSQ) to advise, conduct, collaborate, and coordinate on all evaluation and data collection activities that occur within SAMHSA. CCERB staff provide support for program-specific and administration-wide evaluations. SAMHSA’s CMO will also play a key role in reviewing evaluation proposals and clearing final reports.
Score
8
U.S. Agency for International Development
  • USAID’s Office of Learning, Evaluation and Research (LER) in the Bureau for Policy, Planning, and Learning (PPL) provides guidance, tools and technical assistance to USAID staff and partners to support monitoring, evaluation and learning practices, some of which can be found online. The LER Director oversaw approximately 20 staff and an estimated $17 million budget in 2017.
  • LER, bureaus, and independent offices hold several contracts that USAID missions and offices can use for building staff capacity in monitoring, evaluation and learning, and for commissioning evaluations and monitoring services directly. For example, LER manages the Monitoring and Evaluation Services Indefinite Delivery Indefinite Quantity (EVAL-ME IDIQ) contract, which allows USAID missions and Washington Offices, using their own funds, to competitively bid statements of work among 14 pre-approved companies that have been selected for their monitoring and evaluation capabilities, shortening and simplifying the process for contracting an independent evaluation team. LER also manages a classroom training program in monitoring and evaluation for USAID staff.
  • The LER Director participates in the USAID Administrator’s Leadership Council (ALC), a senior-level biweekly meeting chaired by the USAID Administrator and attended by Assistant Administrators and select Agency Senior Staff, when the agenda includes issues related to evaluation. The LER Director also informs policy decisions across the agency by providing input into working groups and reviewing statements, draft memos, and other policy products.
  • One of LER’s primary objectives is to strengthen USAID’s capacity in the fields of program monitoring, evaluation, and learning. For example, through a contract commissioned by LER to expand monitoring and evaluation capacity at USAID, individual USAID Offices and Missions can access experts for short-term assistance or longer-term fellows with expertise in monitoring, evaluation, learning, and project design. Fellows work with a specific mission or office for six months to two years. Another contract managed by LER provides organizational learning support, including helping USAID offices develop learning agendas and use monitoring and evaluation for learning and program adaptation. To build staff capacity in designing or commissioning impact evaluations funded by missions or offices, LER has hosted clinics on impact evaluation that provide USAID staff with tools, resources, and hands-on support to design an impact evaluation for a future program activity. In addition to providing general capacity-building services in the form of training, clinics, technical assistance, and fellowships, LER staff occasionally manage evaluations directly or participate on evaluation teams for evaluations funded by LER or by other parts of the Agency. LER also coordinates several cross-agency working groups organized to support learning champions and monitoring and evaluation specialists throughout the Agency.
Score
8
U.S. Department of Education
  • ED’s Institute of Education Sciences (IES), with a budget of $605.3 million in FY17, has primary responsibility for education research, evaluation, and statistics. The IES Director is appointed by the President and confirmed by the U.S. Senate, and advises the U.S. Education Secretary on research, evaluation, and statistics activities. Four Commissioners support the IES Director, including the Commissioner for the National Center for Education Evaluation and Regional Assistance (NCEE), who is responsible for planning and overseeing ED’s major evaluations. IES employed approximately 180 full-time staff in FY17, including approximately 25 staff in NCEE.
  • The Office of Planning, Evaluation, and Policy Development’s (OPEPD) Policy and Program Studies Service (PPSS) has a staff of 20 and serves as the Department’s internal analytics office. PPSS conducts short-term evaluations to support continuous improvement of program implementation and works closely with program offices and senior leadership to inform policy decisions with evidence. While some evaluation funding – such as that for Special Education Studies and Evaluations – is appropriated to IES ($10.8 million in FY17), most evaluations are supported by funds appropriated to ED programs. NCEE and PPSS staff work closely with program offices to design program evaluations that reflect program priorities and questions. IES and PPSS provide regular briefings on results to help ensure information can be used by program offices for program improvement.
  • IES and PPSS staff collaborate closely through ED’s Evidence Planning Group (EPG) with other senior staff from OPEPD, including Budget Service, and from the Office of Innovation and Improvement (OII). EPG supports programs and advises Department leadership on how evidence can be used to improve Department programs. EPG has coordinated, for example, the development of revised evidence definitions and related selection criteria for competitive grant programs that align with the Elementary and Secondary Education Act, as amended by the Every Student Succeeds Act (P.L. 114-95) (ESSA). EPG has also facilitated cross-office alignment of evidence investments in technical assistance and the pooling of program funds for evaluations.
  • Senior officials from IES, OPEPD, and OII are part of ED’s leadership structure. Officials from OPEPD and OII weigh in on major policy decisions. OPEPD leadership plays a leading role in forming the Department’s annual budget requests, recommending grant competition priorities (including evidence), and providing technical assistance to Congress to ensure that evidence informs policy design.
Score
8
U.S. Dept. of Housing & Urban Development
  • HUD’s Office of Policy Development & Research (PD&R) informs HUD’s policy development and implementation by conducting, supporting, and sharing research, surveys, demonstrations, program evaluations, and best practices. PD&R achieves this mission through three interrelated core functions: (1) collecting and analyzing national housing market data (including with the Census Bureau); (2) conducting research, program evaluations, and demonstrations; and (3) providing policy advice and analytic support to the HUD Secretary and program offices. PD&R is led by an Assistant Secretary who oversees six offices, about 149 staff (including a team of field economists who work in HUD’s 10 regional offices across the country), and a budget of $113 million in FY17. The Assistant Secretary ensures that evidence informs policy development through frequent personal engagement with other principal staff, the Secretary, and external policy officials; HUDstat performance review meetings (see Question #4 below for a description); speeches to policy audiences; sponsorship of public research briefings; and policy implications memoranda. The Assistant Secretary also regularly engages with each HUD program office to ensure that metrics, evaluations, and evidence inform program design, budgeting, and implementation.
  • Periodic PD&R meetings with program offices enable PD&R to share knowledge about evaluation progress and enable program offices to share emerging needs for research, evaluation, and demonstrations that advance program policy.
Score
10
U.S. Department of Labor
  • DOL’s Chief Evaluation Officer is a senior official with responsibility for all activities of the Chief Evaluation Office (CEO) and for coordination of evaluations Department-wide. In 2016, DOL’s Chief Evaluation Officer was converted to a career position, a change that more fully cements the principle of independence and reflects the Department’s commitment to institutionalizing an evidence-based culture at DOL. Evaluation results and products are approved and released by the Chief Evaluation Officer (per the CEO Evaluation Policy) and disseminated in various formats appropriate to practitioners, policymakers, and evaluators.
  • The CEO includes 15 full-time staff plus a small number of contractors and 1-2 detailees at any given time. This staff level is augmented by staff from research and evaluation units in other DOL agencies; for example, the Employment and Training Administration has 9 FTEs dedicated to research and evaluation activities with whom CEO coordinates extensively. CEO staff have expertise in research and evaluation methods as well as in DOL programs and policies and the populations they serve. CEO also convenes technical working groups, whose members have deep technical and subject-matter expertise, on the majority of evaluation projects. Further, CEO staff engage and collaborate with program office staff and leadership to interpret research and evaluation findings and to identify their implications for programmatic and policy decisions.
  • In FY17, the CEO will directly oversee an estimated $40 million in evaluation funding (this includes the direct appropriation, the set-aside amount, and other funds that come from programmatic accounts where evaluations are co-funded). The $40 million includes the appropriated budget for the Departmental Program Evaluation (over $8 million in FY17) and the Department’s evaluation set-aside funds (of up to 0.75% of select department accounts), which will be approximately $24 million in FY17. CEO also collaborates with DOL and other federal agencies on additional evaluations being carried out by other offices and/or supported by funds appropriated to DOL programs, such as Employment and Training Administration (ETA) pilots, demonstrations, and research, and evaluations of large grant programs including the Performance Partnership Pilots (P3), the American Apprenticeship Initiative (AAI), the Trade Adjustment Assistance Community College and Career Training (TAACCCT) Grant Program, and Reentry Programs for Ex-Offenders.
  • The CEO also participates actively in the performance review process, during which each operating agency meets with Department leadership to review progress on performance goals established for the year, as required under the Government Performance and Results Act (GPRA).
  • The CEO’s role is to incorporate evidence and evaluation findings as appropriate, to identify knowledge gaps that might be filled by evaluations, and to convey evidence that can inform policy and program decisions or performance. DOL’s Chief Evaluation Officer and senior staff are part of DOL’s leadership structure; they weigh in on major program and policy decisions and play a role in forming DOL agencies’ annual budget requests, recommending the inclusion of evidence in grant competitions, and providing technical assistance to Department leadership to ensure that evidence informs policy design. A number of mechanisms facilitate this: CEO participates in quarterly performance meetings with DOL leadership and the Performance Management Center (PMC); CEO reviews agency operating plans and works with agencies and the PMC to coordinate performance targets, measures, and evaluation findings; quarterly meetings are held with agency leadership and staff as part of the Learning Agenda process; and meetings are held as needed to strategize around new priorities or legislative requirements.