
This chapter provides an overview of managing evaluations, specifically planning, undertaking, following up on and using evaluation, as well as the benefits of learning and accountability derived from evaluation. It outlines the responsibilities and steps required to commission and manage an evaluation, how to differentiate between types of evaluation, how to use evaluation criteria and how to identify and promote learning approaches. This chapter does not cover how to conduct an evaluation as an evaluator. IOM staff interested in developing their evaluation skills further in order to join the roster of internal IOM evaluators can participate in the IOM Internal Evaluator training, which covers this topic.

Evaluation overview

Evaluation is defined as the systematic and objective assessment of an ongoing or completed intervention, including a project, programme, strategy or policy, its design, implementation and results. Evaluation serves both accountability and learning: it informs stakeholders of the extent to which resources have been used efficiently and effectively to achieve results, and it provides empirical knowledge about which elements of an intervention worked or did not work, and why.1 Evaluation can be used to improve IOM’s work through evidence-based decision-making, as a promotion tool for IOM activities and as a tool for fundraising and visibility.

By contributing to knowledge and providing information on the performance and achievement of activities, evaluations enable informed decision-making for policymakers, programme managers and other key stakeholders. Since 2011, IOM has made it mandatory to consider the inclusion of evaluations in its project proposals.2

The accountability dimension is usually directed towards donors and other stakeholders, including beneficiaries, by demonstrating whether work has been carried out as agreed, whether intended results have been achieved and whether this was done in compliance with established standards.3 To gain the full benefit of learning and to ensure that the Organization continues to build on its recognized strengths of flexibility, reliability and creativity, a strong evaluation culture is required and encouraged.

  • 1 When accountability and learning are discussed, the acronym MEAL (monitoring, evaluation, accountability and learning) is often used instead of the concept of M&E only. However, it is important to note that evaluation itself includes accountability and learning.
  • 2 For further information, see IOM, 2018a.
  • 3 For the purpose of the IOM Monitoring and Evaluation Guidelines, IOM uses the OECD/DAC definition of beneficiary/ies, or people that the Organization seeks to assist, as “the individuals, groups, or organisations, whether targeted or not, that benefit directly or indirectly, from the development intervention. Other terms, such as rights holders or affected people, may also be used.” See OECD, 2019, p. 7. The term beneficiary/ies, or people that IOM seeks to assist, is used intermittently throughout the IOM Monitoring and Evaluation Guidelines and refers to the definition given above, including when discussing humanitarian contexts.
INFORMATION

 

In addition to accountability, learning, decision-making and promotion, other possible purposes for evaluation can include steering, fundraising and visibility.


To foster an evaluation culture at IOM, it is important to consider the multiple aspects that shape how evaluation is perceived within the Organization. This includes building the evaluation culture itself, which can be done by clarifying what evaluation is, encouraging the planning, management and conduct of evaluations, and paying close attention to the utilization of evaluation.

INFORMATION

IOM’s evaluation efforts are largely decentralized as specified in the IOM Evaluation Policy.4

IOM proposes the following definition for decentralized evaluation: “Decentralized evaluations are evaluations commissioned and managed outside the IOM Central Evaluation Office  by Headquarters Departments, Regional Offices and Country Offices – focusing on activities, themes, operational areas, policies, strategies and projects falling under their respective areas of work.”5

As per its mandate, the Central Evaluation Unit (DPP/Evaluation) is responsible for providing guidance on the implementation of decentralized evaluation approaches.6 Some features of decentralized evaluation at IOM are as follows:

 

  • Decentralized evaluations are conducted by independent internal or external evaluators, and managed by IOM country offices, regional offices and Headquarters departments, which fund them through their projects and activities.
  • Decentralized evaluations more often relate to projects and programmes, or operational areas at the global, regional and country levels, and can also focus on thematic areas and strategies of national or regional importance.7
Roles in evaluation

There are four important roles to distinguish within the evaluation process: (a) the evaluation commissioner; (b) the evaluation manager; (c) the evaluator; and (d) the evaluation user.

 


 

The evaluation commissioner is the party or stakeholder who decides that an evaluation should take place. This could be the IOM programme manager, relevant IOM chief of mission (CoM), a thematic specialist or unit(s) from Headquarters and/or from a regional/country office, the donor or any combination of these stakeholders.


 

The evaluation manager is the person who is in charge of managing the evaluation. It is possible that the evaluation manager is from the same entity or office that commissioned the evaluation. Most often in IOM, the evaluation manager is the programme or project manager.

It is important to note that, at times, several stakeholders may be part of an evaluation management committee, overseeing the evaluation process together.


 

The evaluator is the person charged with conducting the evaluation. Evaluators can be external consultants or IOM staff, and may be recruited by IOM, donors, partner organizations or governments.

 

The evaluation users are key players in guaranteeing the full utilization and benefits of an evaluation. They can be direct users, who are, for instance, directly concerned with the implementation of recommendations and with accountability, as well as indirect users, who may be more interested in the learning dimension of the evaluation.
In addition to these roles, there are other stakeholder engagement and reference groups that play an important part, for instance in terms of quality assurance. For further information on reference groups, see the Information box below.

 

The United Nations Evaluation Group’s (UNEG) Norms and Standards for Evaluation (2016) further elaborates on stakeholder engagement and reference groups.8  Specifically, the document states that “inclusive and diverse stakeholder engagement in the planning, design, conduct and follow-up of evaluations is critical to ensure ownership, relevance, credibility and the use of evaluation. Reference groups and other stakeholder engagement mechanisms should be designed for such purpose.”9

Stakeholder engagement and reference groups are recommended for complex, multi-country and multiprogramme evaluations involving a wide range of stakeholders. In such cases, these groups may be particularly useful and can ensure a more participatory approach throughout the evaluation.

 

INFORMATION

The UNEG Norms and Standards for Evaluation define the various groups as follows:10

Reference groups: Reference groups are composed of core groups of stakeholders of the evaluation subject who can provide different perspectives and knowledge on the subject. The reference groups should be consulted on the following: (a) evaluation design to enhance its relevance; (b) preliminary findings to enhance their validity; (c) recommendations to enhance their feasibility, acceptability and ownership; and (d) at any point during the evaluation process when needed. The use of reference groups enhances the relevance, quality and credibility of evaluation processes.

Learning groups: Learning groups could be established with stakeholders to focus on the use of evaluation. Learning groups generally have a smaller role in quality enhancement or validation of findings than reference groups.

Steering groups: When appropriate, some key stakeholders could be given a stronger role as members of the steering group to ensure better ownership. Steering groups not only advise, but also provide guidance to evaluations.

Advisory groups: Advisory groups are composed of experts on evaluation or the subject matter. Because group members generally do not have a direct stake in the subject matter to be evaluated, they can provide objective advice to evaluations. Using these groups can enhance the relevance, quality and credibility of evaluation processes through guidance, advice, validation of findings and use of the knowledge.

  • 10Ibid., pp. 24–25.
    Evaluation stages

    Module 6 of the IOM Project Handbook, published in 2017, outlined three phases for the evaluation process: (a) planning evaluations; (b) managing evaluations; and (c) using evaluations.11 The IOM Monitoring and Evaluation Guidelines, however, proposes a three-stage process: (a) planning for evaluation; (b) undertaking evaluation; and (c) follow-up and use of evaluation.12


     

    RESOURCES

    IOM resources

        2015 Resolution No. 1309 on IOM–UN relations, adopted on 24 November 2015 (C/106/RES/1309).

        2017 Module 6. In: IOM Project Handbook. Second edition. Geneva (Internal link only).

        2018a IOM Evaluation Policy. Central Evaluation Unit, September.

    External resources

    Organisation for Economic Co-operation and Development (OECD)

        2019 Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use. OECD/Development Assistance Committee (DAC) Network on Development Evaluation.

    United Nations Evaluation Group (UNEG)

        2016 Norms and Standards for Evaluation. New York.

    World Bank

        2015 Managing Evaluations: A How-to Guide for Managers and Commissioners of Evaluations. Independent Evaluation Group, The World Bank Group, Washington, D.C.

      Planning for evaluation


       

      IOM strongly recommends conducting evaluations, and effective use of evaluation starts with sound planning. The IOM Project Handbook (IN/250) requires that all proposals consider the inclusion of an evaluation within the project; hence, the first step of planning happens during project development.13 Project developers provide a brief description of the evaluation, including its purpose, timing, intended use and methodology. The cost of the evaluation must also be included in the budget at the planning stage.14

      If no evaluation of the project is foreseen at the project development stage, an appropriate justification must be provided. Reasons for this may include the following:

          (a) The expected donor has indicated, prior to the submission of the proposal, that it will not fund an evaluation;
          (b) The donor plans to conduct its own evaluation, outside of the IOM implementation cycle;
          (c) Other evaluative approaches have been agreed upon with the donor, such as project performance reviews (PPR) or after-action reviews (AAR).

      While other possible exceptions may exist, note that the following are not considered valid or sufficient justifications for excluding evaluation from project design: “The project is doing alright without an evaluation”; “The project will examine the validity of an evaluation later”; “The project can spend that money in a better way”; or “The donor does not want an evaluation”, when no further negotiation with the donor has taken place. Such justifications also reflect a weak evaluation culture and a failure to understand and duly promote the benefits of evaluation.

      If a full-fledged evaluation is not possible due to funding and/or resource constraints or the short duration of implementation, there may still be possibilities to conduct other evaluative approaches.15 For instance, an internal review or other evaluative assessments, such as lessons learned workshops, AARs or PPRs, can be done. These evaluative approaches are explained later in the chapter. However, they are not as extensive as an evaluation and do not replace it; rather, they should be viewed as complementary to evaluation, even when an evaluation is planned.

      In contrast to other evaluative approaches, the benefit of conducting an evaluation lies in its more robust and rigorous methodology. Evaluation allows for a detailed analysis through a predefined and logical framework and the participation of a wider range of stakeholders, and it supports a strong evidence-based approach to documenting the overall performance of, and change brought about by, an intervention, measured against a widely accepted and tested set of evaluation criteria.

      INFORMATION

      In the Project Information and Management Application (PRIMA), as before, project developers are expected to provide minimum information on planned evaluations within the Evaluation Module when creating a project proposal in the platform.16 The Evaluation Module populates the Evaluation section of the IOM Proposal Template. The information requested while completing this module includes whether or not an evaluation is planned, including a justification if no evaluation is planned; the purpose of the evaluation (intended use and users); the type (by timing and by who conducts the evaluation); suggested criteria to be addressed by the evaluation; and the proposed methodology. Furthermore, project developers will also be required to provide a budget for any planned evaluations when building a budget in PRIMA.

      For more information regarding planning for evaluation during project development in PRIMA, see the Create Proposal (IOM Template) section of the internal IOM PRIMA User Guide.

      • 16PRIMA for All is an institutional project information management solution. It is available internally to IOM staff via the IOM intranet. For more on PRIMA, see chapter 3 of the IOM Monitoring and Evaluation Guidelines.

      During implementation, planning for the evaluation typically occurs a few months before the evaluation takes place and involves three main components: (a) defining the purpose and evaluability of the evaluation; (b) preparing the evaluation terms of reference (ToR); and (c) selecting the evaluators.

      Define the purpose and evaluability of evaluation


       

      The first step in planning for an evaluation is defining the purpose and evaluability of the evaluation. The evaluation purpose describes the overall reason why the evaluation is being conducted and its expected results. Agencies and organizations may use different terminology, and IOM is open to accepting such terminology when preparing the evaluation ToR.

      INFORMATION

      Agencies, organizations and resource materials also refer to evaluation objectives and, respectively, to specific objectives. The definition of an evaluation objective is similar to that of the evaluation purpose, namely the overall reason the evaluation is being conducted, while evaluation-specific objectives typically make reference to the criteria being addressed or to the scope of the evaluation.17

      Some guiding questions that can be used to frame the purpose of an evaluation are as follows:

      Guiding questions to define evaluation purpose
      • Who are the intended users of the evaluation?
      • What does the evaluation strive to assess (the intervention, specific thematic components, a strategy, collaboration)?
      • What are the priority evaluation aspects to analyse, considering that not necessarily all evaluation criteria need to be covered (such as relevance, performance and implementation processes, impact, coherence and sustainability)?
      • What is the expected result (such as to draw any specific recommendations, identify challenges and lessons learned, gather good practices and inform next phases of implementation)?

       

      TIP

      Identify and engage relevant stakeholders early in the planning process through a participatory approach. This can provide opportunities to clarify key aspects of the evaluation and help reach an agreement on key evaluation questions and scope.

      Assessing the evaluability – in other words, the feasibility – of an evaluation is an essential part of the evaluation planning process, increasing the likelihood that the evaluation will produce credible information in a timely manner, including by limiting its scope where necessary.18 It encourages evaluation managers to set realistic expectations of an evaluation on the basis of the contextual realities on the ground, including financial realities and timing, as well as on the monitoring and data collection mechanisms already in place.

      It is important to review relevant project documents or strategies to identify what has already been agreed upon with the donor or at the institutional and governmental levels. As some time may have passed since the start of the planning for the intervention to be evaluated, it is also important to review the choices made so far in the intervention to ensure that earlier decisions taken still hold when the evaluation takes place. The programme manager may need to discuss any planned changes with the donor.

      The process of planning an evaluation involves trade-off decisions, as the evaluation manager will have to weigh the cost and feasibility of various evaluation designs, as well as the benefits of the evaluation (operational, institutional and strategic).

      To define the purpose and assess the evaluability of an evaluation, managers must be aware of the common types of evaluations, methodologies, and evaluation criteria. Understanding these concepts and technical requirements and specificities can also help evaluation managers to manage their evaluations more effectively.

       

      Types of evaluation

      Evaluation types can be defined according to the following elements, and evaluations can be a combination of the different categories:

      Figure 5.1.

       

      Evaluation type according to timing

      Figure 5.2.

       

      One distinction is made on the basis of the timing of the evaluation exercise; in other words, when in the intervention life cycle the evaluation is conducted.

      Figure 5.3.

       

       

      Ex-ante evaluation

      An ex-ante evaluation is performed before the implementation of an intervention to assess the validity of the design, target populations and objectives. An ex-ante evaluation includes criteria and analysis that are not covered by needs assessments, appraisals or feasibility studies.

       

      Real-time evaluation

      Real-time evaluations are mostly used in emergencies, at the early stages of implementation, to provide instant feedback to intervention managers about an ongoing operation.19

       

      Midterm evaluation

      A midterm evaluation is carried out during an intervention’s implementation and for the purpose of improving its performance or, in some cases, to amend its objective, if it has become unrealistic due to unexpected factors or implementation challenges.

       

      Final evaluation

      A final, or terminal, evaluation is undertaken at the end, or close to the end, of an intervention to examine the overall performance and achievement of results, also for the benefit of stakeholders not directly involved in the management and implementation of the intervention (such as donors and governmental entities).

       

      Ex-post evaluation

      The ex-post evaluation is conducted some months after the end of an intervention to assess the immediate and medium-term outcomes and the sustainability of results. It examines the extent to which the intervention has contributed to direct or indirect changes; however, it is not as robust as an impact evaluation.

       

      Evaluation types according to purpose

      Figure 5.4.

       

      Evaluations defined by their purpose can be formative or summative. A formative evaluation is conducted during implementation for the purpose of improving performance. It is intended to assist managers in adjusting and improving project, programme and strategy implementation on the basis of findings, as well as stakeholders’ suggestions and needs. A summative evaluation is conducted at the end of an intervention’s time frame, also for the benefit of stakeholders not directly involved in the management of the implementation, such as donors. It provides insights about the effectiveness of the intervention and gives them the opportunity to use best practices identified during the evaluation. A summative evaluation can inform higher-level decision-making, for instance to scale up an intervention, consolidate it or continue funding follow-up phases.

       

      Based on the purpose of the evaluation

      Formative evaluation
      • Conducted during implementation
      • Intended for managers and direct actors
      • Redresses and improves the project or programme

      Summative evaluation
      • Conducted at the end of a project or programme
      • Intended for those not directly involved in management
      • Provides insights about the effectiveness of the project
      • Gives the opportunity to use best practices identified during the evaluation
      • Informs higher-level decision-making for follow-up actions

       

      Evaluation types according to who conducts it

      Figure 5.5.

       

      A third distinction is made according to the person(s) who conduct(s) the evaluation exercise. There are three types of evaluation based on who conducts the evaluation: (a) internal; (b) external; and (c) mixed.

       

      Internal evaluation

      • An internal evaluation is conducted by an IOM unit, an individual staff member or a team composed of IOM staff.
      • An independent internal evaluation is conducted by someone who did not directly participate in the conceptualization, development and/or implementation of the intervention to be evaluated. Within IOM, independent internal evaluations are conducted by the Central Evaluation Unit, regional M&E officers and trained staff on the IOM Internal Evaluation roster. Evaluations of interventions conducted by staff members from the implementing office are also considered independent internal evaluations, as long as the evaluators were not involved in their development and implementation.
      • A self-evaluation is an internal evaluation done by those who are or were entrusted with the development and/or delivery of the project or programme.20

      External evaluation

      • An external evaluation is conducted by someone recruited externally, mainly by the implementing organization and/or the donor.
      • These are often considered independent evaluations, although some organizations express reservations given the interference of management in the recruitment process.21

      Mixed evaluation

      • Mixed evaluations include both internal and external evaluators who conduct the evaluation together. Each evaluator may have her/his own specific role within the team.
      • 18UNEG, 2016.
      • 19Cosgrave et al., 2009.
      • 20 Some define self-evaluations as all evaluations conducted in an organization, including those conducted by external consultants, that do not fall under the responsibility and management of independent central evaluation offices funded by independent mechanisms and budgets.
      • 21Ibid.
      INFORMATION - Joint evaluation

      Joint evaluations are conducted by a group of agencies, possibly with the participation of donors. There are “various degrees of ‘jointness’ depending on the extent to which individual partners cooperate in the evaluation process, merge their evaluation resources and combine their evaluation reporting.”22 An agency can take the lead in conducting the joint evaluation or act simply as a participant in the joint exercise. A group of agencies can also lead the process, and the various roles and responsibilities can be defined during the planning stage.

       

      🢂 While joint evaluations are very useful and encouraged, the organization of a joint evaluation is more demanding than that of a single external or internal evaluation, due to the coordination required between participating parties for the planning, the establishment of the ToR and the financing of the exercise.

      The cost and logistical implications of each type of evaluation will also vary based on who will conduct it. If an external evaluator (or an evaluation firm) or evaluators are contracted to conduct the evaluation, they will charge fees for this service. The fees for evaluators will vary depending on their experience, qualifications and location (locally recruited evaluators with the same level of experience may often be less expensive than internationally recruited evaluators), and evaluators may charge different fees depending on the complexity and difficulty of the assignment. Additional fees may also be charged for travel to insecure locations. The amount to budget for evaluator fees also depends on whether the evaluation is to be conducted by a single evaluator or an evaluation team.

      For further information on the decision to select a single evaluator or a team of evaluators, see the subsection Select evaluator(s) of this chapter.

      INFORMATION - Considering the cost of an external evaluation when developing projects

      When in the development phase of the project cycle, project developers should consult with procurement and human resource officers to estimate the standard market rates for each projected evaluation team member and, if necessary, seek advice from the Central Evaluation Unit.

      Project developers will also need to estimate the duration of the evaluation based on its objective and scope to anticipate the potential cost of the evaluation. The evaluator fees are often calculated using a daily rate, and project developers should estimate how many days are required for each of the following:23

      • Initial document and literature review;
      • Travel (if relevant);
      • Preparation of the inception report;
      • Data collection and analysis;
      • Presentation of initial findings;
      • Preparation of the draft report;
      • Revisions and finalization of the evaluation report.
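      For illustration only (the figures below are hypothetical and should be replaced with actual market rates obtained through procurement and human resources): an external evaluator engaged for 30 working days at a daily rate of USD 400 would represent USD 12,000 in fees; adding, for example, USD 2,500 for travel and daily subsistence allowance and USD 1,500 for data-collection support would bring the estimated evaluation budget to approximately USD 16,000.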
      TIP

      In IOM, the number of days of work required to conduct an evaluation usually ranges from 20 to 40, spread over a period of one to three months.

      If an IOM staff member from a different IOM office that is not involved in the project conducts an internal evaluation, the cost of the time spent in-country to conduct the evaluation needs to be considered as temporary duty (TDY) travel. For an internal self-evaluation, or an evaluation conducted by a staff member from the implementing office, there are normally no fees involved, except for those relating to data collection and analysis (for example, in the case of surveys with enumerators or for field visits).

      INFORMATION

      Rosters of internal and external evaluators are managed by the Central Evaluation Unit and the regional M&E officers, who can assist offices in identifying internal and external evaluators as required. Internal evaluators have usually been trained through the internal evaluator training managed by OIG and the regional M&E officers.

      🢂 For further information on budgeting for evaluation within an IOM intervention, see  Annex 5.1. Budgeting for evaluation. 

       

      Evaluation types according to technical specificities and scope

      Figure 5.6.

       

      The fourth group of evaluation types is defined according to technical specificities and scope. This group is the most diversified; the most common types of evaluation are presented here, with additional references provided in the Resources box and within the annexes. The scope of an evaluation clarifies what will be covered and, consequently, what type of evaluation may be conducted.

      IOM usually conducts programme and project evaluations, which examine, respectively, a set of activities brought together to attain specific global, regional, country or sector assistance objectives, and an individual activity designed to achieve specific objectives within a given budget and time period. IOM may also conduct evaluations of a strategy or policy, which may use approaches similar to those used for programme or project evaluations. In addition, IOM conducts thematic evaluations that examine selected aspects or cross-cutting issues in different types of assistance (such as poverty, environment and gender).

      The following evaluation types are relatively common within the IOM context, as well as in international cooperation activities, and deserve to be mentioned. A process evaluation examines the internal dynamics of implementing organizations, their policy instruments, their service delivery mechanisms, their management practices and the linkages among these. A country-programme or country-assistance evaluation is more common in United Nations agencies and bilateral assistance that use country programming approaches, and is defined as an evaluation of one or more donors’ or agencies’ portfolio of development interventions in a given country.

      Furthermore, IOM also conducts meta-evaluations, which aim to judge the quality, merit, worth and significance of an evaluation or several evaluations.24 Synthesis evaluations are also encouraged as they provide the opportunity to identify patterns and define commonalities.25

      Evaluations may also be defined by their technical specificity and the approach that will be used during the evaluation. A participatory evaluation, for instance, may be defined as an evaluation method in which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation. The collaborative effort deserves to be underlined, but it also brings organizational constraints that render the exercise relatively complex. A distinction should also be made between participatory evaluation and participatory techniques. The latter consist, for instance, of focus group discussions or preparatory meetings and can be included as an evaluation approach irrespective of the other types of evaluation selected.

      • 24 A meta-evaluation is an instrument used to aggregate findings from a series of evaluations. It also involves an evaluation of the quality of this series of evaluations and its adherence to established good practice in evaluation. See Ministry of Foreign Affairs (Denmark), 2004.
      • 25A synthesis evaluation is “a systematic procedure for organizing findings from several disparate evaluation studies, which enables evaluators to gather results from different evaluation reports and to ask questions about the group of reports”. See General Accounting Office, 1992 (name was changed to Government Accountability Office in 2004), The Evaluation Synthesis.
      RESOURCES

      IOM resources

          2017a Module 6. In: IOM Project Handbook. Second edition. Geneva (Internal link only).

          2018a IOM Evaluation Policy. OIG, September.

       

      External resources

      Aubel, J.

          1999   Participatory Program Evaluation Manual: Involving Program Stakeholders in the Evaluation Process. Child Survival Technical Support Project and Catholic Relief Services, Maryland.

      Cosgrave, J., B. Ramalingam and T. Beck

          2009   Real-time Evaluations of Humanitarian Action – An ALNAP Guide. Active Learning Network for Accountability and Performance (ALNAP).

      Ministry of Foreign Affairs (Denmark), Danida

          2004   Meta-Evaluation: Private and Business Sector Development Interventions. Copenhagen.

      Organisation for Economic Co-operation and Development (OECD)

          2010   Glossary of Key Terms in Evaluation and Results Based Management. OECD/DAC, Paris.

      United Nations Evaluation Group (UNEG)

          2016a  Norms and Standards for Evaluation. New York.

      United States General Accounting Office (GAO)

          1992   The Evaluation Synthesis. GAO/PEMD 10.1.2. Revised March 1992.

      An impact evaluation attempts to determine the entire range of long-term changes deriving from an intervention, including the positive and negative, primary and secondary changes produced by the intervention, whether directly or indirectly, intended or unintended.

      INFORMATION - Key considerations regarding impact evaluations

      As noted above, an impact evaluation specifically attempts to determine the entire range of effects deriving from an intervention, including the positive and negative, primary and secondary, long-term effects and changes produced by the project, directly or indirectly, intended or unintended.

      Such evaluations also attempt to establish the amount of identified change that is attributable to the intervention. Impact evaluations are often conducted sometime after the end of the intervention.

      Impact evaluations require specific methodologies and precise and systematic technical steps in order to elaborate valid and verified conclusions and recommendations. The budget for conducting an impact evaluation can also be high, requiring detailed surveys of broad population samples and control groups, and the exercise can be time-consuming. It is important to make a clear distinction between an impact analysis or expected impact analysis, which can be found in several types of evaluations using the evaluation criterion of impact, and an impact evaluation or rigorous impact evaluation, which calls for relevant and strict methodologies and statistical approaches to measure impact.26

       

      🢂 A basic principle to apply before choosing an impact evaluation is that the benefits of the evaluation should outweigh its costs and limitations.

      • 26The term rigorous evaluation was used by impact evaluation specialists who considered that the impact evaluation methodologies commonly used are not sufficiently rigorous and started to call for such a distinction.
      RESOURCES

      External resources

      International Fund for Agricultural Development (IFAD)

          2015 Chapter 8: Impact evaluation. In: Evaluation Manual. Second edition. Rome, pp. 96–100.

      Public Health England, Government of the United Kingdom

          2018 Guidance: Outcome Evaluation. 7 August.

      Rogers, P.

          2014 Overview of impact evaluation. Methodological Briefs: Impact Evaluation 1. UNICEF, Florence.

      United Nations Evaluation Group (UNEG)

          2013 Impact Evaluation in UN Agency Evaluation Systems: Guidance on Selection, Planning and Management. Guidance Document.

      For more information regarding data collection methodology and analysis for impact evaluation, see Chapter 4: Methodologies for data collection and analysis for monitoring and evaluation.

      INFORMATION - Utilization-focused evaluation

      One approach to evaluation is the utilization-focused evaluation (U-FE), developed by Michael Q. Patton. The approach does not advocate for any particular type of evaluation or evaluation methodology, but rather can be applied regardless of the type or methods selected for the evaluation.

      The U-FE “begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use…Therefore, the focus in utilization-focused evaluation is on intended use by intended users”.27

      In other words, the focus is on how useful the evaluation will be to those who will use it. U-FE encourages evaluators to design and conduct evaluations with this core principle in mind, ensuring that each decision and action is taken in a way that encourages use. It involves the intended evaluation users throughout, requiring a close collaborative relationship between the evaluator, the evaluation manager and the intended evaluation users, based on the premise that “intended users are more likely to use evaluations if they understand and feel ownership over the evaluation process and findings.”28

       

      🢂 A 17-step checklist has been developed in order to facilitate the implementation of U-FE.

      RESOURCES

      Patton, M.Q.

          2008   Utilization-Focused Evaluation. Fourth edition. SAGE Publications, Thousand Oaks.

          2012    Essentials of Utilization-Focused Evaluation. First edition. SAGE Publications, Thousand Oaks.

          2015    Evaluation in the field: The need for site visit standards. American Journal of Evaluation, 36(4):444–460.

      Ramírez, R. and D. Brodhead

          2013    Utilization Focused Evaluation: A primer for evaluators. Southbound, Penang.

      Evaluation criteria

      Evaluation uses a set of criteria for the assessment of an intervention. Evaluation criteria are the standards against which an intervention is assessed. While several sets of criteria exist, IOM primarily uses two established references for evaluation criteria: (a) the OECD/DAC criteria, which had originally been developed for development-orientated interventions and were adjusted in December 2019 to also be relevant for humanitarian interventions; and (b) the Active Learning Network for Accountability and Performance (ALNAP) criteria, which were developed for humanitarian interventions.

      Table 5.1. Selecting evaluation criteria

      Development intervention
      Description: Development interventions focus on responding to ongoing structural issues, particularly systemic poverty, that may hinder socioeconomic and institutional development in a given context.29
      Evaluation criteria: OECD/DAC criteria
      • Relevance
      • Coherence31
      • Efficiency
      • Effectiveness
      • Impact
      • Sustainability
      🢂 ALNAP criteria may also be applied to development interventions, where appropriate.

      Humanitarian intervention
      Description: Humanitarian interventions focus on saving lives, alleviating suffering and maintaining human dignity during and after human-induced crises and natural disasters, as well as preventing and preparing for them.30
      Evaluation criteria: ALNAP criteria
      • Appropriateness
      • Effectiveness
      • Efficiency
      • Impact
      • Coherence
      • Coverage
      • Coordination
      • Connectedness
      🢂 The revised OECD/DAC criteria may also be applied to humanitarian interventions, where appropriate.
      INFORMATION

      Evaluation criteria are used to help identify key questions that should be answered during the evaluation. Evaluation questions should be targeted to what is needed and relevant to the evaluation commissioner’s requirements.

      The OECD/DAC criteria are commonly used in the evaluation community and were updated and adjusted in 2019, including the addition of a new criterion, “Coherence”. A table reflecting those changes is provided.

      INFORMATION

      The OECD/DAC underscores that the criteria it outlines, and their respective definitions, should be understood within a broader context and be read together with its own, as well as other, standards and guidelines for conducting evaluations.

      The OECD/DAC prefaces its criteria with the following two principles of use.

      Principle one

      The criteria should be applied thoughtfully to support high-quality, useful evaluation. They should be contextualized – understood in the context of the individual evaluation, the intervention being evaluated and the stakeholders involved. The evaluation questions (what you are trying to find out) and what you intend to do with the answers should inform how the criteria are specifically interpreted and analysed.

       

      Principle two

      Use of the criteria depends on the purpose of the evaluation. The criteria should not be applied mechanistically. Instead, they should be covered according to the needs of the relevant stakeholders and the context of the evaluation. More or less time and resources may be devoted to the evaluative analysis for each criterion depending on the evaluation purpose. Data availability, resource constraints, timing and methodological considerations may also influence how (and whether) a particular criterion is covered.32

      In addition to the updated definitions, sample evaluation questions related to each criterion are also included.

       

      OECD/DAC and ALNAP evaluation criteria

      For each criterion below, the definition is presented first, followed by sample evaluation questions.
      Relevance (OECD/DAC)

      Relevance is “[t]he extent to which the intervention objectives and design respond to beneficiaries’, global, country, and partner/institution needs, policies, and priorities; and continue to do so if circumstances change.

      Note: “Respond to” means that the objectives and design of the intervention are sensitive to the economic, environmental, equity, social, political economy and capacity conditions in which it takes place. “Partner/institution” includes government (national, regional, local), civil society organizations, private entities and international bodies involved in funding, implementing and/or overseeing the intervention. Relevance assessment involves looking at differences and trade-offs between different priorities or needs. It requires analysing any changes in the context to assess the extent to which the intervention can be (or has been) adapted to remain relevant.”33

      • Do the intervention’s expected outcomes and outputs remain valid and pertinent either as originally planned or as subsequently modified?
      • Are the project activities and outputs consistent with the intended outcomes and objective?
      • Do the project activities and outputs take into account relevant policies, guidelines and beneficiary needs?
      • Does the project still respond to the needs of the other target groups/stakeholders?
      • Is the intervention well-designed (results matrix, Theory of Change (ToC) and risk analysis in particular) to address needs and priorities?
      • Is the project aligned with and supportive of IOM national, regional and/or global strategies?
      • Is the project aligned with and supportive of national strategies?
      • Is the project in line with donor priorities?
      Appropriateness (ALNAP) The analysis of appropriateness examines “[t]he extent to which humanitarian activities are tailored to local needs, increasing ownership, accountability and cost-effectiveness accordingly.”34
      • To what extent were tools and technologies used adapted to the local context?
      • To what extent were local stakeholders and beneficiaries consulted and involved in the implementation of activities?
      • To what extent were the delivered supplies adapted to local needs?
      Coherence (OECD/DAC, newly added in 2019 and ALNAP)

      Within OECD/DAC, coherence looks at “[t]he compatibility of the intervention with other interventions in a country, sector or institution.

      Note: The extent to which other interventions (particularly policies) support or undermine the intervention, and vice versa. Includes internal coherence and external coherence: Internal coherence addresses the synergies and interlinkages between the intervention and other interventions carried out by the same institution/government, as well as the consistency of the intervention with the relevant international norms and standards to which that institution/government adheres. External coherence considers the consistency of the intervention with other actors’ interventions in the same context. This includes complementarity, harmonization and coordination with others, and the extent to which the intervention is adding value while avoiding duplication of effort.”35

      ALNAP also uses the criterion of “coherence”.

      Within ALNAP, coherence in this context refers to “[t]he extent to which security, developmental, trade and military policies as well as humanitarian policies, are consistent and take into account humanitarian and human rights considerations”.36

      • Do synergies exist with other interventions carried out by IOM as well as intervention partners?
      • To what extent do the other implemented interventions support or undermine the intervention?
      • To what extent is the intervention consistent with international norms and standards to be applied to the existing context?
      • To what extent is the intervention consistent with other actors’ interventions in the same context?
      • To what extent does the intervention add value/avoid duplication in the given context?
      • Are security, developmental, trade and military policies including humanitarian components consistent?
      • To what extent are these policies concretely applied during interventions, taking into account humanitarian and human rights considerations?
      Effectiveness (OECD/DAC and ALNAP)

      In the OECD/DAC, effectiveness considers “[t]he extent to which the intervention achieved, or is expected to achieve, its objectives, and its results, including any differential results across groups.

      Note: Analysis of effectiveness involves taking account of the relative importance of the objectives or results.”37

      ALNAP also uses this criterion similarly. ALNAP defines effectiveness as “[t]he extent to which an activity achieves its purpose, or whether this can be expected to happen on the basis of the outputs.”38

      • To what extent did the intervention achieve its objectives, including the timely delivery of relief assistance?
      • Have the outputs and outcomes been achieved in accordance with the stated plans?
      • Are the target beneficiaries being reached as expected?
      • Are the target beneficiaries satisfied with the services provided?
      • What are the major factors influencing the achievement of the intervention’s desired outcomes?
      • To what extent has the project adapted or is able to adapt to changing external conditions in order to ensure project outcomes?
      Coverage (ALNAP)

      Coverage is defined as “[t]he extent to which major population groups facing life-threatening suffering were reached by humanitarian action”.39

       

      🢂 Coverage can often be included in the analysis of effectiveness.

      • Who were the major groups in need of humanitarian assistance? Of these groups, who were provided with humanitarian assistance?
      • Are the assistance and protection proportionate to their needs and devoid of extraneous political agendas?
      Coordination (ALNAP)

      Coordination is “[t]he extent to which the interventions of different actors are harmonised with each other, promote synergy, and avoid gaps, duplication and resource conflicts”.40

       

      🢂 Coordination can often be included in the analysis of effectiveness.

      • Are the different actors involved in emergency response coordination?
      • Are the points of view of other actors in the overall system taken into account in the intervention strategy?
      Efficiency (OECD/DAC and ALNAP)

      Efficiency within the OECD/DAC considers “[t]he extent to which the intervention delivers, or is likely to deliver, results in an economic and timely way.

      Note: “Economic” is the conversion of inputs (funds, expertise, natural resources, time, etc.) into outputs, outcomes and impacts, in the most cost-effective way possible, as compared to feasible alternatives in the context. “Timely” delivery is within the intended time frame, or a time frame reasonably adjusted to the demands of the evolving context. This may include assessing operational efficiency (how well the intervention was managed).”41

      ALNAP also includes the criterion of efficiency and considers it to look at “[t]he outputs – qualitative and quantitative – achieved as a result of inputs”.42

      • Were the project activities undertaken and were the outputs delivered on time?
      • Was the project implemented in the most efficient way compared to alternative means of implementation?
      • How well are the resources (funds, expertise and time) being converted into results?
      • To what extent are disbursements/ provision of inputs for activities implemented as scheduled?
      Impact (OECD/DAC and ALNAP)

      Impact within OECD/DAC looks at “the extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, higher-level effects.

      Note: Impact addresses the ultimate significance and potentially transformative effects of the intervention. It seeks to identify social, environmental and economic effects of the intervention that are longer term or broader in scope than those already captured under the effectiveness criterion. Beyond the immediate results, this criterion seeks to capture the indirect, secondary and potential consequences of the intervention. It does so by examining the holistic and enduring changes in systems or norms, and potential effects on people’s well-being, human rights, gender equality, and the environment.”43

      The ALNAP criterion of impact looks at “the wider effects of the project – social, economic, technical and environmental – on individuals, gender and age groups, communities and institutions.” Similar to the OECD/DAC criterion, “[i]mpacts can be intended and unintended, positive and negative, macro (sector) and micro (household).”44

      • What significant change(s) does the intervention bring or is expected to bring, whether positive or negative, intended or unintended?
      • Does the impact come from the intervention, from external factors or from both?
      • Did the intervention take timely measures for mitigating any unplanned negative impacts?
      Sustainability (OECD/DAC)

      Sustainability refers to “the extent to which the net benefits of the intervention continue, or are likely to continue.

      Note: Includes an examination of the financial, economic, social, environmental, and institutional capacities of the systems needed to sustain net benefits over time. Involves analyses of resilience, risks and potential trade-offs. Depending on the timing of the evaluation, this may involve analysing the actual flow of net benefits or estimating the likelihood of net benefits continuing over the medium and long-term.”45

      • Are structures, resources and processes in place to ensure the benefits generated by the project are continued after the external support ceases?
      • Is the project supported by local institutions and well-integrated into local social and cultural structures?
      • Do the partners benefiting from the intervention have adequate capacities (technical, financial, managerial) for ensuring that the benefits are retained in the long run, and are they committed to do so?
      • To what extent have target groups, and possibly other relevant interest groups and stakeholders, been involved in discussions about sustainability?
      • Do the target groups have any plans to continue making use of the services/ products produced?
      Connectedness (ALNAP)

      Connectedness looks at “[t]he extent to which activities of a short-term emergency nature are carried out in a context that takes longer-term and interconnected problems into account”.46

       

      🢂 Adds a humanitarian dimension to sustainability.

      • To what extent are the project activities connected to longer-term development concerns?
      • What steps have been taken to promote retention of gains from these interventions?

       

      The focus on given criteria may change at different stages of the intervention life cycle. In an ex-ante evaluation, the focus could be on relevance, while for a midterm evaluation, it could shift towards effectiveness and efficiency so that recommendations for improvement can be made during implementation. By the end of the life cycle, final and ex-post evaluations are better able to assess the overall performance, sustainability and impact of the intervention. However, the selection of evaluation criteria must always take into account the specific requirements of the evaluation and the interests of its end users and of other stakeholders.

      The evaluation commissioner and/or manager, in consultation with relevant stakeholders, select the evaluation criteria to be used and the questions to be answered. The criteria selected must clearly be spelled out in the ToR and properly reflect the purpose and scope of the evaluation.

      RESOURCES

      Buchanan-Smith, M., J. Cosgrave and A. Warner

          2016 Evaluation of Humanitarian Action Guide. Active Learning Network for Accountability and Performance/Overseas Development Institute (ALNAP/ODI), London.

      Humanitarian Coalition

          n.d. From humanitarian to development aid.

      Organisation for Economic Co-operation and Development (OECD)

          2019 Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use. OECD/DAC Network on Development Evaluation.

          2021 Applying Evaluation Criteria Thoughtfully. OECD Publishing, Paris.

          n.d. Evaluation criteria. OECD/DAC criteria for evaluating development assistance.

        Prepare evaluation terms of reference


        The evaluation ToR are a key framing and planning tool for managing an evaluation, as they provide clear and detailed specifications on the objectives and scope of the evaluation, as well as the roles and responsibilities of the parties involved, such as the evaluation manager, the evaluator(s), the evaluation users and/or possible partners. They also provide information on the timing, methodology and budget of the evaluation. Poorly developed ToR can cause confusion and result in expectations and a focus that differ between the parties involved. Having a clear understanding of the different evaluation types and criteria outlined in the previous sections will help formulate the evaluation ToR. The ToR are part of the contractual agreement between IOM and the contracted evaluators, as they outline evaluator obligations at all stages of the process, as well as the expectations of the evaluation commissioner and/or manager.

        INFORMATION

        In IOM, it is almost always the organization itself that commissions the evaluation, but sometimes donors may stipulate that they will conduct an evaluation at their level. The entity responsible for commissioning the evaluation is usually responsible for preparing the evaluation ToR. In the case of jointly commissioned evaluations, such responsibilities can be shared between participating entities. In all cases, IOM, its partners, when relevant, and the donor should review and agree on the ToR prior to their finalization.

        Figure 5.7.

         

         

         

         

        Evaluation context

        The evaluation context section provides a summary description of the political, economic, social, environmental and/or legal contexts in which the intervention is being implemented.

        The section also includes a brief description of IOM and a brief summary of its history in the country, including in relation to the specific thematic areas covered, as well as a description of the intervention being evaluated, including its objectives and intended results.

         

        Evaluation purpose/objective

        The evaluation purpose/objective section explains why the evaluation is being conducted and the main objective of the evaluation itself. In this section, the intended audience for the evaluation and how the evaluation will be used are also included. These are important elements that provide information on its utilization for both accountability and learning purposes, as well as on who may be concerned by its recommendations.

         

         

         

         

        Evaluation scope

        An evaluation scope specifies what will be covered by the evaluation, including, for instance, the components or phases of the intervention that will be assessed, the period of the intervention to be covered (relevant phases or given years), any other intervention(s) that should also be considered or the geographical area to be covered.

        This section can also include expectations on recommendations, good practices and lessons learned that could be derived from the analysis.

        If there are specific exclusions from the evaluation, such as certain geographical areas or security limitations, these should also be stated in the evaluation scope.

        Evaluation criteria

        The evaluation criteria are those described in the previous section of this chapter. The criteria selected for the evaluation should be listed clearly in this section of the ToR.

        List of evaluation questions

        Evaluation questions should be developed based on the evaluation criteria selected and should be grouped by criterion, as illustrated below.
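        For illustration purposes only, questions grouped under the criteria they relate to could read as follows (the wording below is a hypothetical example, not prescribed language):

        Relevance: To what extent do the intervention objectives respond to the needs of the target population?
        Effectiveness: To what extent have the intended outputs and outcomes been achieved?
        Efficiency: Have resources (funds, expertise, time) been converted to results in a cost-efficient manner?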

         

         

         

        Methodology section

        This section describes the type of data collection and analysis methods to be used in the evaluation and informs the evaluator accordingly. More precise information on the methodology can be proposed by evaluators in the proposals submitted during the selection process or during the inception phase.

        For more detailed information related to evaluation methodology, please see chapter 4 of the IOM Monitoring and Evaluation Guidelines.

         

        Ethics, norms and standards for evaluation

         

        Include the following statement at the end of the ToR: IOM abides by the Norms and Standards of UNEG and expects all evaluation stakeholders and the consultant(s) to be familiar with the UNEG Ethical Guidelines for Evaluation, as well as the UNEG Code of Conduct for Evaluation in the UN System.

         

         

         

         

         

        Cross-cutting themes

        The coverage of cross-cutting themes should be explained in the ToR within the evaluation scope section, as well as either in a specific subsection of the evaluation questions or through specific questions under the relevant criteria. In addition, in the evaluation methodology, evaluators could be asked to consider evaluation approaches and methods that properly address cross-cutting issues (for instance, collecting and presenting data disaggregated by sex, geographic location and income).

        Evaluation ToR should attempt to cover all cross-cutting themes (in IOM, mainly gender, human rights, environment and accountability to affected populations in emergencies) or explain why certain themes are not covered.

        Annex 5.3. Incorporating cross-cutting themes at IOM provides a detailed description of the cross-cutting themes used in IOM, as well as guiding questions for incorporating cross-cutting themes into M&E.

         

         

         

         

        Budget

        This section specifies the resources that are available to conduct an evaluation, including in-kind support provided by IOM, such as transportation and translation.

        The section also outlines which costs related to the evaluation will be covered by IOM and which are to be covered by the consultant or service provider. These include the consultancy fee, travel, daily subsistence allowance, as well as any data collection or technical costs to be considered.

        For more information regarding budgeting for evaluation, see Annex 5.1. Budgeting for evaluation.

         

         

         

         

        Specification of roles

        This section specifies the roles of those involved in the evaluation to inform all parties of the tasks they need to accomplish and what is expected of them. Examples of this include providing general information about project management and relevant focal points, such as those tasked with facilitating access to project-related documentation or setting up meetings and collecting data from project partners.

        An evaluation could require the set-up of a committee, such as a reference group, a management committee or a learning group. If this takes place, it would need to be highlighted here. These are particularly useful, and recommended, for complex evaluations (multi-country, multiprogramme), with multiple stakeholders, and can ensure a participatory approach throughout the evaluation.

         

        Time schedule

        An indicative time schedule sets out, in chronological order, the dates by when tasks need to be accomplished or products handed over, the amount of time allocated for the completion of tasks and products and who is responsible for the completion of each task or product. It can also include the dates of field visits or surveys to be conducted.

         

         

         

        Deliverable section

        The section specifies the products to be generated at various stages of the evaluation process, as well as who will be responsible for each deliverable (considering, however, that it will be mainly related to the work of the evaluator).

        The list of deliverables is likely to include the evaluation matrix (see Information box for more details) and/or inception report, the draft evaluation report to be submitted for comments and the final evaluation report and evaluation brief. It can also include information on an initial presentation of findings or workshop for presenting the final report to main stakeholders. For further information, please see subsection on Evaluation deliverables.

        The use of the terms of reference checklist developed by the Central Evaluation Unit is strongly recommended during the preparation phase and before finalizing the terms of reference. This checklist outlines requirements for a comprehensive terms of reference and for meeting quality standards. Three levels can be considered: all elements included, elements partially included, and elements not included. If elements listed in the checklist are missing, corrective measures, such as adding the necessary information, should be taken before publication.

        TIP

        ToR can be shared with the regional M&E officers or the Central Evaluation Unit for quality checking prior to being finalized.

          Select evaluator(s)

          5.2.3.

           

          As part of the planning phase, once the purpose and evaluability of a planned evaluation have been defined and the ToR for the evaluation have been elaborated, a selection process for evaluator(s) must take place, in line with what has already been agreed in the project proposal and/or ToR.

           

          Internal versus external evaluators

          The following table provides some tips on the benefits of using internal or external evaluator(s):

          Internal evaluator(s):
          • Familiar with the context and object of the study.
          • May lead to greater acceptability of the findings by IOM colleagues.
          • Less expensive.
          • Well placed to understand IOM, its mandate and operations.
          • Can continue building on the evaluation over time with the utilization of evaluation results.
          • Can learn from the evaluation experience and apply it to their own work.

          External evaluator(s):
          • Could ensure the inclusion of independent and external views in the analysis.
          • Can bring new perspectives and lessons learned from similar non-IOM projects that have been evaluated.
          • Generally perceived to be unbiased, as not influenced by internal factors, and with relevant evaluation expertise.
          • Could be more familiar with the ethical and independence principles to be applied in the conduct of an evaluation.

          Note: Adapted from Module 6 of IOM Project Handbook, p. 440 (Internal link only).

          INFORMATION

          A meta-evaluation covering IOM evaluations conducted from 2017 to 2019 indicated that the level of quality of the evaluation did not differ between internal and external evaluations.

          A mixed team of internal and external evaluators can also be used, with roles and responsibilities allocated according to the respective strengths highlighted above. An external evaluator could, for instance, draw on the internal evaluator’s knowledge of the Organization when preparing the inception report, while focusing on the methodology given his/her evaluation expertise.

           

          Considerations for selecting an internal versus external evaluator

          Based on the benefits listed above, the following considerations are useful for selecting an internal evaluator versus an external evaluator:

          • Budget availability;
          • Understanding of the thematic area and context;
          • Technical competencies required;
          • Existing workload of the IOM staff to be approached as internal evaluator;
          • Expertise in data collection and analysis methodology.

           

          The expected duration, timing and complexity of the evaluation may also affect the choice. An evaluation may require significant time and travel to various locations; this may have implications for using an internal evaluator, who would still need to perform their regular tasks while conducting the evaluation and may only be available for a shorter field visit. The supervisor of the internal evaluator may also object to releasing the staff member for a longer absence. To complete their evaluation, internal evaluators may need to work on an evaluation report over an average period of three months, while fulfilling the responsibilities of their full-time position.

          The question of timing and the constraints related to the selection of an evaluator therefore need to be considered, and the recruitment process initiated well in advance of the planned start date of the evaluation exercise. Recruiting an external evaluator requires IOM to issue a call for evaluator(s), organize a selection process and have the contract approved and signed according to relevant procedures, which may also take some time. Such procedures are not needed for an internal evaluator, but the availability of an internal evaluator needs to be negotiated with the potential evaluator’s supervisor within IOM, and more time may be required to complete the exercise given the internal staff member’s ongoing tasks, as specified above.

           

          Selecting evaluator(s)

          In parallel to the decision to select an internal or external evaluator, the use of multiple evaluators instead of a single evaluator can also be considered. For instance, a team may be required if specific expertise is needed to analyse the performance of an intervention (such as an engineer to review a construction-related programme component or a health expert to review the response to a specific disease within a programme) or if additional national evaluators can bring added value to the exercise (such as in the case of complex interventions that require good knowledge of the national context, stakeholders or politics). The evaluation commissioner, the evaluation manager or the management committee can determine which option is best suited to the task, include it in the ToR and adjust the selection process accordingly.

          The main points for consideration in the choice between a single evaluator or a team can be summarized as follows:

              (a)  Complexity of the evaluation: Should the evaluation require significant data collection, in-depth and multiple field visits, a multi-country scope, a combination of specific evaluation methods, multiple languages or a national evaluator perspective, an evaluation team may be best suited to conduct the evaluation.

              (b)  Duration of the evaluation: If the evaluation time frame is short, it may be better to consider a team, where the members can work together and complete the evaluation within a shorter time frame.

              (c)  Multiple areas of expertise: If an evaluation requires different areas of very specific expertise that may not be found within one evaluator, it may be necessary to consider selecting an evaluation team that can meet the requirements through its various members.

           

          Selection process

          The following section discusses the selection process for hiring an external evaluator or evaluators; this can be done mainly by applying one of the following: (a) recruitment of an individual consultant; or (b) engagement of a consulting firm or service provider. This process applies to external evaluator(s) only, as internal evaluators in IOM have been pre-identified at the global and regional levels and may be engaged through direct negotiation with the identified evaluator’s supervisor.

          TIP - Looking for an internal evaluator

          IOM offices interested in an internal evaluation should contact their designated regional M&E officer after developing the ToR for the evaluation. The regional M&E officer will help to identify an available evaluator based on the existing global or regional roster.

          Selecting individual consultants

          For the recruitment of a single evaluator, a call for evaluator(s) is issued, including the evaluation ToR (see Annex 5.4. Evaluation terms of reference template) and the following additional elements:

          • Requirements: This is the list of competencies required of the individual evaluator.
          • Instructions for the submission of the application: This should include what additional documents are expected to be submitted as part of the application, such as previous evaluation reports. It should also include the deadline for the submission of the application and the contact details for the person to whom the application should be sent.
          INFORMATION

          The Central Evaluation Unit and IOM regional M&E officers maintain a roster of external consultants and service providers (Internal link only) with detailed information on expertise, languages, specializations and other relevant details.47 The call for evaluator(s) can be shared via the internal IOM SharePoint or through existing listservs, such as MandENews, UNEG, XCeval, International Program for Development Evaluation Training and ALNAP. These can be accessed publicly or through the regional M&E officers or the Central Evaluation Unit. Evaluators selected from the roster on the basis of identified needs can also be contacted and invited to submit a proposal if interested.

          • 47Evaluation and Monitoring Portal, available internally to IOM staff via the IOM intranet.

          Once the applications are received, the evaluation manager/committee assesses them and shortlists applicants. IOM has developed a scorecard for the assessment of applications for evaluations (see Annex 5.6), which is a helpful tool in the selection process. Once the selection is completed, the evaluation manager and/or programme manager prepare a contract, in accordance with IOM instructions on hiring consultant(s), namely IN/84 Guidance for Selection and Employment of Consultants (Internal link only).

           

          Selecting a consulting firm

          For the selection of an evaluation team, a request for proposal is issued in accordance with IOM procurement instructions as per the IOM Procurement Manual (Internal link only). A template for the Request for Proposal (RFP) for evaluations is available here in the event that a consulting firm is needed (Internal link only).

          INFORMATION

          IOM staff are strongly encouraged to determine in advance whether a single evaluator or a team may be required for an evaluation. In the event that this cannot be done in advance, then staff should reach out to their respective regional M&E officer or the Central Evaluation Unit for further information on selecting evaluators and processes that could help them.

          RESOURCES

          Annexes

          IOM resources

              2006 Guidelines to the Differences between Individual and Service Provider Contracts (IN/73) (Internal link only).

              2016a IOM Procurement Manual: Procurement of Goods, Works and Services (IN/168 rev. 2) (Internal link only).

              2021a Guidance for Selection and Employment of Consultants (IN/84) (Internal link only)

              2021b Changes to Procurement, Implementing Partners Selection and Related Contracting Procedures (IN/284) (Internal link only).

              n.d.b Evaluation and Monitoring Portal (Internal link only).

          • Ensure that clauses related to data protection and confidentiality, as well as to protection from sexual exploitation and abuse (PSEA), are included in contracts.

           

          Attention should also be drawn to the following documents, which should be provided to consultants where necessary:

          IOM resources

              2010    IOM Data Protection Manual. Geneva.

          Other resources

          United Nations Evaluation Group (UNEG)

              2008   UNEG Code of Conduct for Evaluation in the UN System. Foundation Document, UNEG/FN/CoC(2008).

              2010a UNEG Quality Checklist for Evaluation Terms of Reference and Inception Reports. Guidance Document, UNEG/G(2010)1.

              2016   Norms and Standards for Evaluation. New York.

              2020   UNEG Ethical Guidelines for Evaluation.

            Undertaking evaluation

            5.3.

            Once the evaluator(s) is/are commissioned, the evaluation work itself can start and the evaluation manager has three main tasks to perform:

                (a) Supervising the evaluation implementation and workplan.

                (b) Providing feedback on the activities conducted for the development of the report and on the draft report itself.

                (c) Ensuring quality requirements are understood and quality review is monitored.

            The evaluator(s) will complete the evaluation during this phase. This section of the chapter, therefore, also provides information on the expected deliverables that the evaluator(s) should complete during the course of the evaluation. This is summarized in the section of this chapter, Evaluation deliverables.

            Supervise evaluation implementation and workplan

            5.3.1.

             

            The process of overseeing the implementation of the evaluation implies not only supervising the evaluator(s), but also managing and organizing the collection of documents and other materials for the evaluation, organizing the field visits, interviews and written surveys, as well as maintaining communication with key stakeholders.

            TIP

            When organizing evaluation activities, evaluation managers should keep in mind the demands made of stakeholders, beneficiaries and affected populations with regard to the time, resources and effort that they must invest to provide evaluation-related data. In addition to obtaining informed consent (see chapter 2: Norms, standards and management for monitoring and evaluation), be sure to inform all relevant parties from whom data will be collected of what will be asked of them in advance and in an organized manner. Keep in mind other ongoing monitoring and implementation-related activities that may make similar demands to avoid overburdening key stakeholders.

            At the outset of this phase, the evaluation manager, evaluation commissioner, evaluation management committee (if present) and selected evaluator(s) should jointly review the ToR to ensure that there are no comments, questions or key points that need to be renegotiated. It is also standard practice to have a management meeting at the beginning of the evaluation process to ensure that the evaluation manager, evaluator(s) and stakeholders (if relevant) all share a common understanding of the evaluation process and various roles and responsibilities. Furthermore, evaluators should be requested to develop an inception report. This will provide insight into their understanding of the evaluation ToR, as well as useful information on the way they will conduct the evaluation (for further information on the inception report, see the section, Evaluation deliverables). Any changes that result from reviewing the inception report should be well documented and reflected in the relevant documents and/or ToR. At this stage, the evaluation manager should have already provided the evaluator(s) with the key documents to start the evaluation, and additional resources can be shared when the final agreement on the work to complete is reached.

            INFORMATION

            In addition to intervention-specific documents, in order to support evaluators in their process and ensure that they abide by the expectations for all IOM evaluations, evaluation managers should provide certain key documents:

               (a) IOM Guidance for Addressing Gender in Evaluations: This document provides practical guidance for ensuring that gender is properly addressed in evaluation;

               (b) IOM Gender and Evaluation Tip Sheet: This tip sheet provides a short guide to help staff involved in managing and conducting evaluations develop gender-sensitive M&E scopes of work, methodologies and findings. For more detailed guidance, including examples of gender-sensitive criteria, indicators and findings, refer to the IOM Guidance for Addressing Gender in Evaluations mentioned above;

              (c) A copy of this chapter (chapter 5) of the IOM Monitoring and Evaluation Guidelines, with a particular emphasis on the Evaluation deliverables section, so that they understand the components expected;

               (d) A copy of the IOM templates for Inception reports, Evaluation matrix and Evaluation reports that can serve as a guide;

               (e) Links to the quality checklist tools from UNEG and the Guidance on Quality Management of IOM Evaluations, so that they understand how evaluations will be reviewed;

                  (f) Annex 5.10. Evaluation brief template and guidance.

            The clear definition of the roles and responsibilities of all parties directly involved in the evaluation is also essential for a sound implementation, with each individual having tasks to complete and deadlines to respect in order to ensure quality.

              Evaluation deliverables

              5.3.2.

               

              Evaluators are expected to provide several key deliverables, which should be clearly stated in the ToR. Each of these deliverables is outlined below, with key information concerning its content and potential structure.

               

               

              Inception report

               

              The inception report is the first main deliverable provided by the evaluator. It should be written following an initial document review and meetings with the evaluation manager or management committee. This document reveals the evaluator(s)’ understanding of the evaluation exercise, how each evaluation question will be answered and the intended data collection methods. The inception report template is available in Annex 5.7.

               

              Inception reports should always be requested in an evaluation ToR for external consultants. In the case of an internal evaluation, an evaluation matrix is sufficient, as it will help to frame the internal evaluator’s understanding of the exercise.

               

              One key element of the inception report is the evaluation matrix. An evaluation matrix is a tool for guiding the evaluation by specifying the following: (a) criteria being assessed by the evaluation; (b) questions and subquestions that will be answered to assess each criterion; (c) indicators to be used to guide the assessment; (d) sources of data; and (e) data collection tools. It can clearly represent how the evaluation will be conducted, although it does not replace the need for a full inception report. For examples of evaluation matrices, see Annex 5.5. IOM sample evaluation matrices for a development-oriented project and a humanitarian project.
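              For illustration purposes only, a single row of a hypothetical evaluation matrix could read as follows (the criterion, question, indicator, sources and tools below are invented examples, not prescribed content):

              Criterion: Effectiveness. Evaluation question: To what extent were the planned outputs delivered? Indicator: Percentage of planned training sessions completed. Data sources: Project reports; training attendance records. Data collection tools: Document review; key informant interviews.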

              After the draft inception report is finalized, it is mandatory for the evaluator to use the Quality control tool for inception reports to assess the quality of the report before submitting both the draft and final versions to the evaluation manager. This tool is designed to ensure that all quality requirements are met according to defined ratings, indicating the extent to which each listed element aligns with the terms of reference.

              Progress reports

               

              It is encouraged that evaluator(s) regularly report on the progress made while conducting the evaluation, so that the evaluation manager or committee can periodically monitor how well data collection is going and whether the methodologies selected for the evaluation are being properly used. The purpose of this is to ensure that, when problems are encountered in the data collection process that could adversely affect the quality of the evaluation (such as the cancellation of scheduled meetings, unmet target numbers of interview or survey respondents or basic documents not properly reviewed), corrective measures can be introduced in a timely manner. Progress reports do not need to be lengthy and can be provided in an email or during regular meetings. Furthermore, the need for progress reports may vary depending on the duration and complexity of the evaluation.

               

              The evaluation management should ensure that suitable logistical arrangements are made for data collection. If circumstances outside of IOM’s or the evaluator’s control occur (such as weather, social or political events that prevent some site visits), the evaluator(s) and the evaluation management should examine whether these circumstances will affect the quality and credibility of the exercise and, if so, discuss relevant methodological and practical alternatives.

              Debrief of initial findings

               

              Initial findings should be presented at the end of the field visit or the data collection phase, providing an opportunity for relevant parties – such as government stakeholders, donors, beneficiaries or implementing partners – to identify any misinterpretation or factual mistake at an early stage, before report writing. This can be done in the form of a PowerPoint presentation or a short report; it should be added as a deliverable if expected.

              Evaluation report

               

              The evaluation report should first be provided in draft format to allow stakeholders to provide comments (see section, Provide feedback on all phases of the evaluation). After the evaluator receives the consolidated feedback, he/she should revise the report as necessary and submit the final version.

               

              🢂 Final evaluation reports are to be written in one of IOM’s official languages. If this is not possible, a summary of the findings and recommendations should be prepared in one of IOM’s official languages.

              Although IOM does not oblige all evaluators to use the same reporting format, evaluator(s) are expected to address all the following components:

               

              • Title page, including the title of the evaluation, date of completion (such as the date that the draft report is submitted) and the name of the evaluator(s) or evaluation firm(s);
              • Executive summary, including an explanation of the project background, overview of evaluation background, concise description of the evaluation methodology, summary of all evaluation findings, summary of all conclusions, summary of all lessons learned and good practices and a summary of all recommendations;
              • Project background, including a brief overview of contextual factors, clear and relevant description of key stakeholders, description of intervention logic and funding arrangements;
              • Evaluation background, including an explanation of the purpose of the evaluation, description of evaluation scope and list of evaluation clients and main audience for the report;
              • Evaluation approach and methodology, including a statement of the evaluation approach, evaluation questions and criteria (providing a justification for their use or lack thereof), methodology used, inclusion of cross-cutting themes, stakeholder participation, limitations of the evaluation and description of evaluation norms and standards;
              • Evaluation findings per criteria that are complete (all questions are addressed and findings aligned with purpose, questions and approach), robust (findings are justified by evidence and data disaggregated by key variables), identify causal factors that led to accomplishments and failures and adequately address IOM cross-cutting themes;
              • Conclusions that are based on and clearly linked to the evidence presented in the Evaluation findings section and that are, to the extent possible, objective and clearly justified;
              • Recommendations that are clear and concise, based on the findings and/or conclusions of the report, relevant and actionable, and that identify the person responsible for their implementation;
              • Lessons learned that are relevant, specific to the context, targeting specific users and applicable;
              • Good practices that concisely capture the context from which they are derived and specify target users, are applicable and replicable and demonstrate a link to specific impacts that are realistic.

              It is on the basis of the report that quality assessment/assurance/control will take place (see this chapter’s section on how to ensure evaluation quality).

              🢂 More detailed guidance for each evaluation report component is provided in Annex 5.8. IOM evaluation report components template. A template for reporting is provided in Annex 5.9. IOM final evaluation report template.

              Evaluation brief

               

              An evaluation brief should be developed by the evaluators after the final report has been completed. A template for this, developed in Microsoft Publisher, will be provided by IOM. The brief provides a short overview of the evaluation, ensuring that conclusions, recommendations, lessons learned and good practices are provided. Guidance for the evaluation brief is provided in Annex 5.10. Evaluation brief template and guidance.

              Final presentation of the evaluation

               

              For some evaluations, a final presentation may be expected, once again providing an overview of the key elements of the evaluation with a strong focus on the findings, conclusions and recommendations. Other deliverables presenting the evaluation, such as a PowerPoint presentation or infographic, may also be requested from the evaluator. In the event that this kind of deliverable is anticipated, it should be clearly stated within the deliverables section of the evaluation ToR.

               

              Preliminary management response matrix

               

              Evaluator(s) should prepare a draft management response matrix by inserting the recommendations, as well as an indicative time frame or deadline for implementation. This draft matrix will then be shared with the evaluation manager, who will liaise with relevant IOM management and staff to complete the matrix. If a draft management response matrix is expected from the evaluator(s), its preparation should be agreed upon at the start of the evaluation as part of the evaluator’s deliverables.48

               

              For more information regarding the management response matrix, see this chapter’s section on Follow-up and using evaluation. A management response matrix template is available in the Central Evaluation Unit publication, Management Response and Follow-up on IOM Evaluation Recommendations.

                Provide feedback on all phases of the evaluation

                5.3.3.

                 

                Reviewing and providing feedback on the draft evaluation report is a critical step in the evaluation process. Involving the evaluation commissioner, manager (or management committee), as well as other key stakeholders in the process also ensures that all the intended evaluation users will receive the information that they need. If this is not undertaken properly, there is a risk that the evaluation may be discredited by its users once it is published. This step allows for a transparent and open process to review the evaluation prior to finalization.

                 

                INFORMATION - Involving key stakeholders in providing feedback

                Key stakeholders should have an opportunity to comment on the report, which is common with participatory approaches. If a reference group or other stakeholder engagement mechanism has been established for the purpose of the evaluation, their involvement in this process can guarantee broader participation in the feedback loop. External stakeholders can include partners, donors and beneficiaries. Internal IOM stakeholders can include CoMs, regional thematic specialists and other staff who have contributed to implementation (for instance, from other programmes that have influenced the implementation of the programme evaluated).

                When the draft report is provided by the evaluator(s), the evaluation manager should coordinate the comments and responses and consolidate all feedback to present it back to the evaluator(s) without delay. Feedback should focus on the technical aspects of the evaluation and factual evidence. Bear in mind that the evaluator is required to make factual corrections but is not required (and should not be requested) to revise findings, conclusions or recommendations in a manner not consistent with presented evidence, as this contravenes evaluation ethics.

                In case significant issues surface in the final stage of reporting, the evaluator and manager should reassess the process and develop a plan to address those identified issues. The challenges should be thoroughly assessed to determine if mistakes have been made and whether they can be corrected. All parties can also ensure that the recommendations in the report are acceptable and actionable.

                If the evaluation manager and evaluator(s) do not reach an agreement on the interpretation of data and/or on the conclusions and recommendations that flow from that interpretation, the evaluation manager can prepare a management opinion, highlighting the disagreements with justifications.49

                In general, the final report review process should not be another opportunity to provide new information for the evaluation, as relevant information should have been provided during the data collection and analysis phases. However, if new relevant information has just become available, or a recent or concurrent event has had an impact on the analysis or recommendations (as happened with the unexpected COVID-19 crisis), the evaluation manager should discuss it with the evaluator, and additional time can be allocated to incorporate the new data and information into the report or into an addendum (for instance, examining how COVID-19 affects the recommendations already made).

                INFORMATION

                Regional M&E officers and/or the Central Evaluation Unit can assist if there is a disagreement on the findings, conclusions and recommendations of an evaluation report.

                After the evaluator receives the consolidated feedback, she/he should revise the report as necessary, and submit the finalized version.

                  Ensure evaluation quality

                  5.3.4.

                   

                  Communication on the progress of the evaluation is key for guaranteeing quality and relevant reporting, and each party has a role to play, in particular at the level of the evaluation management and the evaluator(s). Maintaining quality standards for an evaluation is particularly important, as it also enhances the credibility and objectivity of the exercise.

                  IOM has developed a Guidance on Quality Management of IOM Evaluations to provide a common understanding of the quality and assurance standards for IOM evaluations. This guidance aims to establish consistent mechanisms to ensure the quality of outcomes, streamline the evaluation processes and promote a culture of evidence-based learning. It also addresses the need for a standardized approach, incorporating recommendations from various reviews and assessments conducted over the years.

                  Quality standards ensure that evaluations are conducted in line with the procedural and technical requirements, as well as with the evaluation norms and standards, applied in the organization. They also contribute to the provision of accurate and useful information and to the regular monitoring of the quality of evaluations.

                  Each evaluation actor can contribute to achieving quality standards by providing relevant inputs. Quality control is the primary responsibility of the evaluation manager, who should ensure that an evaluation is conducted in line with the IOM Evaluation Policy and Guidance, as well as any requirements and standards agreed upon with other stakeholders, for instance the intervention donor.51 The evaluation manager and evaluator(s) have the responsibility to guarantee conformity with established quality standards in carrying out activities at all stages of the evaluation process.

                   

                  Key roles and activities to ensure a high-quality evaluation52

                  The evaluation manager should:

                  • Ensure that the evaluation objectives are clear and that the methodologies and activities implemented by the evaluator(s) will contribute to reaching them;
                  • Maintain ownership of the evaluation by ensuring that the decision-making responsibility is retained and that decisions are made in a timely manner;
                  • Monitor the progress of the evaluation and provide relevant and timely feedback and guidance to the evaluator(s);
                  • Consider and discuss suggestions from evaluators of possible solutions, if problems arise;
                  • Discuss and ensure agreement on communication protocols, from the beginning, with all evaluation actors;
                  • Ensure evaluators, the evaluation commissioner and evaluation committees have full access to information from the beginning;
                  • Meet with evaluators, the evaluation steering committee and stakeholders to discuss draft reports and revisions;
                  • Approve the final report and organize a presentation of the evaluation findings for stakeholders;
                  • Provide a management response that addresses all recommendations for follow-up.

                   

                  The evaluator(s) should:

                  • Conduct the evaluation within the allotted time frame and budget;
                  • Ensure implementation of proper methodologies for conducting surveys and analysis of data/results;
                  • Provide regular progress reports to the evaluation manager/committee and communicate problems that require their attention in a timely manner;
                  • Ensure that the process of commenting on the draft report is well organized and includes feedback on the corrections and clarifications on misinterpretations;
                  • When requested, make a presentation of the initial findings during the conduct of the evaluation (if possible, for beneficiaries as well).

                   

                  In addition to the Guidance on Quality Management of IOM Evaluations, IOM has also developed three mandatory quality control tools: the Checklist for Terms of Reference, the Quality control tool for inception reports and the Quality control tool for evaluation reports (see the Resources section below). Both the evaluation manager and the evaluator are required to use these tools to assess the completeness and quality of key documents during the different stages of the evaluation.

                  • 51Quality control is defined as “part of quality management focused on fulfilling quality requirements”. It is one activity related to quality assurance, which is “part of quality management focused on providing confidence that quality requirements will be fulfilled”. Quality control efforts should be done at the level of evaluation management, and quality assurance is the responsibility of the centralized evaluation function within an organization. See definitions from ISO 9000:2015: Quality management systems on ASQ, n.d.
                  • 52Adapted from World Bank, 2015.
                  INFORMATION

                  The UNEG analytical frameworks for assessing evaluation quality (see Resources section) should be provided to evaluators to ensure that they have a good understanding of IOM’s expectations for the quality of the evaluation. The same frameworks can also be used by evaluation managers and regional M&E officers during the drafting of ToR and inception reports, as well as in the review of the evaluation report.

                  TIP

                  If engaged evaluator(s) produce a poor-quality inception report, evaluation management should give the evaluator(s) the opportunity to amend the inception report until a consensus is reached on its quality. If the inception report is included in the key deliverables and continues to be unsatisfactory, consideration should be given to terminating the contract, instead of taking the risk of receiving a final product of poor quality. The regional M&E officers and/or the Central Evaluation Unit can also be contacted to provide advice on the negotiation process with the evaluator(s) and on the decision to terminate the contract.

                  It is important that the contract with the evaluator(s) is structured in such a way that enables evaluation management to respond appropriately, by including a clause that states that IOM reserves the right to withhold payment, in full or in part, if the services are not provided in full or are inadequate. The same can be applied for finalization of the draft evaluation report, allowing for the final payment to be withheld if quality is not met after several attempts to correct it.

                   

                  Regional M&E officers and/or the Central Evaluation Unit are available to assist with quality issues and, in coordination with the IOM Office of Legal Affairs, with any contractual measures that need to be taken in cases where quality standards are not met.

                  RESOURCES

                  IOM resources

                  2019   Management Response and Follow-Up on IOM Evaluation Recommendations. IOM Central Evaluation Unit.

                  2022   Guidance on Quality Management of IOM Evaluations. IOM Central Evaluation Unit. 

                  2022   Checklist for Terms of Reference. IOM Central Evaluation Unit.

                  2022    Quality control tool for inception reports. IOM Central Evaluation Unit.

                  2022    Quality control tool for evaluation reports. IOM Central Evaluation Unit.

                  Other resources

                  American Society for Quality (ASQ)

                  n.d.     Quality assurance and quality control.

                  World Bank

                  2015    Managing Evaluations: A How-To Guide for Managers and Commissioners of Evaluations. Washington, D.C.

                  Tools

                  United Nations Evaluation Group (UNEG)

                  2010a UNEG Quality Checklist for Evaluation Terms of Reference and Inception Reports. Guidance Document, UNEG/G(2010)1.

                  2010b UNEG Quality Checklist for Evaluation Reports. Guidance Document, UNEG/G(2010)/2.

                    Follow-up and using evaluation

                    5.4.

                    A common misconception about managing an evaluation is that the evaluation process is considered finished once the final report is submitted and approved. In fact, the conduct and then approval of the report represent the first two thirds of the process, but the main raison d’être and benefit of an evaluation lies within the final third of the process, namely the use of the report, its findings and recommendations.

                    The final third:
                    • Use and follow-up of evaluation findings and recommendations.
                    • Internal and external promotion for replication and learning.
                    • Use for other purposes, such as synthesis evaluations or meta-evaluations.
                    Follow-up on implementation of recommendations and use of the report

                    5.4.1.

                    After the final report is approved, the evaluation commissioner or manager should work on the follow-up to the evaluation recommendations, in coordination with senior management and the project stakeholders, as appropriate. The evaluation commissioner and manager should also consider and discuss with relevant entities how the findings of the report will be communicated to a broader audience. The evaluation manager will then finalize the management response matrix drafted by the evaluator, in line with the instructions provided in the IOM publication, Management Response and Follow-Up on IOM Evaluation Recommendations.

                    The management response matrix is a tool to do the following (see the illustrative example below):
                    • Indicate whether the evaluation recommendations are accepted, partially accepted or rejected.
                    • Describe the follow-up actions to be taken to address the recommendations.
                    • Indicate the deadline for the follow-up actions and who is responsible for each action.
                    • Monitor the implementation of the follow-up actions.
                    • Facilitate the integration of accepted evaluation recommendations into future actions.
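                    For illustration purposes only, a single hypothetical entry in a management response matrix could read as follows (the recommendation, action and responsibilities below are invented examples, not prescribed content):

                    Recommendation: Strengthen the collection of beneficiary feedback during implementation. Management response: Accepted. Follow-up action: Revise the monitoring plan to include a beneficiary feedback mechanism. Responsible: Programme manager. Deadline: Within six months of report approval. Status of implementation: In progress.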

                    It is a monitoring tool that must be referred to on a regular basis until all the follow-up actions have been implemented or are no longer applicable. The relevant use of evaluations as an accountability tool should be done in a timely manner; it is therefore recommended to complete the follow-up actions and the review process within 18 months of the submission of the final evaluation, even if not all follow-up actions have been finalized by then. The monitoring of the implementation of the management response matrix can be assigned to specific staff within the office. Progress on the follow-up actions included in the matrix should be shared with relevant entities, as well as with the regional M&E officers and the Central Evaluation Unit for their records.

                    INFORMATION

                    The management response matrix can either be filled out directly in PRIMA, or the Word version can be uploaded to PRIMA. Programme and project managers will receive a reminder to fill out the management response matrix, and 12 months after the evaluation report has been completed, another reminder will be sent to update the status of the recommendations.

                      Using and disseminating the evaluation

                      5.4.2.

                      Sharing and publicizing evaluation reports are important steps for guaranteeing the relevant use of evaluation. Evaluation managers and/or commissioners may want to discuss and prepare a communication and dissemination strategy, which will require deliberate action and analysis to reach the right audience. The following points may be considered:

                      • How will the evaluation be used and disseminated?
                      • How will the findings in the evaluation report be shared with various groups of stakeholders who may have diverging points of view?
                      • When is the best time to disseminate the evaluation to ensure its optimal use?

                      Disseminating evaluations contributes to fulfilling their purpose of learning, by ensuring that the knowledge gained through evaluation can be widely used to improve the quality of new interventions, as well as implementation methods. It is recommended to think about how evaluations will be shared, and with whom, early in the planning phase. These decisions should also take into consideration specific needs when deciding whether to share evaluations internally within IOM or externally.

                      INFORMATION - Utilization-focused evaluation and disseminating evaluation

                      The U-FE approach can provide useful insight when planning evaluation dissemination and/or preparing a communication and dissemination strategy. For more information, see the information box Utilization-focused evaluation.

                      The IOM Evaluation Policy specifies that all evaluation reports are to be made public, but the “sharing strategy” can vary. In the case of an externally shared evaluation – for example, an evaluation of an IOM strategy (corporate, regional or country) or policy (usually corporate) – it could be of interest to all IOM Member States and possibly some donors, while for project evaluations, external interest may be limited to the local government(s) and the specific donor(s) who funded the project. However, in the case of projects, external distribution can also include implementing partners, collaborating non-governmental organizations (NGOs) and/or beneficiaries, which may not be the case for an evaluation of an IOM strategy.

                      For reports shared internally, a similar distinction applies as with external reports. While the evaluation of a policy may be shared more often at the departmental level, evaluations of projects, programmes and local or regional strategies are more valuable for the field offices concerned and relevant thematic specialists at the regional and departmental levels. Although a policy evaluation’s dissemination is mainly at the departmental level for the purpose of organizing follow-up or lessons learning, it can also be shared more broadly, including with all IOM offices worldwide, given their possible interest in a corporate policy. Some cases are also very specific; for instance, the regional M&E officers and the Central Evaluation Unit need to be kept informed of the publication of evaluations in order to add them to the central repository of evaluation reports and/or the IOM Evaluation web page.

                      Generally, it is recommended to have just one version of a report that can be shared both externally and internally and that serves all stakeholders with varied points of view. It has happened, in a limited number of cases, that two versions of an evaluation report – one for limited distribution and internal use and the other for external consumption – were produced; for instance, when the report contains some sections covering confidential or sensitive issues related to demobilization activities. If uncertain about the dissemination of an evaluation report, the evaluation manager should consult with the CoM for country-level interventions, regional directors for regional or cross-regional interventions and/or the regional M&E officer or the Central Evaluation Unit.

                      INFORMATION

                      Evaluation reports, when cleared, are to be shared with the Central Evaluation Unit, who will include them in the central repository and on the IOM Evaluation web page.

                      As stated in the Evaluation deliverables section of this chapter, a separate summary or evaluation brief is also required. The brief should be developed by the evaluator to provide a shorter, succinct report on key elements. Guidance on developing an evaluation brief, which is mandatory, as well as an evaluation brief template, are available in Annex 5.10. Evaluation brief template and guidance.

                      Ways of sharing evaluations

                      It is also important to consider different ways of sharing evaluations in a strategic and systematic manner to ensure that lessons can be extracted by key users and that others can benefit from the evaluation based on their needs and interest. Some examples of different ways are as follows:

                      • Communication strategy using various communication platforms, such as Yammer (internal), Facebook, Twitter and websites;
                      • Webinar conducted for relevant stakeholders;
                      • Video presentation of the evaluation and the response from IOM;
                      • Workshop to discuss findings and agree on the way forward.
                      RESOURCES

                      IOM resources

                          2019 Management Response and Follow-Up on IOM Evaluation Recommendations. IOM Central Evaluation Unit (Internal link only).

                          n.d.c IOM Evaluation repository.

                          n.d.d IOM Evaluation website.

                        Accountability and learning from evaluation

                        The benefits of using information derived from evaluations are numerous. Practitioners must effectively apply this information to enhance accountability, improve performance, as well as strengthen decision-making through learning. Accountability can be defined as “the obligation to demonstrate that work has been conducted in compliance with agreed rules and standards or to report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. This may require a careful, even a legally sound, demonstration that the work is consistent with the contract terms”.54 Learning, on the other hand, is the process by which individuals or organizations acquire and use skills and knowledge. This section will address the various ways of learning through evaluation and other evaluative approaches and, while the requirements related to accountability in sharing an evaluation report are covered in the previous section, this section will also include accountability considerations for the other evaluative approaches that are discussed.

                        One way to use information gained from evaluation is to share it at the organizational level, thereby generating knowledge for ongoing and future planning and implementation, as well as fostering a culture of learning and knowledge in the organization and supporting its overall accountability. Knowledge gained from evaluations also provides the organization with evidence-based information. Learning must be incorporated as a core element of an evaluation, supported by effective information-sharing and learning systems.

                        RESOURCES

                        Organisation for Economic Co-operation and Development (OECD)

                            2010 Glossary of Key Terms in Evaluation and Results Based Management. OECD/DAC, Paris.

                        Generating knowledge and learning through evaluation

                        Knowledge and learning derived from evaluation can feed back into the organizational learning and planning processes through regular reflection, accessibility to the evaluation reports and regular exchange of information through learning sessions. This can be visualized as follows:

                        Figure 5.8.

                        In addition to evaluation, other processes can enhance learning from interventions. The following are three examples of evaluative approaches that can incorporate learning in addition to accountability and can also be used for monitoring purposes:

                            (a) Lessons learned;

                            (b) Project performance review (PPR); and

                            (c) After-action review (AAR).

                         

                        Lessons learned and lessons learning

                        Lessons learned can be understood as generalizations based on evaluation experiences with projects, programmes, strategies or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design and implementation that affect performance, outcome and impact. In other words, they are intended to describe the knowledge gained from experiences in well-defined situations. Documenting lessons learned and incorporating them into other interventions can lead to improving the quality of service delivery. In particular, they can help to avoid practices that may regularly fail to produce results or other common mistakes.

                        The following graphic provides an overview of the process of how lessons learned are identified (through implementation and evaluating implementation), how they are developed and, finally, incorporated and used to improve implementation.

                        Figure 5.9.

While lessons learned are generally surfaced through conducting evaluation, they can also be captured through specific lessons-learning workshops, which bring together various stakeholders to brainstorm on performance and identify the lessons learned from an intervention. This approach can also be used for interventions at the policy or strategic levels, where stakeholders may be asked to reflect on their approaches to a particular topic or thematic area over time.

Another similar concept, in terms of generating knowledge from an evaluation, is the notion of good practices, which can be seen as the identification of a procedure that has proven to produce results in a satisfactory way and that is proposed as a “standard” practice suitable for widespread adoption. A lesson learned associated with a practice that produces such satisfactory results, and that is identified as worthy of replication and possibly upscaling, may, over time, become an “emerging good practice”.55

                        INFORMATION

                        While lessons learning is noted here as one of several evaluative approaches, it is important to underline that evaluators are generally expected to incorporate lessons learned into the report. See the section, Planning for evaluation: Prepare evaluation terms of reference (Evaluation scope).

                        In general, disseminating lessons learned can take place as a part of, or in a similar manner to, disseminating a final evaluation report. In particular, they can be incorporated into any evaluation summary documents made available to relevant users. The Evaluation brief template and guidance (Annex 5.10) contains a specific section for the presentation of lessons learned, when required.

In some cases, lessons learned may be of particular interest to relevant thematic specialists for their further dissemination and applicability to other similar interventions. The “use” of lessons learned is particularly critical in the development of new interventions at the project or programme level, as well as in the development of strong strategic and policy guidance for a particular area of IOM’s work. Therefore, evaluation managers should carefully consider with whom to share lessons learned and identified good practices, in order to best incorporate them into future and planned IOM interventions.

                        RESOURCE

                        International Labour Organization (ILO)

                            2014 Evaluation Lessons Learned and Emerging Good Practices. Guidance Note 3, 25 April.

                        Project performance review

IOM has developed a PPR tool, which is an assessment that focuses primarily on the performance of a project or programme using the OECD/DAC criteria, with a focus on effectiveness and efficiency. The objective of a PPR is to support field offices in assessing the performance of their interventions, using a constructive, participatory and coordinated approach. The exercise usually takes place during implementation, so that corrective measures can be taken if necessary. The criteria of relevance, impact and sustainability are briefly analysed through the PPR, and it may also look at the extent to which the outcomes of an intervention are being achieved, or may be achieved, through the activities implemented and outputs completed.

A PPR also looks at cross-cutting issues, analysing the level of accountability to beneficiaries and affected populations, particularly in emergency contexts, as well as assessing the intervention’s link to global, regional or country strategies.

                        It is important to note that a PPR is not an evaluation, as it is less comprehensive than an evaluation. An evaluation takes more time and preparation, covers issues in greater detail and is able to produce more evidence-based analysis and findings. Further differences between a review and an evaluation can be summarized as follows:

Table: Differences between an evaluation and a review.

                         

                        RESOURCES

                        IOM resources

                            2018b Planning, Conducting and Using Project Performance Reviews (PPR). OIG/Evaluation, June (Internal link only).

                        • PPR Tool Template
                        • PPR Report Template
                        • Reader for PPR Reporting
                        • Preparing for PPRs
                        • Action Plan on PPR Recommendations

                        After-action review

An AAR is a structured discussion about an intervention that enables a team to consider and reflect on what happened, why it happened and how to sustain strengths and improve on weaknesses.56 It is a facilitated process involving key actors, in which the general principle is to be neutral and objective to ensure that the discussions stay focused on challenges, remain positive and do not evolve into self-justification. Essentially, the review should focus on questions such as the following: “What was expected versus what actually happened?”, “What went well and why?” and “What could have gone better and why?”.

                        An AAR involves the following steps:

                        Figure 5.10.

                        Source: Adapted from Buchanan-Smith et al., 2016 and USAID, 2006.

As a first step, participants brainstorm on their understanding of the objective(s) or intent of the action and then develop a timeline of what actually happened and what has changed over time. The next step is more analytical, as participants identify what went well and why, and what could have gone better and why. At the end of the process, conclusions on what could be done better next time are summarized into lessons learned. Where many lessons are identified, participants may be asked to vote for the three they regard as most important. An AAR discussion is a facilitated process and should not last more than half a day to a day. Depending on the resources and time available, it can either be formal, with additional preparatory work, or informal, as detailed in the box below.

                         

                        Key features of after-action review

                        Formal reviews

                        • Are facilitated by an objective outsider
                        • Take more time
                        • Use more complex review techniques and tools
                        • Are scheduled beforehand
                        • Are conducted in meetings or other “formal” settings
                        • Require a more standard and thorough report

                        Informal reviews

                        • Are conducted by those closest to the activity
                        • Take less time
                        • Use simple review techniques and tools
                        • Are conducted when needed
                        • Are held at the event’s site
                        • Can be covered by a less-comprehensive report

                        Source: Adapted from USAID, 2006.

                        RESOURCES

                        Buchanan-Smith, M., J. Cosgrave and A. Warner

                            2016 Evaluation of Humanitarian Action Guide. ALNAP/ODI, London.

                        USAID

2006 After-Action Review: Technical Guidance. PN-ADF-360. Washington, D.C.

                         Other examples of evaluative approaches and tools are summarized as follows:

                        Additional approaches and resources

                        Most significant change

                        What is it?

Most significant change (MSC) is a participatory tool that involves gathering personal accounts of perceived change(s) and determining which of these accounts is the most significant and why.

                        A more detailed explanation of the MSC approach is elaborated in Annex 5.11. Evaluative approaches: Most significant change.

                        RESOURCES

                        MSC toolkits and guides

                        Asadullah, S. and S. Muñiz

                            2015   Participatory Video and the Most Significant Change: A guide for facilitators. InsightShare, Oxford.

                        BetterEvaluation

                            n.d.     Most significant change. Online resource.

                        Davies, R. and J. Dart

                            2005   The ‘Most Significant Change’ Technique – A Guide to Its Use.

                        International Development Research Centre’s Pan Asia Networking

                            2008   Jess Dart – Most significant change, part I. Video.

                        Additional approaches and resources

                        Kirkpatrick model

                        What is it?

The Kirkpatrick model is a four-level model developed to evaluate trainings. The four levels are as follows: (a) reaction; (b) learning; (c) behaviour; and (d) results. It is a commonly used method for assessing the results of a training. A generic post-training evaluation form has been developed that can be easily modified as required by interested parties.

                        RESOURCES

                        IOM

                            2017b Reaching results through training. Webinar video, 25 July (Internal link only)

                        MindTools

                            n.d.    Kirkpatrick’s four-level training evaluation model: Analyzing learning effectiveness.

                        Additional approaches and resources

                        Peer review

                        What is it?

Peer review is a process that can help advise on quality issues and compliance with standards; it is usually conducted by other specialists from the same field, chosen for their knowledge of the subject matter. This process has been used in IOM, for instance, for the review of the implementation of the United Nations System-wide Action Plan (UN-SWAP) for gender equality and the empowerment of women, with two to three agencies mutually reviewing one another. A peer review mechanism has also been developed by UNEG in partnership with OECD/DAC to review the evaluation policies of UNEG members.

                        Additional approaches and resources

                        Outcome harvesting57

                        What is it?

Outcome harvesting is an evaluative approach that can be used to collect data on interventions. As its name suggests, outcome harvesting collects (“harvests”) evidence of changes that have occurred (outcomes). Once changes are identified, it works backwards to determine whether and how these changes are linked to the intervention.

                         

                        RESOURCES

                        Outcome Harvesting

                            n.d.     Homepage.

                        Outcome Mapping

                            2014   What is outcome harvesting? Video, 15 January.

                        Wilson-Grau, R.

                            2015     Outcome harvesting. BetterEvaluation.

                          Annexes
                          Budgeting for evaluation

                          Adapted from Module 6 of IOM Project Handbook, pp. 423–431 (Internal link only).


                            Expanded list of evaluation types by specificities and scope

                            Adapted from OIG/Evaluation, IOM Evaluation Guidelines (January 2006), Annex 2.

                            Cluster evaluation: An evaluation that analyses a set of related activities, projects or programmes to identify common threads and themes.

Country-programme/Country-assistance evaluation: An evaluation of one or more donor or agency portfolios of development assistance.

                            Cross-section evaluation: A systematic evaluation of various evaluation reports on a specific project type, on projects involving one particular sector, or on one particular instrument or theme, designed to review and possibly update existing development policy directives.

                            Democratic evaluation: An evaluation approach that addresses critical evaluation issues, such as dealing with power relations among stakeholders, including stakeholders’ perspectives, and providing useful information to programmes. Power redistribution is accomplished by “democratizing knowledge” and holding all groups, including the client, mutually accountable.

                            Empowerment evaluation: An evaluation promoting close involvement between the evaluator and the project/programme participants to produce more meaningful and useful evaluation results. Empowerment evaluation is necessarily a collaborative group activity, not an individual pursuit.

In-depth evaluation: An approach that consists of focusing an evaluation, or a part of an evaluation, on a specific category of outputs, or on a group or category of impacts.

                            Incorporated/built-in evaluation: An approach to implementation that involves fairly continuous self-evaluation by principal actors and participants, according to pre-established criteria related to the purpose and goal of the assistance.

Meta-evaluation: An evaluation that aims to judge the quality, merit, worth and significance of an evaluation or several evaluations.

                            Partial system evaluation: An evaluation also used in emergency situations, which covers only a part of the system. It can be related to thematic or sector evaluations.

                            Participatory evaluation: An evaluation method in which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation.

                            Process evaluation: An evaluation that examines the internal dynamics of implementing organizations, their policy instructions, their service delivery mechanisms, their management practices and the linkages among these.

                            Quasi-experimental impact evaluation: An evaluation that compares different groups before and after programme implementation to assess the programme impact and value added of further investments. It uses rapid and economical studies that combine exploitation of existing data sets with rapid sample surveys, tracer studies, interviews and others.

Real-time evaluation: An evaluation implemented in emergency situations that aims to provide rapid feedback on humanitarian operations and be an immediate catalyst for improvements in organizational and operational performance. The methodology cannot be rigid, and flexibility and adaptability are required, although it must guarantee quality.

                            Sector evaluation: An evaluation of a variety of aid actions, all of which are located in the same sector, either in one country or cross-country. A sector covers a specific area of activities, such as health, industry, education, transport or agriculture.

                            Single-agency response evaluation: Also in emergency situations, an evaluation that covers the overall response by a particular agency.

                            Single-agency/Single-project evaluation: An evaluation that covers a single project undertaken by a single agency in an emergency situation.

                            Stakeholder evaluation: An evaluation that involves agencies, organizations, groups or individuals who have a direct or indirect interest in the development assistance, or who affect or are positively or negatively affected by the implementation and outcome of it. Stakeholders work together to develop and finalize instruments and procedures, produce recommendations, and make decisions throughout the evaluation process (related term: Participatory evaluation, which focuses on methodology).

Strategic evaluation: An evaluation of a particular issue aiming to advance a deeper understanding of the issue, reduce the range of uncertainties associated with the different options for addressing it and help to reach an acceptable working agreement among the parties concerned. It is usually adopted when the urgency of the issue poses high risks to stakeholders and has generated conflicting views.

                            Synthesis evaluation: “[A] systematic procedure for organizing findings from several disparate evaluation studies, which enables evaluators to gather results from different evaluation reports and to ask questions about the group of reports.”58

                            System-wide evaluation: An evaluation used in emergency situations that covers the response by the whole system to a particular disaster or emergency.

Theory-based evaluation: An evaluation that focuses on an in-depth understanding of the workings of a programme or activity, that is, the programme theory or logic. It need not assume simple linear cause-and-effect relationships, but maps out the determining or causal factors judged important for success and how they might interact.

                            RESOURCES

                            United States General Accounting Office (GAO)

                                1992 The Evaluation Synthesis. GAO/PEMD 10.1.2. Revised March 1992.

                              Incorporating cross-cutting themes at IOM

                              Cross-cutting themes can be defined as additional considerations or areas that intersect with an intervention, or that can be easily integrated into it, without losing focus on the main goals of the intervention. Mainstreaming a cross-cutting theme is generally understood as a strategy to make the specific theme, given its importance, an integral dimension of the organization’s design, implementation and M&E of policies and interventions. The inclusion of themes can evolve over time and new themes can be added; they are not necessarily the same for all organizations and not all may be relevant to be considered in an intervention.

This section will cover the following themes: (a) rights-based approach (RBA); (b) protection mainstreaming; (c) disability inclusion; (d) gender mainstreaming; (e) environmental sensitivity and sustainability; and (f) accountability to affected populations (AAP). It is important to note that this annex treats the M&E of cross-cutting issues only. If one of these thematic areas becomes the main focus of an intervention, it is no longer considered a cross-cutting theme.

🢂 Evaluation terms of reference (ToR) should ensure that questions pertaining to the integration of relevant cross-cutting themes are reflected inside a specific section or under relevant criteria, specifying that it will be examined as a cross-cutting theme.

                              Rights-based approach

                              What is it?

                              RBA is a conceptual framework and methodological tool for developing policies and practices. RBA is the conscious and systematic integration of rights, norms and standards derived from international law into programming, with a main focus on migration in the case of IOM. An RBA to migration programming aims to empower rights holders and strengthen the capacity of duty bearers to fulfil their obligations to protect rights holders.

                              Although there is no universal understanding of how to apply an RBA to interventions in practice, it generally includes the following attributes that can be applied to IOM’s migration context:

                              • Identification of the rights holders, their entitlements, and duty bearers’ obligations to respect, protect and fulfil those entitlements;
• Assessment of whether rights are being respected, protected and fulfilled and, if they are not, an analysis of the underlying causes and a strategy for corrective action;
                              • Capacity-building for rights holders to be aware of and enjoy their rights and of duty bearers to meet their obligations;
                              • Ensuring that rights principles (such as non-discrimination, participation and accountability) are integrated into the project, strategy and policy developed and during the implementation process.

                              How to monitor and evaluate rights-based approach

                              When considered as a cross-cutting theme, an RBA would require measuring the process of programming and its adherence to rights principles. These principles can be incorporated into a results matrix and monitored accordingly, or they can be measured without being set out in the matrix by using a monitoring tool. Lastly, RBA can and should be included in an evaluation; an evaluation should assess rights issues even if the projects themselves do not have a specific rights-based objective or outcome.

                               

                              INFORMATION

                              Individuals engaged in monitoring the RBA within an intervention can also refer to IOM’s RBA manual titled Rights-based Approach to Programming, which includes a section on M&E and presents a monitoring tool in Annex IV.59

                              The following are some questions that can be asked during both monitoring and evaluation processes to ensure that an RBA perspective is covered:

                              Participation

                              • Have the various stakeholders (including both rights holders and duty bearers) been involved in planning and designing the M&E of the project and determining the type of data to collect?
                              • Are other individuals or groups, such as local civil society groups or NGOs, involved?
                              • Are key groups, particularly the most marginalized groups of rights holders, included and/or involved in the M&E process?

                              Equality and non-discrimination

                              • Is the M&E process explicitly designed to detect or measure discrimination against particular groups throughout its objectives and outcomes?
                              • Is the data collected appropriately disaggregated, such as by age, disability, ethnicity, sex, nationality and migration status, to track any gaps in considering equality and discrimination throughout intervention outputs and outcomes?

                              Accountability, transparency and rule of law

                              • Are the M&E processes directly linked to any rights such as measuring the realization of specific rights?
                              • Do the M&E processes account for any form of complaint mechanisms and how are received complaints dealt with?
                              • Are the findings from the M&E shared publicly in a transparent manner?
                              • Are the findings from the M&E used to promote changes in law or policy of the State?

                              During the evaluation, the evaluator should also consider the following tips for ensuring that RBA is integrated in the evaluation process:

                                  (a) Include mechanisms to ensure that the most marginalized groups of rights holders are/were involved in the evaluation.

                                  (b) As an evaluator, ask yourself: Were all stakeholders included and how will the evaluation explicitly detect or measure discrimination against particular groups? For example, the evaluation may be designed to detect any form of discriminatory practices that may have occurred during the implementation of the project or as a result of the project.

                                  (c) Identify channels to field any form of complaints that may be received during the evaluation.

                              RESOURCES

                              IOM resources

                                  2015b Annex IV: Rights-based monitoring tool. In: Rights-based Approach to Programming. Geneva, p. 144.

                                  2017a Annex 4.2: Guiding questions for incorporating cross-cutting themes into the project management and monitoring phase of the IOM project cycle (Module 4). In: IOM Project Handbook. Second edition. Geneva, pp. 344–346 (Internal link only).

                              Protection mainstreaming

                              What is it?

                              Protection mainstreaming is defined as “the inclusion of humanitarian protection principles into the crisis response by ensuring that any response is provided in a way that avoids any unintended negative effects (do no harm), is delivered according to needs, prioritizes safety and dignity, is grounded on participation and empowerment of local capacities and ultimately holds humanitarian actors accountable vis-à-vis affected individuals and communities”.60

                              IOM is committed to mainstreaming protection across all of its humanitarian programming, as this aims to ensure safe programming. IOM incorporates the following four protection mainstreaming principles, which are fundamental to crisis and post-crisis response:

                                  (a) Prioritize safety and dignity and avoid causing harm;

                                  (b) Secure meaningful access;

                                  (c) Ensure accountability;

                                  (d) Ensure participation and empowerment.

Adhering to the Inter-Agency Standing Committee’s (IASC) Statement on the Centrality of Protection in Humanitarian Action, IOM reaffirms that the protection of all affected and at-risk individuals and communities must be at the heart of humanitarian decision-making and response before, during and after a crisis strikes.61 IOM ensures that service and assistance delivery preserves the physical integrity and dignity of individuals and communities, is culturally appropriate and minimizes any harmful and unintended negative consequences. Assistance and services are provided according to needs and not on the basis of age, sex, gender identity, nationality, race or ethnic allegiance. Services and assistance are provided in adequate quantity, within safe and easy-to-reach locations, are known by the affected individuals and are accessible to all groups, including persons with medical conditions, persons with disabilities and groups facing discrimination. Affected individuals and communities play an active role in measuring the quality of the interventions that affect them, and effective and easily accessible mechanisms for suggestions and complaints from the population are put in place, thereby increasing accountability. Inclusive participation in decision-making processes is fostered to support the development of self-protection capacities and to assist people in claiming their rights and empowering themselves.

The mobility dimensions of humanitarian crises often include complex and large-scale migration flows and mobility patterns that typically involve significant and diverse vulnerabilities for affected individuals and communities. For interventions developed within the framework of the IOM Migration Crisis Operational Framework (MCOF) sectors of assistance,62 appropriate consideration must be given to the protection of affected persons, including migrants (displaced persons, refugees, asylum seekers, stateless persons and others) and crisis-affected communities that produce and/or host migrants. The Guidance Note on how to mainstream protection across IOM crisis response (IN/232) (Internal link only) also provides a step-by-step approach on how to integrate protection mainstreaming principles into both crisis response planning and the various phases of the project life cycle. The note also provides several tools, such as situation and vulnerability analyses, that could be relevant.

Protection in humanitarian action can be pursued through three main types of intervention:

                                  (a) Mainstreaming of humanitarian protection principles;

                                  (b) Protection integration;

                                  (c) Specialized protection activities.

                              Projects using the first approach, mainstreaming protection, ensure that any response is provided in a way that complies with each protection mainstreaming principle within the intervention itself. Protection mainstreaming is the responsibility of all actors.

                              Protection integration “involves incorporating protection objectives into the programming of other sector-specific responses […] to achieve protection outcomes.”63

Specialized protection activities “directly aim to prevent or respond to human rights and humanitarian law violations, or to restore the rights of individuals who are particularly vulnerable to or at risk of neglect, discrimination, abuse and exploitation. Stand-alone protection activities can include activities aimed at preventing or responding to specific protection risks […] violations and needs […] including for specific groups such as women, children, persons with disabilities, older persons, displaced persons and migrants.”64

                              How to monitor and evaluate protection mainstreaming

As per the Guidance Note on Protection Mainstreaming, relevant interventions should monitor the extent to which protection mainstreaming was effectively integrated during implementation. Furthermore, evaluations should be conducted through a participatory and inclusive approach to integrate protection mainstreaming considerations. Examples include ensuring sex and age diversity during consultations and not relying exclusively on community leaders to identify respondents, such as those from marginalized groups.

                              • 60Please note that this section is based primarily on the guidance from 2016 for protection mainstreaming within MCOF. This will be further updated upon availability of new guidance on protection (IOM, n.d.f, p. 4).
                              • 61IASC, 2013.
                              • 62IOM MCOF specifies the following 15 sectors of assistance: (a) camp management and displacement tracking; (b) shelter and non- food items; (c) transport assistance for affected populations; (d) health support; (e) psychosocial support; (f) (re)integration assistance; (g) activities to support community stabilization and transition; (h) disaster risk reduction and resilience building; (i) land and property support; (j) counter-trafficking and protection of vulnerable migrants; (k) technical assistance for humanitarian border management; (l) emergency consular assistance; (m) diaspora and human resource mobilization; (n) migration policy and legislation support; and (o) humanitarian communications (IOM, 2012).
                              • 63IOM, 2018c, p. 16; see also IASC, 2016.
                              • 64Ibid.
                              INFORMATION

                              Individuals may wish to consult the Guidance Note on Protection Mainstreaming, which includes a tool for M&E in its Annex 3.

                              The following are some questions that can be considered for both the monitoring and evaluation of protection as a cross-cutting theme and to ensure adherence to the protection principles:

                              • Are monitoring processes designed to ensure that access to humanitarian assistance by all groups is being regularly monitored?
                              • Are procedures in place to mitigate risks resulting from unintended consequences of IOM activities on protection issues?
                              • While providing assistance, is the safety and security of beneficiaries taken into consideration? If barriers to services and assistance are identified, are measures being taken to mitigate these barriers?
                              • Have procedures for informed consent been established and are they being used appropriately?
• Are all affected population and beneficiary groups and subgroups (such as boys, girls, men and women, persons with and without disabilities, and marginalized groups) being involved in the monitoring and/or evaluation processes?
                              • Is specific attention being given to access services by different beneficiary groups and subgroups and in different project locations?
                              • Are referral pathways for protection incidents established and in use?
                              • Is sensitive data being managed appropriately and in line with the IOM Data Protection Principles?
                              • Is feedback from affected populations and beneficiaries regularly collected and used to improve programming to better suit their needs?
                              • Are self-protection capacities being utilized within the framework of the project?
                              • Are State and local actors regularly consulted and involved in the implementation of protection measures?
                              • What impact has been achieved after the introduction of protection mainstreaming considerations during the project design, implementation and monitoring?

                              Below are some key tips for including protection mainstreaming into evaluation:

                              • Consider a participatory evaluation approach to ensure inclusion of all beneficiary groups.
• Consider how evaluation findings could be used to improve future actions, propose course corrections and ensure that findings deemed to be of interest to the larger community are shared.
• Consider to what extent and how protection should be further integrated into intervention activities as a cross-cutting issue.

                               

                              RESOURCES

                              IOM resources

                                  2012 IOM Migration Crisis Operational Framework, MC/2355.

                                  2016b Guidance Note on how to mainstream protection across IOM crisis response (or the Migration Crisis Operational Framework sectors of assistance). IN/232.

                                  2017a Annex 4.2: Guiding questions for incorporating cross-cutting themes into the project management and monitoring phase of the IOM project cycle (Module 4). In: IOM Project Handbook. Second edition. Geneva, p. 350 (Internal link only).

                                  2018c Institutional Framework for Addressing Gender-Based Violence in Crises. Geneva.

                                  n.d.e Protection mainstreaming in IOM crisis response.

                                  n.d.f Guidance Note on Protection Mainstreaming – Annex 3 (Internal link only).

                              Other resources

                              Inter-Agency Standing Committee (IASC)

                                  2013 The Centrality of Protection in Humanitarian Action: Statement by the Inter-Agency Standing Committee (IASC) Principals.

                                  2016 IASC Policy on Protection in Humanitarian Action.

                              Disability inclusion

                              Disability inclusion in IOM interventions has gained importance in recent years in line with initiatives promoted by the United Nations. Disability inclusion requires specific attention to be fully integrated as a cross-cutting issue into M&E efforts.

                              What is it?

                              Persons with disabilities are estimated to represent 15 per cent of the world’s population. In specific humanitarian contexts, they may form a much higher percentage and can be among the most marginalized people in crisis-affected communities. Persons with disabilities may face multiple forms of discrimination and be at heightened risk of violence and abuse, also often linked to their social conditions and other intersecting identities (such as gender, age, race and indigenous groups).

The Convention on the Rights of Persons with Disabilities (CRPD) affirms that States Parties must protect and promote the rights of persons with disabilities in their laws, policies and practices, and must also comply with the treaty’s standards when they engage in international cooperation. The CRPD, along with the Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction, sets out other standards that protect persons with disabilities.

                              In addition to legal frameworks, IOM’s work on disability inclusion is also guided by the United Nations Disability Inclusion Strategy (UNDIS) that was launched in 2019, as well as IASC’s Guidelines on the Inclusion of Persons with Disabilities in Humanitarian Action. IOM’s commitments made at the Global Disability Summit in 2018 are also important in disability inclusive programming.

                              CRPD defines persons with disabilities as those who have long-term sensory, physical, psychosocial, intellectual or other impairments that, in interaction with various barriers, may hinder their full and effective participation in society on an equal basis with others.

IOM interventions must ensure that their activities address the barriers that prevent persons with disabilities, in all their diversity, from participating in, or having access to, services and/or protection, in line with the CRPD.

                              Both the UNDIS strategy and the IASC Guidance recommend taking a twin-track approach, which combines inclusive mainstream programmes with targeted interventions for persons with disabilities.

                              INFORMATION

                              The IASC Guidelines have sector-specific guidance on how to ensure disability-inclusive M&E in humanitarian action.

                              How to monitor and evaluate disability inclusion

                              To ensure disability inclusion within an intervention, it is recommended to monitor adherence to the following principles and standards: (a) promote meaningful participation; (b) address barriers faced by persons with disabilities; and (c) empower them to develop their capacities. Below are a series of questions and actions required to ensure that these are being followed within interventions:

                              Promoting meaningful participation of persons with disabilities

                              Does the intervention:

                              • Consider participation of persons with disabilities during implementation, and possibly in the design of the intervention?
                              • Recruit persons with disabilities as staff?
                              • Seek advice and collaborate with organizations of persons with disabilities (OPDs) when they devise strategies for engaging with persons with disabilities?

                               

                              Addressing the barriers faced by persons with disabilities

                              Does the intervention:

                              • Identify attitudinal, environmental and institutional barriers that may prevent persons with disabilities from accessing IOM’s programmes and services?
                              • Identify enablers that facilitate the participation of persons with disabilities?
                              • Take appropriate measures to remove barriers and promote enablers, to ensure that persons with disabilities benefit from assistance and can participate meaningfully?

                               

Empowering persons with disabilities and supporting them to develop their capacities

                              Does the intervention:

                              • Develop the capacities of persons with disabilities and OPDs by equipping them with the knowledge and leadership skills they need to contribute to and benefit from IOM’s work and the protection this affords them?
                              • Build the capacity of IOM staff to design and implement inclusive interventions that are accessible to persons with disabilities by strengthening their understanding of the rights of persons with disabilities, as well as principles and practical approaches that promote inclusion and reduce barriers to inclusion?

                              The United Nations Evaluation Group (UNEG) guidelines on Integrating Human Rights and Gender Equality in Evaluations, the UNDIS framework indicator 10 on evaluation and the IASC Guidelines set standards on how to evaluate IOM’s work on disability inclusion with the following considerations that could also apply to a cross-cutting analysis:

Evaluation questions cover different aspects of disability inclusion. Evaluation questions mainstreamed across the different evaluation criteria, or placed under a specific criterion, show the extent and the quality of disability inclusion.

                              Evaluation stakeholder mapping and data collection methods involve persons with disabilities and their representative organizations. Persons with disabilities and OPDs can enrich evaluation by providing first-hand information on their situation and experience.

                              Evaluation examines if barriers have been removed to allow full participation of persons with disabilities. It can also include long-term impact analysis on the lives of persons with disabilities and the recognition of their rights according to international standards.

                              RESOURCES

                              IOM resources

                                  n.d.e Protection mainstreaming in IOM crisis response.

                                  n.d.g Disability inclusion SharePoint (Internal link only).

                              Other resources

                              Government of the United Kingdom

                                  n.d. IOM’s commitments made at the Global Disability Summit in 2018.

                              Inter-Agency Standing Committee (IASC)

                                  2019 Guidelines on the Inclusion of Persons with Disabilities in Humanitarian Action.

                              United Nations

                                  n.d.a Indicator 10: Evaluation. In: Entity Accountability Framework. Technical notes.

                                  n.d.b United Nations Disability Inclusion Strategy (UNDIS).

                              United Nations Evaluation Group (UNEG)

                                  2011b Integrating Human Rights and Gender Equality in Evaluation – Towards UNEG Guidance. Guidance document, UNEG/G(2011)2.

                              Gender mainstreaming

                              What is it?

                              IOM has been working actively to mainstream gender throughout all of its interventions. Numerous policy and guidance documents are available to support this commitment (see the Resources box). IOM’s Gender Coordination Unit is in charge of the promotion of gender equality in IOM and proposes the following considerations and definitions of the notion of gender, gender analysis and gender mainstreaming:

                              Gender: The social attributes and opportunities associated with one’s sex and the relationships between people of different gender and age groups (such as women, men, girls and boys), as well as the relations between people of the same gender group. These attributes, opportunities and relationships are socially constructed and learned through socialization processes. They are context- and time-specific and changeable. Gender determines what is expected, allowed and valued in people based on their sex in a given context. In most societies, there are differences and inequalities between people of different gender groups in terms of responsibilities assigned, activities undertaken, access to and control over resources, as well as decision-making opportunities. Gender is part of the broader sociocultural context.

                              Gender analysis: A critical examination of how differences in gender roles, experiences, needs, opportunities and priorities affect people of different gender and age groups in a certain situation or context. A gender analysis should be integrated into all sector assessments and situation analyses, starting with the needs assessment.

Gender mainstreaming: The process of assessing the implications of any planned action, including legislation, policies or programmes, for people of different gender groups, in all areas and at all levels. It is an approach for making everyone’s concerns and experiences an integral dimension of the design, implementation and M&E of interventions in all political, economic and societal spheres so that all gender groups benefit equally and inequality is not perpetuated. The ultimate goal is to achieve gender equality.

                              How to monitor and evaluate gender mainstreaming

                              Throughout its interventions, IOM aims to promote gender equality and ensure that all of its beneficiaries and populations assisted are receiving the services and support they need, taking into consideration their gender-specific experiences so that interventions do not perpetuate gender inequalities.

                              The following are a few simple points to ensure gender mainstreaming and to monitor it within an intervention as a cross-cutting theme:

                              • Ensure that interventions address all the different needs (and capacities) of a diverse beneficiary population, with an aim to eliminate gender disparities and contribute to gender equality.
• Assess how well an intervention captures gender perspectives. This includes using gender-sensitive indicators, which are disaggregated by sex, as well as indicators that measure gender-specific changes, such as prevalence of gender-based violence or perceptions of gender norms, roles and relations.
                              • Ensure that progress on gender-sensitive indicators is monitored regularly and adapted, as needed, to ensure that all intended beneficiaries are covered.
                              • Ensure that all gender and age groups are consulted when monitoring an intervention, to better inform progress on indicators and ensure that no one is left behind or discriminated because of gender considerations.

                              Gender marker: The IOM Gender Marker is a tool that assesses how well interventions integrate gender considerations. It establishes a clear set of minimum standards for incorporating gender considerations and sets out a coding system based on how many minimum standards are met. It allows IOM to track the percentage of its interventions and financial allocations that are designed to contribute to gender equality. The Gender Marker aims at improving the quality of IOM interventions by emphasizing the importance of addressing the specific needs and concerns of women, girls, boys and men, inclusive of those identifying as lesbian, gay, bisexual, transgender and/or intersex (LGBTI), and of different ages, so that everyone benefits in an appropriate way.

Evaluation can ensure that adequate attention is paid to the above points (and any other gender-related issues) and that they are properly reflected in the evaluation methodology, findings/results, challenges and lessons learned. IOM has developed the Guidance for Addressing Gender in Evaluations, which is available in the Resources box and can be used for examining gender as a cross-cutting element of the intervention.

                              During the evaluation, the evaluator should also consider the following tips for ensuring that gender mainstreaming is integrated in the evaluation.

                               (a) Ensure that gender issues are specifically addressed in the evaluation ToR.

(b) During data collection, ensure that the persons being interviewed or surveyed are diverse and gender-representative of all concerned project partners and beneficiaries.

                               (c) Surveys, interview questions and other data collection instruments should include gender issues.

                               (d) Evaluation reports should include a gender perspective, such as analysis of sex-disaggregated data.

                              Evaluations should include questions to determine this during the process, such as the following:

                              • Are/were male and female beneficiaries able to participate meaningfully in the project?
                              • What are/were some of the barriers to meaningful participation and what has been or will be done to address these barriers?
                              • Are/Were men’s and women’s needs and skills adequately addressed and incorporated?
                              • Are/Were men and women satisfied with the project’s activities?

                               (e) Include gender perspective when analysing the successes and challenges, actions taken, lessons learned and best practices during the evaluation process.

                              RESOURCES

The IOM intranet (available internally to IOM staff) and the IOM website (publicly available) contain numerous references that are useful for monitoring the inclusion of gender in IOM interventions, including as a cross-cutting issue, in particular the IOM Gender Marker, which should be considered in all interventions. The United Nations System-wide Action Plan on Gender Equality and the Empowerment of Women (UN-SWAP) is also an important resource for the inclusion of gender.

                              IOM resources

                                  2018d Guidance for Addressing Gender in Evaluations. OIG.

                                  n.d.h IOM Gender and Evaluation Tip Sheet.

                                  n.d.i IOM Gender Marker (Internal link only).

                                  n.d.j Gender and migration.

                               

                              Other resources

                              UN-Women

                                  n.d. Promoting UN accountability (UN-SWAP and UNCT-SWAP).

                              Environmental sensitivity and sustainability

                              What is it?

Environmental sensitivity must be addressed by all IOM interventions, which should safeguard the environment. No IOM intervention should have a direct negative impact on the environment, and all possible measures should be taken to prevent harm to biodiversity and ecosystems, such as the destruction or contamination of natural resources.

                              Environmental sustainability is about addressing human needs without jeopardizing the ability of future generations to meet their needs and preventing irreversible damage to the world. Where sufficient resources and expertise are available, IOM projects should strive towards environmental sustainability.65

                              🢂   Environmental issues should be identified and analysed throughout the intervention as part of the initial risk analysis, as well as addressed as a part of the risk management plan where environmental risks are inevitable.66

                              Mainstreaming environmental sustainability “requires integrating the principles of sustainable management, protection, conservation, maintenance and rehabilitation of natural habitats and their associated biodiversity and ecosystem functions.”67

                              How to monitor and evaluate environmental considerations

When interventions are not specifically designed to address environmental issues – such as IOM programmes addressing disaster preparedness and disaster risk reduction to prevent forced migration resulting from environmental factors, or those for the relocation of populations from zones affected by environmental degradation – various elements can be taken into account for monitoring and evaluating the inclusion of environmental sensitivity and sustainability as a cross-cutting issue.

                              In its 2018 document titled IOM’s engagement in migration environment and climate change, IOM suggests the following considerations for understanding the migration and environment nexus; further suggestions are provided as to when this could be included and how it could be monitored within an intervention as a cross-cutting theme:

                               

Each consideration below is paired with suggestions for monitoring or evaluating it in the context of an intervention.

Consideration: Environmental factors have always been a cause of migration.
Monitoring or evaluating: Ensure that environmental factors are included in the rationale of interventions whenever relevant, as well as how the intervention mitigates them.

Consideration: It is often difficult to isolate environmental and climatic factors from socioeconomic factors, but an increasing number of studies show that environmental challenges are clearly a factor in the decision to move or to stay.
Monitoring or evaluating: When relevant and feasible, these factors should be identified, as well as how the intervention indirectly addresses them as a cross-cutting theme. The linkage between these factors may often be explained in a ToC.

Consideration: Climate change is expected to have major impacts on human mobility, as the movement of people is and will continue to be affected by natural disasters and environmental degradation.
Monitoring or evaluating: As a cross-cutting theme in interventions dealing with mobility, the role and impact of the environment should be identified, if not specifically addressed by an objective and outcome.

Consideration: Environmental migration may take many complex forms: forced and voluntary, temporary and permanent, internal and international.
Monitoring or evaluating: When examining the role and impact of the environment on IOM interventions dealing with migration, it could be relevant to identify whether it can be categorized as “environmental migration” and whether the intervention addresses it properly.

Consideration: The concept of “vulnerability” needs to be put at the centre of current and future responses to environmental migration. The most vulnerable may be those who are unable to or do not move (trapped populations).
Monitoring or evaluating: Disaggregation of different groups will be necessary to ensure that interventions are monitored accordingly.

Consideration: Environmental migration should not be understood as a wholly negative or positive outcome – migration can amplify existing vulnerabilities, but it can also allow people to build resilience. For example, temporary migration and remittances can open up alternative sources of income and reduce reliance on the environment for subsistence.
Monitoring or evaluating: An evaluation of an intervention could assess the positive and negative effects of environmental migration and how the intervention contributed to them, where relevant as a cross-cutting theme.

The following is a series of questions that could be included in the evaluation ToR to assess whether environmental sensitivity and sustainability were properly integrated.

                              • Would it have been relevant to conduct an environmental impact assessment for this intervention?
                              • Was the project successfully implemented without any negative impact on the environment that could have affected human well-being?
• Has environmental damage been caused, or is it likely to be caused, by the project? What environmental impact mitigation measures have been taken?
                              • Were appropriate environmental practices followed in project implementation?
                              • Does the project respect successful environmental practices identified in IOM?
                              • What are the existing capacities (within project, project partners and project context) dealing with critical risks that could affect project effectiveness such as climate risks or risks of natural disasters?
• Is the achievement of project results and objectives likely to generate increased pressure on fragile ecosystems (such as natural forests, wetlands, coral reefs and mangroves) and scarce natural resources (such as surface water and groundwater, timber and soil)?
                              • Did the intervention bring relevant benefits and innovation for environmental sensitivity and sustainability?
                              RESOURCES

                              IOM resources

                                  2017a Annex 4.2: Guiding questions for incorporating cross-cutting themes into the project management and monitoring phase of the IOM project cycle (Module 4). In: IOM Project Handbook. Second edition. Geneva, p. 344 (Internal link only).

2018e IOM's engagement in migration, environment and climate change. Infosheet.

                                  n.d.k Migration, environment and climate change. IOM intranet (Internal link only).

                                  n.d.l Environmental Migration Portal web site.

                               

                              Other resources

                                  United Nations Development Programme (UNDP)

                                  2014 Social and Environmental Standards. New York.

                              Accountability to affected populations

                              What is it?

                              AAP is an active commitment by humanitarian actors to use power responsibly by taking account of, giving account to, and being held to account by the people they seek to assist. AAP has featured on the humanitarian agenda for over two decades, initially known as “accountability to beneficiaries”. The shift to “accountability to affected populations” takes into account that assistance not only affects the aid recipients, but also the wider community. It aims to see affected populations as partners rather than as passive beneficiaries, recognizing their dignity and capacities and empowering them in the efforts that matter to them.

AAP takes accountability beyond the limited practice of accountability to identified "beneficiaries", as it reaches out to people unintentionally excluded from receiving assistance, which often happens to marginalized groups, including people with disabilities, older persons and LGBTI persons. Moreover, the commitment to AAP differs from the traditional accountability to donors only. It requires humanitarian actors to place people at the core of the response, fostering their right to be involved in the decision-making processes that affect them and informing programming so that it is appropriate and responsive to their needs.

                              AAP gained particular prominence through the Transformative Agenda (2011) and the World Humanitarian Summit (2016) commitments, including the Grand Bargain (2016). These initiatives helped develop a shared understanding of AAP within the international community and resulted in a range of collective, as well as individual institutional commitments that aim to include people receiving aid in making the decisions that affect their lives, foster meaningful collaboration with local stakeholders and prevent sexual exploitation and abuse (SEA).

The Accountability to Affected Populations (AAP) Framework establishes IOM's common approach for implementing and mainstreaming AAP throughout its crisis-related work, as contained in its MCOF. It helps the Organization ensure quality and responsive programming in line with the evolving needs of affected populations and communities, and enforce the Organization's zero-tolerance policy against SEA and other misconduct. The commitments of this framework were developed in line with the IASC commitments to AAP and adapted to meet IOM's operational realities.

Adherence to the framework's principles and achievement of its commitments and objectives are mandatory. There are many ways to implement and mainstream AAP, and such efforts need to be contextually relevant. The framework is therefore to be read in conjunction with the IOM Accountability to affected populations collaboration space (Internal link only), which provides guidance to help IOM staff identify and tailor AAP interventions.

                              AAP is founded on two operational principles in humanitarian programming: (a) rights-based approach; and (b) aid effectiveness.

                              Being accountable to affected people reaffirms IOM’s obligation to respect, fulfil and protect human rights and dignity, and achieving the commitments is essential for quality programming.

                              IOM is committed to providing humanitarian assistance in a manner that respects and fosters the rights of beneficiaries. IOM recognizes that there is often an inherent and important power differential in the interactions between IOM staff members and beneficiaries. As AAP is an active commitment by IOM, the Organization understands AAP more concretely as follows:

• Taking account of their views, which means giving them a meaningful influence over decision-making about projects and programmes in a way that is inclusive, gender-sensitive, non-discriminatory, conflict-sensitive, does no harm and accounts for the diversity of people in the affected community. IOM ensures that informed consent, protection and safety concerns are key considerations in its response. The Organization places high value on incorporating feedback from migrants and affected populations into its projects and strategies, as well as into its collective response. While IOM has started to put in place individual feedback, complaints and response mechanisms in its interventions, the Organization is also involved in innovative approaches to joint feedback mechanisms that can reinforce transparency and mutual accountability and have a positive impact.
• Giving account by sharing information in an effective and transparent way across all the thematic areas of work and with all communities with whom IOM works. This includes information about IOM and its mission; about projects/programmes and how to access them, their timelines, entitlements related to IOM projects, selection criteria for taking part in the project and the reasons for any changes that may be needed; as well as the staff code of conduct and information on how to provide feedback or raise complaints. IOM has the responsibility to share information in an appropriate and timely way, depending on the context, to ensure that affected populations can understand that information, be empowered by it and become active participants in the IOM response. IOM also works with Humanitarian Country Teams and other key inter-agency fora and actors to agree on a strategy for sharing information, in order to streamline communication and ensure coherence of messaging.
• Being held to account by the affected populations it serves, which means that IOM ensures that affected communities and individuals have the opportunity to assess and, where feasible, inform modifications/adjustments to its actions. Being accountable involves consulting affected communities and individuals on what they think about the quality of the IOM response – at the country, regional and organizational levels – and acting upon the feedback or providing an appropriate explanation of why such action cannot be taken. Particular emphasis needs to be placed on accountability to those left furthest behind, including extremely vulnerable women, adolescent girls, people with disabilities, the elderly and people identifying as LGBTI. IOM has in place a "zero tolerance" policy on fraud, corruption and SEA by staff and contractors, as such conduct constitutes a very serious violation of the rights of the persons concerned.68 Populations should know about the code of conduct, be able to raise complaints and call for appropriate protection measures against such abuse, and be informed in general terms of the results of investigations into these complaints.

                              How to monitor and evaluate AAP mainstreaming

It is also vital that the communities being assisted are involved in the monitoring and evaluation of IOM interventions and that their views on the successes and failures, as well as on the impact of the intervention, are considered for improving practice and future responses. Accountability has always been embedded in the organizational structure of IOM and its operational policies and procedures. Monitoring AAP is also necessary for addressing the relationship between beneficiaries and IOM, and for ensuring that the populations' needs are met and that they participate in the intervention at the planning, design, implementation and M&E stages. The Accountability to Affected Populations Framework can be used as a reference for related monitoring activities.

Through its interventions, IOM aims to ensure that all beneficiaries and affected populations it assists receive the services and support they need. The following M&E questions can be asked when examining AAP as a cross-cutting issue.

                                (a) Does/Did the intervention use participatory methodologies in design, decision-making, implementation and monitoring of interventions to ensure the affected communities are involved from the initial stages of planning to identify their own needs, capacities, traditional and cultural divisions, and the strategies that are best suited to address these?

                                (b) Does/Did the intervention involve affected populations to ensure that their views are captured and influence further programming? For instance, this can be done by adding questions in data collection tools for monitoring and/or evaluation purposes that collect beneficiary feedback.

(c) Does/Did the intervention integrate indicators reflecting AAP efforts, to provide an understanding of the quality of IOM's service provision and to assist in identifying strengths and weaknesses in AAP-related implementation?

(d) Does/Did the intervention conduct reviews for high-profile and high-risk interventions to identify AAP practices or provide recommendations on how to improve them?

                                (e) Does/Did the intervention learn from, document and share good practice on AAP as a cross-cutting theme to assist in institutionalizing AAP practice across interventions, across countries and regions?

Questions identified for previous cross-cutting themes, such as the rights-based approach, protection or gender, can also cover elements related to AAP.

                                Evaluation terms of reference template

                                Click here to view Evaluation terms of reference template

                                RESOURCES

                                IOM resources

2021 IOM Checklist - Terms of Reference. Central Evaluation Unit.

                                United Nations Evaluation Group (UNEG)

2008 UNEG Code of Conduct for Evaluation in the UN System. Foundation Document, UNEG/FN/CoC(2008).

                                    2016 Norms and Standards for Evaluation. New York.

                                    2020 UNEG Ethical Guidelines for Evaluation.

                                  IOM sample evaluation matrices for a development-oriented project and a humanitarian project

                                  Module 6 of IOM Project Handbook (Internal link only)

                                  Click here to view Sample Evaluation Matrices

IOM scorecard for the assessment of applications when commissioning evaluator(s) (individual consultant or consulting firm) (Internal link only)

                                    Click here to view IOM Scorecard

                                      IOM inception report template

                                      Module 6 of IOM Project Handbook, pp. 474–475 (Internal link only)

                                      Click here to view Inception Report Template

                                      RESOURCES

                                      IOM resources

2021 IOM Quality Control Tool - Inception Report. Central Evaluation Unit.

                                        IOM evaluation report components template

                                        Click here to view Evaluation Report Components Template

                                        RESOURCES

                                        IOM resources

2022 Guidance on Quality Management of IOM Evaluations. Central Evaluation Unit.

                                            2022 Quality Control Tool - Evaluation Reports. Central Evaluation Unit.

                                          IOM final evaluation report template

                                          Click here to view Final Evaluation Report Template

                                          RESOURCES

                                          IOM resources

2022 Guidance on Quality Management of IOM Evaluations. Central Evaluation Unit.

                                              2022 Quality Control Tool - Evaluation Reports. Central Evaluation Unit.

                                            Evaluation brief template and guidance

                                            Click here to view Evaluation Brief Template and Guidance

                                            RESOURCES

                                            IOM resources

n.d.a Evaluation Brief Templates (Internal link only).

                                              Evaluative approaches: Most significant change

                                              What it is

Most significant change (MSC) is a type of participatory monitoring and evaluation.69 It involves gathering personal accounts of change and determining which of these accounts is the most significant and why. It is participatory because it involves multiple stakeholders in deciding what type of change to record and analyse. It is also a form of monitoring, because data are gathered throughout the implementation cycle and provide information for decision makers. Finally, MSC is a form of evaluation, because it provides information on higher-level results, such as outcomes and impact, which can be useful for assessing implementation performance as a whole.

                                              When to use it

MSC has been given different names, such as "monitoring without indicators" or the "story approach", as it does not make use of performance indicators and the answer to how change occurred is formulated as a story. In this sense, MSC is a very helpful instrument for explaining how and when change comes about, which makes it useful for supporting the development of a Theory of Change.

                                              How it is done

Scholars may disagree on the number of steps involved in using MSC, but in essence these can be summarized into three basic steps:

• Panels of key stakeholders at different hierarchical levels (such as field staff, programme staff, managers and donors) decide together on what type of significant change accounts/stories should be collected. As these stories come from the field, key stakeholders identify general domains of change, such as changes in people's lives, and the frequency with which they will be monitored.
• The collected stories are analysed and filtered up through the levels of authority typically found within an organization; at each level, a most significant change is identified, along with a detailed explanation of the selection criteria.
• The stories are shared and the values and selection criteria are discussed with stakeholders, thereby contributing to learning.

                                              Strengths and limitations

MSC not only supports the process of learning from the stories, as it provides information about intended and unintended impact, but also helps clarify the values held by different stakeholders in terms of identifying what success looks like. Note that MSC by itself is not sufficient for impact analysis, as it does not sufficiently explain why change happens and provides information about the extremes rather than the usual experience. Another limitation is that it is time-consuming and requires thorough follow-up and multiple stakeholder meetings.

                                              RESOURCES

                                              Asadullah, S. and S. Muñiz

                                                  2015 Participatory Video and the Most Significant Change: A guide for facilitators. InsightShare, Oxford.

                                              Davies, R. and J. Dart

                                                  2005 The ‘Most Significant Change’ Technique – A Guide to Its Use.

                                              International Development Research Centre’s Pan Asia Networking

                                                  2008 Jess Dart – Most significant change, part I. Video.

                                              Overseas Development Institute (ODI)

                                                  2009 Strategy Development: Most Significant Change. Toolkit.

                                                Request for Proposals (RFP) template

                                                Click here to view Request for Proposals (RFP) template