This chapter looks at the essentials for monitoring an intervention (a project, programme, strategy or policy). It introduces the Theory of Change (ToC) and the IOM Results Matrix, and describes the basic types of monitoring, including activity, results, financial and risk monitoring, as well as other types. It also covers remote management and monitoring and third-party monitoring (TPM), explains how different monitoring elements come together to form an overall M&E plan and, finally, looks at monitoring and reporting on results.
A strong project design is the foundation for successful monitoring. The proposal development stage clearly articulates the desired results an intervention aims to achieve, how it will achieve them and how progress towards these results will be measured. Modules 1 (Conceptualization), 2 (Proposal design) and 4 (Project management and monitoring) of the IOM Project Handbook provide an overview of this process and show how the foundations for successful monitoring are laid.
This chapter builds on the main points in the IOM Project Handbook, provides further technical guidance and expands on newer concepts such as the ToC. It primarily focuses on monitoring a project or programme, while showing that the same principles are also applicable to monitoring a strategy and/or policy. While many concepts covered in this chapter are important for both monitoring and evaluation (M&E) – such as the development of a ToC, the IOM Results Matrix, IOM Cross-Cutting Themes, remote management and the development of an M&E plan – guidance for conducting evaluation in IOM is covered in detail in chapter 5 of the IOM Monitoring and Evaluation Guidelines. IOM focuses on four key areas for monitoring: (a) activities; (b) results; (c) budget and expenditure; and (d) risk.
While various definitions exist of the programme theory, this section focuses on approaches most suited for the IOM operational context.
Programme theory is a logical thinking process on how to address a situation and respond to it through an intervention. It can therefore be useful in providing a conceptual framework for monitoring, as well as for evaluation.
Various labels for programme theory exist, including logic model, intervention logic, causal model, results chain and ToC. Two complementary approaches, which are pertinent for IOM interventions, are further elaborated in this chapter: (a) ToC; and (b) the logical framework, which is represented by the Results Matrix at IOM.
While both approaches map out how an intervention leads to results, each has a slightly different purpose.
BetterEvaluation
n.d. Home page.
Rogers, P.
n.d. Develop programme theory/theory of change. BetterEvaluation.
Clearly articulating the expected results or desired change of an intervention is the foundation for M&E. It is necessary to identify what requires change, what expected change looks like and, finally, how such change can be achieved through IOM interventions. This is where ToC comes in handy. While there are many different definitions of ToC, this section focuses on approaches relevant to the IOM context.
A ToC may be viewed as a tool or methodology to map out the logical sequence of an intervention from activities to results, showing multiple pathways that may lead to change, including pathways not related to the planned intervention. It may also be viewed as a deeper reflective process and dialogue among staff and stakeholders, reflecting the values and philosophy of change that make more explicit the underlying assumptions of how and why change may occur as a result of an intervention. At its best, a ToC is a combination of these two views.
It is most often defined as illustrating a link between activities, outputs, outcomes and objectives, creating a chain of results, referred to as the pathway of change or the causal pathway.
What is a Theory of Change?
A ToC can be viewed as a product of a series of critical thinking exercises that provide a comprehensive picture of the different levels of change expected to occur due to an intervention, at the stage of its development, during its implementation, as well as following its completion.
Center for Theory of Change
n.d.a Theory of Change examples.
n.d.b What is theory of change?
n.d.c Home page.
Stein, D. and C. Valters
2012 Understanding theory of change in international development. Justice and Security Research Programme (JSRP) Paper 1. JSRP and The Asia Foundation.
The use of a ToC is increasingly common, and ToCs are developed for all types of interventions. A ToC can be applied to design, monitor and evaluate different types of interventions and is best used to capture the complexity of transformation and change. Because a ToC acknowledges that change is not linear, but dynamic and complex, it often seeks to articulate social, political and community-based change(s) or empowerment initiatives. A ToC is a process-oriented approach that can be used to analyse the interrelations and/or interactions in complex systems in which IOM, partners and allies work. Such a process helps navigate unpredictable and complex environments and helps track and assess change in the system to which an intervention may contribute.
It is important to note that different terminologies may be applicable when defining the multiple pathways of change, such as: (a) objectives, outcomes and outputs; (b) long-term, intermediate and short-term outcomes; or (c) outcomes and pre-conditions. This section uses the terms objectives, outcomes, outputs and activities to align with IOM’s M&E terminology and either develop or supplement IOM results matrices.
ToC is a guiding framework for all stages of thinking, action and sense-making for interventions involved with social and/or political change processes.
Graphic depiction
When graphically depicting a ToC, diagrams can be generally flexible in format and may be simple or complex. They can be vertical, horizontal or circular.
The graphic depiction of a ToC can help with mapping out multiple causal pathways to identify the most feasible one for a given intervention. Another advantage of graphically depicting a ToC is that it makes possible causal links easier to understand and immediately visible. It enables comparisons between different pathways and can help identify implicit assumptions.
Participatory approach
The process of developing a ToC should be participatory and collaborative, and include key stakeholders, as well as the beneficiaries, or people that the Organization seeks to assist, and/or affected populations, who can offer their different perspectives to define what an expected change within a specific thematic field may look like.
Multiple pathways of change
A ToC acknowledges that change is dynamic and complex and can show different possible pathways that might lead to change (see Figure 3.2, where each colour of arrow represents a different pathway). The process of developing a ToC helps discover these multiple pathways of change.
A common challenge when using the ToC is the lack of a “theory” and/or using a weak theory. For instance, weak theories do not explain how change is expected to occur or do not state/establish assumptions clearly. It is important to ensure that the ToC actually articulates a logical theory, which makes the implicit causal mechanisms explicit and supplements the graphical representation.
By developing a valid and relevant ToC, implementers can ensure that their interventions deliver the appropriate activities for desired and realistic results. A well-built ToC also makes interventions easier to monitor, evaluate, bring to scale and sustain, as each step, from the ideas and assumptions behind it to the results it hopes to achieve and the resources required, is clearly articulated within the theory. A well-articulated ToC can also promote a common understanding of the intervention for all actors involved in implementation, thereby facilitating a cohesive and common approach.
The process of developing a ToC can help identify whether, and at which stage or level, assumptions, logical jumps or missing key steps in the change process are taking place. Developing a ToC is a good way to raise further questions such as the following:
a) Why is a particular change expected to happen?
b) What evidence is available to support that the expected change will occur or has occurred?
c) What logical jumps are made?
d) What assumptions are made?
When developing a ToC, it is important to understand its purpose. ToCs can be applied at different levels, ranging from world views, strategies and policies, to the project or programme level and all the way down to the activity level. For instance, a world-view ToC can help clarify the social and political theories that inform one's thinking. An organizational ToC can help inform the vision, mission and values the organization needs in order to contribute to social change. A policy-level ToC can help identify how an organization expects change to evolve in a specific sector and how it contributes to it.
Scholars have not reached agreement on an overall definition of, and methodology for, developing a ToC, and donors may follow different approaches to drafting a ToC. For instance, the approach by the United Kingdom Foreign, Commonwealth and Development Office (FCDO) to drafting a ToC largely differs from that of the United States Agency for International Development (USAID) or that of the European Union.
Source: Vogel and Stephenson, 2012
Note: FCDO was formerly the Department for International Development (DFID).
Source: Kedzia, 2018
Irrespective of how different stakeholders approach a ToC, they all have one commonality: they enable the articulation of how, why and under what conditions a change is expected to occur within a specific context. While there is no one standard approach to developing a ToC, the following section illustrates a formula that can be applicable in most contexts and is commonly used by USAID to measure social and behavioural change.
Center for Theory of Change
n.d.a Theory of Change examples.
Kedzia, K.
2018 Theory of change: It’s easier than you think. USAID Learning Lab, 13 March.
Organisation for Economic Co-operation and Development (OECD)
2019 Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use. OECD/Development Assistance Committee (DAC) Network on Development Evaluation.
Vogel, I. and Z. Stephenson
2012 Appendix 3: Examples of theories of change. FCDO, London.
While ToCs can be illustrated in different ways, the logic of the chain of results, or causal pathway, can be tested using if-then-because statements. In other words, such statements help reveal the assumptions that are “tested” through actions/activities; these assumptions play a central role in developing the ToC.
Every step taken, from the overall objective of the intervention to each of its activities, has a ToC behind it that can explain and articulate the logical connections (or the different pathways) between the lower-level results, such as between outputs and outcomes, as well as between the higher-level results, such as the outcomes and objectives.
Common challenges when designing an intervention are logical leaps and gaps. There may be a disconnect between a strong problem analysis and seemingly unrelated activities, with weak links and/or assumptions between objectives, outcomes, outputs and activities. By surfacing underlying assumptions, the ToC can provide a bridge between analysis and programming.
Generally, a ToC can be articulated using the “if X, then Y, because of Z” formula. That is, “if X action/activity occurs, then Y result will occur, because of Z assumption(s)”. The process of surfacing such underlying assumptions can help identify where logical jumps are made or where key steps are missing in the change process.
The following section will focus on one of the many possible pathways illustrating the application of the if-then-because formula, noting that this exercise can be repeated for many different pathways for different levels.
The following elaborates on the example, identifying assumptions potentially surfaced through this process for this one particular pathway down to the output level. As many different pathways can exist for each level, this exercise can be done for each possible pathway.
🢂 Multiple pathways: During the process of identifying multiple pathways, it is important to note that not all pathways may be implemented by IOM; some of them can be implemented by actors other than IOM and lie outside IOM's control.

Objective: Contribute to stability and security, and build a foundation for political and social development in conflict-prone communities of country X.

Objective-level Theory of Change: If relationships between the local authorities and conflict-prone communities in area Y of country X are strengthened, then stability and security, and building foundations for political and social development in conflict-prone communities of country X will be supported, because …

🢂 Multiple pathways: With each result articulated at the outcome level, an if-then-because statement is articulated, and the assumptions surfaced at the objective-level ToC.

Outcome-level Theory of Change: If access to livelihood opportunities for conflict-prone communities of country X is increased, then relationships between the local authorities and communities in area Y of country X are strengthened, because …

🢂 Multiple pathways: With each result articulated at the output level, an if-then-because statement is articulated, and the assumptions surfaced at the outcome-level ToC.

Output-level Theory of Change: If vocational skills among beneficiaries in conflict-prone communities of country X are developed/enhanced, then their access to livelihood opportunities will increase, because …

🢂 Multiple pathways: With each result articulated at the activity level, an if-then-because statement is articulated, and the assumptions surfaced at the output-level ToC.

Additional examples of ToCs surfacing assumptions for multiple pathways can be found in the resources listed at the end of this section.
When reviewing a ToC, one may want to focus on the links a ToC is trying to make. The following checklist consists of five simple questions that may help with the review process:
- Are both the action (X) and the result (Y) clearly formulated? Is it clear what approach is being utilized? When considering the result, ensure that the terms used are clear. Ask whether the result is measurable and, if so, what would be a good source of verification, how it would be monitored and how sources of verification for this result can be obtained.
- Is the result realistic and achievable given the scale of action, resources and time frame? Try to assess whether the results are proportional to the scale and scope of action. Is the result observable? Is it immediate, intermediate or long term? How will the result be measured? Are there any logical jumps and/or gaps between a modest action and an ambitious result?
- Do the assumptions explain why the approach is suitable? A strong assumption is able to articulate why the chosen approach should produce the expected change/result. This reflects the intervention’s problem analysis.
- Are the relevant preconditions that are necessary for success included in the assumptions? Assumptions targeting necessary preconditions mostly reflect conditions beyond the intervention's control that are believed to be necessary to attain results, but do not explain why or how change will occur. This may also be viewed as being linked to thinking about risk. To identify such assumptions, it is helpful to ask: if the intervention were to fail, what might have gone wrong?
- Does the ToC make sense if it is reversed?
Earlier, it was shown that ToCs can be applied to an activity, project, programme, strategy or policy. The following shows how to review a ToC at all levels by applying the “if X, then Y, because Z” formula to each one. When the formula is applied in this cascading way, the higher-level ToC's “if” statement becomes the lower-level ToC's “then” statement, meaning that the action/intervention of the higher-level ToC should correspond with the result/desired change of the lower-level ToC.
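As an illustrative aid only (not an IOM tool or template), this cascading review can be sketched as a simple consistency check: each higher level's “if” should restate the “then” of the level below it. The short statements below paraphrase the country X example used in this section.

```python
# Illustrative sketch only: checking that consecutive ToC levels chain together,
# i.e. the higher level's IF restates the lower level's THEN.
# The statements paraphrase the country X example in this section.

toc_levels = [  # ordered from objective (highest) down to output (lowest)
    {"level": "objective", "if_": "relationships strengthened",
     "then_": "stability and security supported"},
    {"level": "outcome", "if_": "access to livelihoods increased",
     "then_": "relationships strengthened"},
    {"level": "output", "if_": "vocational skills developed",
     "then_": "access to livelihoods increased"},
]

for higher, lower in zip(toc_levels, toc_levels[1:]):
    linked = higher["if_"] == lower["then_"]
    status = "OK" if linked else "CHECK: possible logical jump"
    print(f"{lower['level']} -> {higher['level']}: {status}")
```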
Figure 3.8 applies this method of ToC review to the above example:
Monitoring the IF statements: Include questions in the data collection tools that directly relate to the if statement.
Example of output-level ToC:
- If vocational skills among beneficiaries in conflict-prone communities of country X are developed/enhanced
- Was there an increase of knowledge and vocational skills? (Assess whether there was an increase in knowledge through the training pre/post-test; beneficiary feedback);
- Are beneficiaries using attained knowledge and vocational skills? If yes, how so; if no, why not (draft lesson learned)?
Monitoring the THEN statements: Include questions in your data collection tools that directly relate to the then statement.
Example of output-level ToC:
- Then their access to livelihood opportunities will increase
- Have livelihood opportunities for target beneficiaries increased?
- If yes, was it due to the intervention; if no, why not (draft a lesson learned)?
Monitoring the BECAUSE statements: Include questions in your data collection tools that directly relate to the because statement; an illustrative sketch pulling these questions together follows the examples below.
Examples of output-level ToC:
- Because improved vocational skills may increase the chance for beneficiaries to find a job
- Were beneficiaries unemployed prior to training?
- Did they find a (better) job due to increased skills (beneficiary feedback)?
- Because capacities are relevant to the opportunities in the target areas
- Ask beneficiaries if this is accurate.
- Because opportunities exist in the area
- Conduct context analysis.
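As a purely illustrative sketch (not an IOM tool), the monitoring questions above can be stored alongside the ToC statement they test, so that data collection tools cover the if, then and because parts of each pathway. The statements and questions below paraphrase the output-level example above.

```python
# Illustrative sketch: keeping monitoring questions linked to the ToC statement
# they are meant to test, so data collection tools cover IF, THEN and BECAUSE.

output_level_toc = {
    "if": {
        "statement": "Vocational skills among beneficiaries are developed/enhanced",
        "questions": [
            "Was there an increase of knowledge and vocational skills? (pre/post-test, beneficiary feedback)",
            "Are beneficiaries using the attained knowledge and vocational skills? If not, why not?",
        ],
    },
    "then": {
        "statement": "Access to livelihood opportunities will increase",
        "questions": [
            "Have livelihood opportunities for target beneficiaries increased?",
            "If yes, was it due to the intervention? If no, why not?",
        ],
    },
    "because": {
        "statement": "Improved vocational skills may increase the chance to find a job",
        "questions": [
            "Were beneficiaries unemployed prior to training?",
            "Did they find a (better) job due to increased skills?",
        ],
    },
}

for part, block in output_level_toc.items():
    print(f"\n{part.upper()}: {block['statement']}")
    for question in block["questions"]:
        print(f"  - {question}")
```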
Other resources
Anderson, A.A.
2006 The Community Builder’s Approach to Theory of Change: A Practical Guide to Theory Development. The Aspen Institute, New York.
BetterEvaluation
n.d. Home page.
Brown, A.-M.
2016 What is this thing called ‘Theory of Change’? USAID Learning Lab, 18 March.
Davies, R.
2018 Representing theories of change: Technical challenges with evaluation consequences. Centre of Excellence for Development Impact and Learning Inception Paper 15, London.
Hivos
2015 Hivos ToC Guidelines: Theory of Change Thinking in Practice. The Hague.
Lysy, C.
2018 Illustrating models and theories of change. BetterEvaluation, 10 January.
Rogers, P.
2014 Theory of Change. Methodological Briefs: Impact Evaluation No. 2. UNICEF Office of Research, Florence.
2017a Using logic models and theories of change better in evaluation. BetterEvaluation, 19 May.
2017b BetterEvaluation FAQ: How do you use program theory for evaluating systems? BetterEvaluation, 21 June.
Valters, C.
2015 Theories of Change: Time for a radical approach to learning in development. Overseas Development Institute, London.
Vogel, I.
2012 Review of the use of ‘Theory of Change’ in international development. Review report. FCDO.
Graphical representation tools
Evaluation Toolbox
n.d. Problem Tree/Solution Tree Analysis.
Microsoft
n.d. Microsoft Visio.
Theorymaker
n.d. Home page.
As previously mentioned, the logical framework also belongs to the programme theory family of approaches; it helps set out an intervention's operational design and is a foundation for M&E. It is an overview of an intervention's intended approach to attain results, based on the situation and problem analysis undertaken during the conceptualization stage. Specifically, it uses a matrix to summarize the logical sequence in which an intervention aims to achieve desired results, the activities required to attain these results and the indicators and sources of verification that help measure progress towards achieving results.
Within IOM, the Results Matrix included in the IOM Project Proposal Template, also known as the results framework, bears the closest resemblance to the logical framework. Module 2 of the IOM Project Handbook provides detailed guidance on the development, drafting and design of the Results Matrix. The IOM Project Handbook identifies the Results Matrix as a strategic management tool that facilitates the “planning, monitoring, evaluating and reporting on the implementation of a project and progress towards achieving its results.”
IOM's Project Information and Management Application (PRIMA) captures project-related data. All project-related documents required to track a project are available on PRIMA. This includes the proposal, the IOM Results Matrix and budget-related documents. A PRIMA User Guide is available internally on IOM's intranet.
IOM resources
2017a Module 2. In: IOM Project Handbook. Second edition. Geneva (Internal link only).
2019a PRIMA User Guide. MA/00651 (Internal link only).
Separating the ToC from the logical framework is challenging, because they both stem from the same family of approaches related to programme theory. A ToC, as described in the previous section, starts from the premise that the process of social change is complex, taking into account different perspectives and analysing the underlying assumptions of an intervention’s design. In contrast, a logical framework offers a more simplified picture of intervention logic, not considering all the underlying assumptions and related causal links, but rather only those related to the particular intervention and selected during the project’s initial conceptualization. A logical framework can be viewed as a more rigid and linear way of thinking about change.
Originally, logical frameworks were intended to summarize complex stakeholder discussions about the objectives and results that an intervention would reach and contribute to, as is also the case with the ToC rationale. The intention was to analyse internal and external dependencies that would influence the intervention's effectiveness, including direct assumptions that need to be taken into consideration in the analysis.
Table 3.1. Comparing and contrasting Theory of Change and logical framework
Theory of Change | Logical framework |
Source: Adapted from Hivos, 2015, p. 15.
While assumptions play a critical role in the development of both a ToC and the IOM Results Matrix, there is a clear distinction between assumptions as they are used in the ToC and as elaborated in the IOM Results Matrix. The most important distinctions can be summarized as follows:
Assumptions: ToC versus IOM Results Matrix
Assumptions in the ToC | Assumptions in the IOM Results Matrix
ToC assumptions help to articulate the logical connection/causal pathway between lower- and higher-level results. | Assumptions within the IOM Results Matrix are the preconditions (necessary and positive) on which the success of a higher-level result depends.
A further explanation of logical framework assumptions as used within the IOM Results Matrix is provided further down in this section.
Despite these differences, a ToC and a logical framework remain complementary and can be used together, and ToC thinking can be applied to the process of drafting the IOM Results Matrix. This encourages intervention developers to make explicit the ToC assumptions about how desired change will occur, highlighting the change process and related causal pathways between expected results. These can then also be articulated when addressing the “why” and “how” within the rationale section of the Project Proposal Template, more specifically, providing the strategic thinking that informs the “assumptions and hypotheses underlying the causal relationships between activities, outputs and outcomes”.
IOM resources
2017a Module 1 and Module 2. In: IOM Project Handbook. Second edition. Geneva (Internal link only).
Other resources
Hivos
2015 Hivos ToC Guidelines: Theory of Change Thinking in Practice. The Hague.
Vogel, I.
2012 Review of the use of ‘Theory of Change’ in international development. Review report. FCDO.
Developing an IOM Results Matrix is one of the foundations for monitoring an intervention at IOM. It provides an overview of the parameters for the measurement of intervention results. The Results Matrix, mandatory for all IOM project proposals and available in the IOM Project Handbook, can also be used for the implementation of a strategy or policy and is drafted during the initial development stage.
Understanding how a results matrix is developed can contribute to improving general understanding of how to monitor intervention progress and results effectively. The following sections will illustrate the various steps involved in developing a Results Matrix.
Development of the Results Matrix should build on the analysis carried out in the conceptualization phase. In particular, the problem tree and solution tree, if developed when conducting the situation analysis, may already map out the various causal pathways.
Note: Visualization of an IOM Results Matrix located in Module 2 of IOM Project Handbook, p. 121.
Vertical logic
The term vertical logic refers to the “means-end relationship between activities and the results”, as well as the relationship between “the results and their contribution to the broader objective”.
The IOM Results Matrix uses the terms “objective”, “outcome”, “output” and “activity” to demonstrate vertical logic. The diagram below provides a visual representation of the vertical logic (as well as horizontal logic) within a results matrix.
Source: Adapted from Module 2 of IOM Project Handbook, p. 122 (Internal link only).
Vertical logic focuses on the results at each level. It is the process of taking the logical steps from the objectives down to the activities, with the aim of linking and demonstrating how the results at each level contribute to the next. Results that are properly linked demonstrate the causal connection from one result level to the next, forming a causal pathway or results chain.
Engaging in a participatory approach to the development of a results matrix, ideally including the views of key stakeholders, such as beneficiaries, or people that IOM seeks to assist, will lead to better formulated results and indicators. This is essential for successful monitoring once the intervention has begun.
Performing a stakeholder analysis during the conceptualization phase of an intervention identifies relevant stakeholders, assesses their interests, the ways these interests are likely to affect the intervention, as well as the level of their involvement. This process can support the identification of key stakeholders for involvement in the development process.
Horizontal logic
Horizontal logic “defines how each of the levels in the vertical logic will be measured and the assumptions that are required for the means-end relationship to hold true”.
Source: Adapted from Module 2 of IOM Project Handbook, p. 122 (Internal link only).
Horizontal logic completes the Results Matrix by identifying what assumptions are required for the results to occur and how progress on each of the results will be measured.
IOM resources
2017a Module 1 and Module 2. In: IOM Project Handbook. Second edition. Geneva (Internal link only).
When it comes to expressing results within a results matrix, there is diversity in the terminology used by different organizations and agencies. However, the underlying logic for the development of the Results Matrix is similar, allowing for alignment of concepts and related reporting, as well as M&E processes. The following chart provides examples of the Results Matrix terminology IOM has used in the past, as well as the terminology IOM currently uses, and compares these with the terminology used by several other key development entities. Careful consideration should be given to ensuring that the results at each level of the vertical logic line up when transferring interventions between donor/partner formats and IOM templates.
IOM works with a number of different donors and other development actors. Familiarizing oneself with the terminology used by these actors and how it relates to IOM terminology allows for more accurate and effective monitoring and reporting of intervention results.
Source: IOM Project Development Training, 2018.
Note: FCDO was formerly the Department for International Development (DFID).
Applying vertical logic
IOM currently uses the terms objective, outcome, output and activity to demonstrate vertical logic, while indicator, assumption, data source and collection method, target and baseline help to elaborate the horizontal logic. The following is a summary of those definitions:
Vertical logic definitions
Objective | The objective is the most significant, realistic goal to which the intervention can contribute. It seeks to align with a broader, long-term strategy, whether internal or external.
Outcomes | An outcome is the intended change in institutional performance, individual or group behaviour or attitudes, or the political, economic or social position of the beneficiaries, or people that IOM seeks to assist.
Outputs | An output is the intended change in the skills or abilities of beneficiaries, or people IOM seeks to assist, or the availability of new products or services as a result of intervention activities.
Activities | Activities include coordination, technical assistance, training, production, delivery, transportation and any other tasks organized and executed under the intervention.
IOM resources
2017a IOM Project Handbook. Second edition. Geneva (Internal link only).
Other resources
Church, C. and M. Rogers
2006 Designing for Results: Integrating Monitoring and Evaluation in Conflict Transformation Programs. Search for Common Ground, Washington, D.C.
Tool to create results matrices
n.d. Microsoft Visio.
Applying horizontal logic
The following sections provide more detail on how horizontal logic is applied to the Results Matrix. The components of horizontal logic are assumptions, indicators, baseline, targets and data source and collection methods. The horizontal logic connects the measurement of results (through indicators) with the assumptions behind how the results are expected to occur.
Assumptions
Please refer to the visualization of the IOM Results Matrix above to identify the Assumptions column in the Results Matrix.
“Assumptions are the necessary and positive conditions that allow for a successful means–end relationship between the different levels of results.”
Module 2 of IOM Project Handbook, p. 137 (Internal link only).
Assumptions help complete the intervention logic by placing the intervention in the specific context in which it will be implemented. Assumptions also help identify important conditions on which the success of the intervention depends and that lie outside IOM’s line of control. In this sense, assumptions identify the required preconditions for results to occur.
An assumption checklist is provided as a tool within the IOM Project Handbook.
Examples of well-written assumptions are available in Module 2 of the IOM Project Handbook, pp. 137–140.
Indicators
Please refer to the visualization of the IOM Results Matrix above to identify the Indicators column in the Results Matrix.
After having set up achievable and well-defined results and having identified related assumptions, the next step in developing a Results Matrix is to select indicators to monitor progress towards achieving those results. Indicators consist of information that signals change.
Indicators can be defined as “the quantitative or qualitative factors or variables to measure achievement or to reflect expected changes”.
Indicators are not intended to demonstrate why the intervention has made a difference; this is inter alia covered by evaluation. Similarly, they do not demonstrate how change occurs, which is, inter alia, covered by the ToC or by evaluation. They help understand whether change has occurred.
There is no specific rule on the ideal number of indicators for any one result, but they should be able to measure whether the result has been achieved.
Components of an indicator: what is to be measured; the unit of measurement; the target population; and the direction of change.

Indicators may be qualitative or quantitative; an indicator may also be binary (which can be qualitative or quantitative) or a proxy indicator.
Table 3.2 summarizes the use of indicators at the different levels of results.
Table 3.2. Levels of intervention control
Result level | Indicator description | Level of intervention control |
Objective | Helps confirm changes to the following: … | |
Outcome | Helps confirm intended change in the following: … | |
Output | Helps confirm the intended change in the following: … | |
To ensure the measurement of the effects of an intervention on different intended or unintended target populations, it is important to disaggregate indicators by age, gender, migration status and any other identifiers relevant to the intervention.
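As a minimal sketch of what such disaggregation can look like in practice, the snippet below counts trained participants by sex and age group; the records and age bands are invented for illustration only.

```python
# Minimal sketch: disaggregating a participant-count indicator by sex and age group.
# The records and age bands below are invented for illustration only.
from collections import Counter

participants = [
    {"sex": "F", "age": 24}, {"sex": "M", "age": 31},
    {"sex": "F", "age": 45}, {"sex": "F", "age": 19}, {"sex": "M", "age": 52},
]

def age_group(age: int) -> str:
    return "under 25" if age < 25 else "25-49" if age < 50 else "50+"

counts = Counter((p["sex"], age_group(p["age"])) for p in participants)
for (sex, group), n in sorted(counts.items()):
    print(f"Trained participants, sex={sex}, age {group}: {n}")
```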
Already at the development stage of an intervention, it is important to think about how outcomes can be achieved within its time frame, or how their achievement can be measured after the intervention concludes. There can be short- or medium-term outcomes, depending also on the nature and duration of the project.
How an outcome is formulated so that its indicators can be measured within the implementation period, or shortly after completion through mechanisms put in place, will greatly depend on the following:
- Type and complexity of intervention;
- Type of results the intervention aims to achieve;
- Duration of the intervention;
- Resources available for verification.
A capacity-building intervention, which includes a training activity for officials from a government institution, would want to measure institutional change through the application of knowledge acquired. Depending on the topic and the local context, this may be measurable only three to six months after the training has taken place. Therefore, developers, M&E officers and intervention managers should consider when the training can take place within the project timeline to allow sufficient time to collect data on the short- and medium-term outcome-level results.
Although indicators need to be tailored to the results of an individual intervention, it can be useful to look at the successful indicators of other interventions and modify them as needed. In some cases, standard indicators have been developed within specific thematic areas and may act as a guide to help measure results vis-à-vis certain international norms and standards, although they may require further adaptation.
It is also recommended to align indicators with existing country-level, regional, thematic and global strategies and policies. Examples include cross-checking indicators to ensure their alignment with indicators in a country-level United Nations Sustainable Development Cooperation Framework (UNSDCF), a government strategy or national action plan addressing a particular thematic area or topic, or indicators established as part of the Sustainable Development Goals (SDGs).
PRIMA helps monitor project progress through the Results Matrix and project indicators. When creating an IOM Results Matrix in PRIMA, users will be asked to enter indicators for the objective and for each results statement. For each indicator, PRIMA requires entry of an indicator type, which allows for two options: numeric or text. For a numeric indicator, intervention developers will only enter numeric data for this indicator. A text-type indicator will allow users to report using text-based data.
In the case of a numeric indicator type, users are asked to identify an indicator category, which provides three options: beneficiary, service or item. If beneficiary is selected, additional fields will be displayed for completion, including beneficiary category, unit of measure (individual, household or community), beneficiary type, as well as allowing for a multiplier or targeting individual or unique beneficiaries.
For more information on entering indicators into an IOM Results Matrix in PRIMA, see the PRIMA End User Training Guide: PD Creates Project Proposal (IOM Template), available to internal IOM users through the IOM intranet.
While developing indicators, the following common pitfalls should be avoided:
- Indicator overload: Too many indicators, often overlapping and measuring the same thing. One to three indicators for a results statement may suffice.
- Output fixation: Indicators that are focused on counting outputs only.
- Indicator imprecision: Indicators that are unclear and may measure results at a too high or too low level.
- Excessive complexity: Indicators that are not clear and are very difficult to understand.
It is also important to keep in mind the specific, measurable, achievable, relevant and time-bound (SMART) criteria used to develop indicators:
Source: Module 2 of IOM Project Handbook, p. 147 (Internal link only).
An indicator must also be both relevant (directly related to the result) and achievable (requires a reasonable amount of time and resources to gather and analyse data). Assessing whether an indicator is achievable requires assessing the data source/means of verification (MoV).
Another useful approach to drafting indicators is to apply QQT targeting, which ensures that each indicator is measurable in terms of quantity, quality and time (QQT).
An indicator can define the how many, how often, how much or how long, or a mixture of these, as illustrated in the example below:
Step 1: Basic indicator | % of participants trained that report using the information gained |
Step 2: Add quality (What kind of change) | % of border management officials of country X trained that report using the tools and knowledge provided in their work |
Step 3: Add quantity (How much) | % of border management officials of country X trained that report using the tools and knowledge provided in their work on a regular basis |
Step 4: Add time (By when) | % of border management officials of country X trained that report using the tools and knowledge provided in their work on a regular basis six months after the training |
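The stepwise refinement in the table above can also be thought of as filling in optional quality, quantity and time fields on a base indicator. The sketch below is illustrative only; the field names are not IOM terminology.

```python
# Illustrative sketch: composing a QQT indicator from a base statement plus
# optional quality, quantity and time qualifiers (field names are not IOM terms).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    base: str                       # what is to be measured
    quality: Optional[str] = None   # what kind of change
    quantity: Optional[str] = None  # how much / how often
    time: Optional[str] = None      # by when

    def render(self) -> str:
        parts = [self.base, self.quality, self.quantity, self.time]
        return " ".join(p for p in parts if p)

indicator = Indicator(
    base="% of border management officials of country X trained that report using",
    quality="the tools and knowledge provided in their work",
    quantity="on a regular basis",
    time="six months after the training",
)
print(indicator.render())
```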
IOM resources
2017a IOM Project Handbook. Second edition. Geneva (Internal link only).
2019b PD creates Project Proposal – IOM Template, Results Matrix. IOM PRIMA End-User Training Guide (Internal link only).
n.d.a IOM PRIMA for All User Guide (Internal link only).
Other resources
Organisation for Economic Co-operation and Development (OECD)
2010 Glossary of Key Terms in Evaluation and Results Based Management. Paris.
People in Need
n.d. IndiKit: Guidance on SMART Indicators for Relief and Development Projects.
United Nations Statistics Division (UNSD)
2020 E-Handbook on the Sustainable Development Goals.
World Bank
2010 Constructing and targeting indicators using QQT. In: The LogFrame Handbook: A Logical Approach to Project Cycle Management. Washington, D.C., pp. 38–47.
Tool to support indicator development
n.d. Microsoft Visio.
Data sources and data collection method
Please refer to the visualization of the IOM Results Matrix above to identify the data sources and data collection methods in the Results Matrix.
In order to monitor indicators, a source of information to verify each indicator is required. Within IOM, these sources of information are referred to as data sources, defined as “identify[ing] where and how the information will be gathered for the purposes of measurement of specific indicators.”
Data sources should identify what information to collect, how to collect that information and in what form (collection method) and with what frequency. Data sources can include documents (such as reports, government publications and records), data sets (such as national census data and project monitoring data sets), records (such as training attendance sheets and beneficiary case files) and people (such as beneficiaries and/or affected populations, stakeholders, project staff and government officials).
Indicator | Data source | Collection method |
Percentage of households earning more cash after project | People in households | Household survey |
Data sources can be primary or secondary. Primary data is collected by the implementing organization and may include personnel, budget, administrative data, surveys, interviews and direct observations. Secondary data is collected by others outside the implementing organization.
When identifying a data source, the following questions can help assess its suitability:
- Will there be access to the information?
- Where will the information be attained from?
- Can the data source provide quality data?
- How will the information be attained given limited resources?
- How costly and feasible is collecting the information?
It is required that each known data source and collection method be specified in the initial Results Matrix, as time, capacity and budget must be built into the intervention in order to successfully implement them. Available resources and frequency of collection should also be identified during the development phase of an intervention. This is particularly important for M&E purposes, as it can also have specific budgetary implications.
It is equally important to collect only the data required for measurement of the indicators and data that is intended for further use. Collecting additional information may result in added cost and time. Chapter 4 provides further information on data collection methods and tools.
Baseline and targets
Please refer to the visualization of the IOM Results Matrix above to identify the baseline and target columns in the Results Matrix.
Baseline data and targets can be defined as follows: “Baseline data provides a foundation against which to measure change over time, while targets establish precisely the mark the project intends to hit.”
Baseline data can be considered as the starting point, depicting the initial conditions before the implementation of an intervention. The baseline provides the first measurement of an indicator. It sets the current condition against which future change will be measured.
Depending on the context and nature of an intervention, baseline data may not always be available during its development phase. In such cases, it may be appropriate to propose collecting baseline data (and subsequently the targets) once the intervention begins, in agreement with other key counterparts and donors.
In some cases, a baseline study or assessment may be required to identify the appropriate baseline data, if a budget is available for it, as it may have costly implications. For instance, in cases where changes to the general population are of interest, a census can be used as a baseline; however, this may not always be available, and it would be costly to conduct one related to the intervention. In other scenarios, it may not be possible to conduct a needed baseline study due to security restrictions or other reasons outside of IOM’s control. In such cases, data collected during the first monitoring visit, when a specific indicator is measured for the first time, can be considered a baseline for that indicator.
In other cases, particularly in multiphase programmes, baseline data may be available through other previously implemented interventions. However, in all cases, it is critical to be consistent in the way data is presented within the intervention, at the development phase and throughout reporting, as well as to always have a solid understanding and justification for why baseline data is presented in a particular way.
Establishing targets sets the threshold for a particular indicator; it establishes what the intervention hopes to achieve, measured against a baseline. If targets are met for all of a result’s indicators, the intervention can be considered as having successfully achieved its result. The target set for a particular indicator should be appropriate within a given context or location, keeping in mind that an appropriate target in one scenario, location or context, may not be appropriate for another context. Key considerations to establishing a target may include budget considerations (what is the maximum that can be accomplished with the available resources), donor or key counterpart priorities (what is the intervention being asked to achieve) and contextual limitations (what is feasible given the environment in which the intervention is implemented). Setting targets for some results may be straightforward, while for others, it can be quite complex.
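As a hedged illustration (the figures below are invented and neither formula is prescribed by IOM), progress against a target can be expressed either as the share of the target reached or, where a non-zero baseline exists, as the share of the planned change realized:

```python
# Illustrative sketch: expressing progress against a target in two common ways.
# Figures are invented; neither calculation is prescribed by IOM.
baseline = 40.0   # e.g. indicator value measured at the start of the intervention
target = 70.0     # value the intervention hopes to reach
achieved = 55.0   # latest measured value

share_of_target = achieved / target                                    # about 0.79
share_of_planned_change = (achieved - baseline) / (target - baseline)  # 0.50

print(f"Achieved as a share of the target: {share_of_target:.0%}")
print(f"Share of the planned change realized: {share_of_planned_change:.0%}")
```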
IOM resources
2017a Preparing the Results Matrix, Module 2. In: IOM Project Handbook. Second edition. Geneva, pp. 123–152 (Internal link only).
n.d.b Samples of Completed IOM Results Matrices. Monitoring and Evaluation Sharepoint folder (Internal link only).
Once the Results Matrix is developed and finalized, it can be converted into a monitoring tool that can be used during the implementation of an intervention: the Results Monitoring Framework (RMF). The RMF is developed at the start of the implementation, after a project has been funded and activated. The RMF is the primary tool to monitor the results of any intervention. It enables all members of the implementing team, as well as stakeholders, to track the progress being made towards achieving intended results. By specifying the data collection method, the RMF also highlights the requirements to obtain high-quality data. The RMF can be used alongside the detailed workplan for monitoring activities, financial reporting tools for monitoring budget compliance and the risk management plan for monitoring risks, to ensure a holistic monitoring approach.
Results Monitoring Framework
Source: Module 4 of IOM Project Handbook, p. 262 (Internal link only).
The RMF reflects much of the same information contained in the Results Matrix, but it contains five additional columns: data analysis, frequency, responsible person, achieved and progress. The RMF also drops the assumptions column found in the Results Matrix.
Figure 3.13. Results Matrix and Results Monitoring Framework comparison
Results Matrix
Results | Indicators | Data source and collection method | Baseline | Target | Assumptions | |
Objective | ||||||
Outcome | ||||||
Output |
Results Monitoring Framework
Results | Indicators | Data source and collection method | Data analysis | Frequency | Responsible person | Baseline | Target | Achieved | Progress |
Objective |
Outcome |
Output |
The five new columns are completed as follows:

Data analysis: This column is to be filled with a description of how the data collected will be analysed. The main categories of analysis are qualitative and quantitative. For example, if the indicator is “presence of legislation that reflects international best practice”, the data source would be where the information (data) comes from (a copy of the legislation), while the data collection method would be a document review (review of the legislation). Data analysis would be qualitative in nature, for instance, if an expert were to assess the degree to which the legislation is in line with international best practices. If the indicator were “percentage of households earning more cash after the intervention”, then the data source would be the people in the households, the data collection method would be a household survey, and the data analysis method would be mainly quantitative, that is, a calculation of the percentage of households that reported higher earnings.

Frequency: Frequency refers to how often data will be collected (such as weekly, monthly, annually, quarterly, one-off and end of intervention). Frequency should correspond to the activities and indicators. For example, if one training is to be held, and the indicator being measured is “percentage of trainees, by sex, who pass the post-training test”, then the measurement would be taken once (one-off) following the completion of the training and the test. If an ongoing activity is being monitored, for example “transport assistance to refugees during a protracted crisis”, then it would make sense to monitor the number of persons being transported on a regular basis (such as weekly or even daily).

Responsible person: This column indicates the name of the person from the intervention team who will be responsible for organizing data collection, data analysis and data storage in line with the IOM Data Protection Manual. In cases where personal data is involved, the person specified in this column is the data controller, as per the IOM Data Protection Manual.

Achieved: This column is to be filled with information, periodically or as it becomes available, which indicates the progress being made towards reaching the target. For example, if the target is to train 100 humanitarian workers on preventing human trafficking in emergencies, and 75 workers have been trained, enter 75 (out of a target of 100).

Progress: This column is to be filled with information, periodically or as it becomes available, which analyses the extent of the progress towards reaching the target. If appropriate, this can be expressed as a percentage (for instance, if 75 out of 100 humanitarian workers have been trained, the progress towards the target is 75%). In some cases, a narrative description of progress towards the target may be more appropriate, particularly for qualitative indicators. For example, if the indicator is “presence of legislation that reflects international best practice” and the legislation identified only partially reflects international best practice, then a brief description of how it reflects best practices and what gaps remain would be most appropriate.
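To make the data analysis and Progress examples above concrete, the sketch below computes both figures from invented data; it is illustrative only and not an IOM tool or template.

```python
# Illustrative sketch computing the two worked examples above from invented data:
# (a) quantitative data analysis for a household-survey indicator, and
# (b) progress towards a training target for the Achieved/Progress columns.

# (a) Percentage of households earning more cash after the intervention
survey_responses = [True, True, False, True, False, True, True, False, True, False]  # invented
pct_earning_more = 100 * sum(survey_responses) / len(survey_responses)
print(f"Households reporting higher earnings: {pct_earning_more:.0f}%")

# (b) Progress towards the target of training 100 humanitarian workers
target_trained = 100
achieved_trained = 75
print(f"Achieved: {achieved_trained} out of {target_trained} "
      f"({100 * achieved_trained / target_trained:.0f}% of target)")
```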
While the RMF is the main tool that can be used to keep track of the information required to monitor an intervention, additional tools to facilitate the collection of data may be needed for each indicator, depending on the specified data collection method. Examples of relevant data collection tools, such as surveys, focus group discussions, key informant interviews and others are further provided in chapter 4 of the IOM Monitoring and Evaluation Guidelines.
IOM resources
2010 IOM Data Protection Manual. Geneva.
2017a Module 2 and Module 4. In: IOM Project Handbook. Second edition. Geneva (Internal link only).
When monitoring an IOM intervention, there are four essential areas to monitor, each with key associated tools.
- Activities. Key monitoring tool: detailed workplan.
- Results. Key design tools: ToC, Results Matrix. Key monitoring tool: RMF.
- Budget and expenditures. Key monitoring tool: Process and Resource Integrated Systems Management (PRISM) financial reports and PRIMA for All.
- Risks. Key monitoring tool: Risk Management Plan.
A variety of elements can be monitored in an intervention and what to monitor will depend on the specific information needs of an intervention.
In addition to monitoring budget and expenditures, PRIMA may be used as a tool to monitor results: the Results Matrix and RMF can be tracked and reported on in the system, which also provides useful results dashboards.
Results monitoring in PRIMA is a two-step approach: (a) planning the monitoring of project results; and (b) updating/monitoring the project results by measuring progress against the indicators established in the Results Matrix. Using the Plan tab within the Results Monitoring module, managers are able to enter information into the additional columns that are added to the original Results Matrix to create the RMF: (a) data analysis (including options for data disaggregation); (b) frequency; and (c) person responsible. The remaining two columns of the RMF, Achieved (indicated as Cumulative progress in PRIMA) and Progress, are updated using the Indicator Results tab in the Results Monitoring module. Once complete, the data entered into the RMF in PRIMA can also be exported into PRIMA-generated donor reports.
For more information on how to use PRIMA to monitor results, see PRIMA User Guide – Results Monitoring.
The following table looks closer at the different types of monitoring and outlines the tools most useful in conducting each type of monitoring.
Table 3.3. IOM’s four types of monitoring
Monitoring type | Description | Tool/s to use |
Activity monitoring | Activity monitoring tracks progress, gaps and delays in activities against a detailed workplan. A manager should already have a basic workplan from the intervention proposal. At the start of implementation, this basic workplan should be further developed into a detailed workplan. The detailed workplan includes the activities and tasks identified within the Results Matrix, along with all other activities and tasks related to implementation. It includes, for example, a section for ongoing monitoring, evaluation and reporting activities. Ideally, the development of the detailed workplan should be undertaken by the intervention team, under the overall leadership of the manager. | IOM Workplan Templates (Module 4 of IOM Project Handbook, p. 293) (Internal link only). The workplan helps to plan and monitor the implementation of activities, clearly distribute tasks among the intervention team and ensure that the outputs are delivered within the time frame and budget available. |
Results monitoring | Results monitoring tracks results. This type of monitoring is used to determine whether an intervention is on or off track towards its intended results (outputs, outcomes, objective). It is also recommended to reflect on and identify any unintended positive or negative effects. An example of this could be if training participants independently created a working group and continued to meet beyond the time frame of that training. The previous sections have shown in detail how to monitor for results using the IOM Results Monitoring Framework and/or the Results Matrix. | IOM Results Monitoring Framework Template (Module 4 of IOM Project Handbook, p. 262) (Internal link only). The RMF should always reflect the most recent agreed-upon version of the Results Matrix and should be reviewed regularly. Various data collection tools are mentioned in chapter 4. |
Financial (budget and expenditure) monitoring | Financial monitoring tracks costs by input against the planned expenditures as per the approved budget. When reporting, PRIMA is used to create interim and final financial reports and can, therefore, be a resource for financial monitoring. | PRISM financial reports (see Module 4 of IOM Project Handbook, pp. 301–307) (Internal link only). PRIMA. |
Risk monitoring | Risk monitoring tracks whether previously identified risks are still pertinent, whether new risks have emerged, and whether the likelihood and timeline of previously identified risks remain accurate. Risk monitoring also entails identifying and assigning risk treatment actions, which is part of setting up a risk management plan. Risk monitoring is often conducted in conjunction with context monitoring. | IOM Risk Management Plan (Module 4 of IOM Project Handbook, p. 308) (Internal link only). PRIMA includes a Risk Management module for managing risk. For more information, see the PRIMA User Guide, available internally to IOM staff via the IOM intranet. |
Additional types of monitoring to consider | |
Process monitoring | Process monitoring tracks the use of inputs and other resources, the progress of an intervention’s activities and the delivery of outputs. It assesses how activities are delivered. This type of monitoring is often conducted in combination with compliance monitoring (defined below). Process monitoring tools could be checklists to ensure that processes are undertaken; registration forms and tracking forms could also be tools used for process monitoring. |
Compliance monitoring | Compliance monitoring ensures compliance with organizational and donor regulations and the expected results of the intervention, as well as with local governmental regulations and laws, contractual requirements and established ethical standards. One example of a compliance monitoring tool is a checklist. |
Context monitoring | Context monitoring tracks the situation in which the intervention operates and focuses on identifying risks and assumptions, taking into account any unexpected considerations that may arise. In this way, context monitoring is closely linked to risk monitoring. Context monitoring covers the direct area of operation, as well as the larger institutional, political, funding and policy context that affect the implementation of an intervention. IOM tools that may be used for context monitoring include the above-mentioned risk management plan. |
Beneficiary monitoring | Beneficiary monitoring tracks beneficiary perceptions of an ongoing or completed intervention. This type of monitoring encourages beneficiary participation and assesses beneficiary satisfaction or complaints, the level of their participation/inclusion, their access to resources, how they were treated within the intervention and their overall experience of change. Survey and questionnaire are examples of tools that can be used for beneficiary monitoring. |
IOM resources
2017a IOM Project Handbook. Second edition. Geneva (Internal link only).
- IOM Workplan template (Module 4, p. 293).
- IOM Results Monitoring Framework template (Module 4, p. 262).
- IOM Risk Management Plan (Module 4, p. 308).
n.d.c PRIMA User Guide – Results Monitoring (Internal link only).
n.d.d PRIMA User Guide Sharepoint folder (Internal link only).
Other resources
International Federation of Red Cross and Red Crescent Societies (IFRC)
2011 Project/Programme Monitoring and Evaluation (M&E) Guide. Geneva.
Monitoring a policy or strategy differs from monitoring a project or programme in that it looks at the bigger picture or macrolevel of what the organization is trying to achieve. Despite this difference, similar approaches can be used when monitoring a strategy. It is useful for monitoring purposes to differentiate between strategies developed at the country, regional and global levels. As strategies aim to attain results, usually through higher-level results than those found in projects and programmes, it is also possible to apply a results matrix to a strategy. The following table provides key considerations for the development of strategies to facilitate monitoring, as well as guidance on how to universally monitor strategies developed at different levels. Resources for relevant strategy development tools are provided at the end of this section.
Table 3.4. Monitoring a strategy
Type of strategy | Considerations for strategy development that facilitate monitoring | How to monitor |
Global strategy (such as a thematic or departmental strategy) |
|
Strategies, like other interventions, should be monitored on an ongoing basis, regardless of the level at which they are implemented. When drafting or developing a strategy, a section on how to monitor it should be included. Methods to monitor strategies include:
|
Regional strategy |
|
|
Country strategy |
|
Policies are often developed at the system, institutional and organizational levels and can facilitate efforts to address complex challenges. However, sound and evidence-based policies are needed to address those challenges and achieve longer-term institutional and organizational goals. Therefore, it is critical to build M&E into the policy cycle, for many of the same reasons as in project-, programme- and strategy-level interventions, including to promote accountability and learning. With this in mind, many of the same considerations that promote the successful monitoring of a strategy can also facilitate the monitoring of a policy. Similarly, tools, such as the development and use of a results matrix, can be applied to policy monitoring.
For more resources on monitoring (and evaluating) a policy, see the Resources section below.
Where a Results Matrix or RMF has not been developed for a particular strategy or policy, it is still possible to monitor the intervention through frequent strategic reviews and reporting, in ways that may be more qualitative in nature. Tools that may support this include the following: (a) action plan; (b) midterm strategy review; (c) regular meetings of key strategy stakeholders on the strategic direction; and (d) linking specific projects to the strategy to identify potential implementation and/or funding gaps.
IOM Thailand’s country strategy provides a good example of how to develop a plan to monitor a strategy. It shows how project/programme-level monitoring essentials, such as the Results Matrix, are applied at the strategy level. Furthermore, it shows multiple results matrices, covering different thematic areas that respond to country-specific migration needs, and indicates how they are linked to other global frameworks such as the SDGs.
IOM resources
2017b IOM Thailand Internal National Strategy 2017–2020. N.p. Monitoring and Evaluation Sharepoint folder (Internal link only).
2018a IOM Country Strategy Draft Outline. Internal template on Monitoring and Evaluation Sharepoint folder (Internal link only).
2018b IOM Country Strategy – Quick Guide. Internal template on Monitoring and Evaluation Sharepoint folder (Internal link only).
Other resources
National Center for Injury Prevention and Control
2013 Step by step – Evaluating violence and injury prevention policies, Brief 1: Overview of policy evaluation. Centers for Disease Control and Prevention (CDC), Atlanta.
Organisation for Economic Co-operation and Development (OECD)
This section focuses on operating and monitoring in environments where reaching vulnerable populations is a challenge, including in contexts with medium to high insecurity, while maintaining the security of the organization’s personnel. It also addresses cases of large and complex programmes, some with wide coverage, including those implemented with the use of TPM. Remote management and monitoring strategies and TPM can help mitigate the challenges inherent to such situations and support IOM to continuously provide targeted assistance, while reducing risk to staff.
A management structure that may have been set up as a temporary mode of operation can rapidly become a semi-permanent or permanent approach to implementation in countries with deteriorating security. While the growing experience with remote management approaches has produced a number of recommendations for practitioners on improving results in hard-to-reach areas, it has also revealed the limitations of such approaches, including those related to intervention monitoring and AAP. This section covers remote management and remote monitoring, as well as TPM of interventions.
Remote management approaches have substantial implications for monitoring and accountability practices, as well as for the ability of the implementing organization to provide assurance of reaching project/programme results. Where situations restrict staff members from meeting with beneficiaries or monitoring activities directly, they must rely on other staffing approaches or external partners. Remote management approaches are required in circumstances where management has no physical presence, and hence limited technical oversight, monitoring and accountability, as well as in situations with an increased risk of fraud and corruption. The following are some of the common situations in which remote management approaches can be adopted.
The COVID-19 pandemic in 2020 led to further challenges in managing and implementing interventions, as well as, relatedly, all subsequent M&E efforts. It presented yet another remote management scenario, where most activities and movements were severely restricted for all actors. These conditions further reinforce the need for having strong and flexible monitoring systems in place to support the implementation of interventions. In response to the challenges brought by the COVID-19 pandemic, IOM developed a guidance document, Continuity of Monitoring and Evaluation Interventions during COVID-19, which includes data collection alternatives that are equally helpful in remote management and monitoring scenarios.
In general, remote management, including remote monitoring and some aspects of TPM, can be a temporary or a more permanent response to security and logistical challenges in the direct implementation of programmes.
Considerations include the following:
- Physical security/operating environment – An elaborated risk and security assessment should include the following:
- A solid context and security environment analysis, including dynamics of conflict (United Nations Risk Management Unit), distance to project sites, transport types, availability and constraints and infrastructure;
- Security risk levels analysis (low/medium/high);
- Access to project sites analysis: None/irregular/regular but limited.
- Cost analysis of options – A cost analysis should include the following:
- How much can be invested in delegating implementation responsibility while having the least impact on programme quality; this may require analysing other cost-effective or cost-saving options;
- Possibilities of monitoring/conducting field visits, given the circumstances (security environment, remoteness and other factors affecting risk); availability of information systems; identification of efficient implementing partners; guaranteeing capacity-building of local partners; how to maintain an effective relationship with beneficiaries, for instance through community networking systems; and possibilities to assess the impact of the programme through remote management.
- Exit strategy – As remote management systems are often more expensive and less reliable than direct management, where applicable, an exit strategy to transition from remote to direct management should be considered and regularly reviewed to inform relevant implementation-related decisions.
An IOM managing mission, in consultation with relevant regional offices and Headquarters decision makers, should also consider who makes the decision to engage in remote programming and what processes form the basis for that decision, including the legal framework.
Remote management may have significant implications for the organizational set-up, accountability, monitoring and assurance of the quality of interventions. When a situation calls for remote management, the set-up and use of monitoring approaches require more attention. Due to logistical difficulties in conducting the monitoring in complex and remote environments, the need for additional training and contractual arrangements with additional reporting lines may arise. Strong remote monitoring approaches become key to supporting and contributing to remote management.
A common challenge of remote monitoring is the allocation of sufficient resources for planning and budgeting the set-up of rigorous and effective monitoring systems. It is therefore important to identify the operational constraints and budgeting limitations encountered in such fragile and complex environments. This may also ultimately prevent the excessive use of no-cost extensions. Addressing such constraints and limitations in the monitoring section of each project proposal may also reassure the donor that attention is paid to monitoring in order to safeguard the overall quality of an intervention.
Through the use of specific remote management approaches, monitoring of implementation can still continue. The following table outlines key challenges in the context of remote monitoring and possible solutions to address them.
Table 3.5. Remote monitoring: Key challenges and possible solutions
Challenge | Explanation | Suggested solutions |
Potential deterioration in programme quality | Ensuring high quality in programming may be especially challenging when projects are technically complex. |
Where/if possible:
|
Weak monitoring and control mechanisms | Rigorous monitoring may be neglected when a project is already facing competing priorities and deadlines or budgetary constraints. The lack of staff capacity, standardized approaches and monitoring tools, infrequent monitoring visits, low quality of collected data and the lack of information triangulation are factors that can weaken monitoring and result in poor decision-making, potential deterioration in programme quality and corruption. |
Where/if possible:
|
Insufficient and/ or inaccurate data and reporting | Low quality of data can affect the quality of reporting. It can be related to limited staff capacity and/ or time spent in the field due to security concerns while collecting data. |
Where/if possible:
|
Reduced number of visits and access to implementation sites | Monitoring visits to implementation sites or field offices can, at times, be challenging. This can result in poor communication, lack of information-sharing and lack of control of information, which can ultimately negatively affect the quality of implementation. |
Where/if possible:
|
Limited staff/ partner capacity | The most common limitations are related to management, data analysis and reporting skills, as well as having a good understanding of concepts such as M&E, humanitarian action and beneficiary accountability. |
Where/if possible:
|
Weak technical oversight of implementation | Providing adequate technical support through remote management can prove to be more challenging. |
Where/if possible:
|
Potential weak communication between the main country office or delocalized main office and sub-offices in the field | Communication may suffer in remote management contexts. |
Where/if possible:
|
Increased risk of fraud and corruption | The risks of fraud and corruption are present throughout the implementation of an intervention and may arise in remote management settings, where monitoring is more difficult. Fraud or corruption can occur at various levels: at the organizational level with own staff, at the beneficiary level or at an implementing partner level. Certain socioeconomic and political factors can lead to increased likelihood of fraud and corruption. |
Where/if possible:
|
Insofar as possible, remote monitoring systems should allow for real-time feedback. Such systems have multiple benefits: they can enable real-time or close to real-time two-way communication in remote environments, thereby serving not only as a management tool, but also as a check and quality assurance mechanism. By receiving near real-time information and data, IOM staff are able to identify and respond to challenges through corrective efforts much faster.
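As an illustration only, the following Python sketch shows one way such near real-time flagging could work, assuming hypothetical field submissions arriving as simple records (for example, from an online form). It is not an IOM system; the field names and thresholds are invented for the example.

from datetime import datetime, timedelta

def flag_submissions(submissions, planned_sites, max_silence_days=7):
    """Flag reported issues and sites that have not submitted a report recently."""
    now = datetime.utcnow()
    latest_by_site = {}
    flags = []
    for s in submissions:
        previous = latest_by_site.get(s["site"])
        if previous is None or s["timestamp"] > previous:
            latest_by_site[s["site"]] = s["timestamp"]
        if s.get("issue_reported"):
            flags.append(f"Issue at {s['site']}: {s['issue_reported']}")
    for site in planned_sites:
        last = latest_by_site.get(site)
        if last is None or now - last > timedelta(days=max_silence_days):
            flags.append(f"No recent report from {site}: follow up")
    return flags

if __name__ == "__main__":
    submissions = [
        {"site": "Site A", "timestamp": datetime.utcnow() - timedelta(days=2),
         "issue_reported": "Distribution delayed by road closure"},
        {"site": "Site B", "timestamp": datetime.utcnow() - timedelta(days=12)},
    ]
    for flag in flag_submissions(submissions, planned_sites=["Site A", "Site B", "Site C"]):
        print(flag)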
Microsoft Office 365 is available to all IOM staff members and contains tools that can be used for (remote) monitoring. Some of the useful tools are as follows:
Real-time collaboration/sharing | Collaboration spaces/platforms |
|
|
Virtual calls | |
|
IOM resources
2020 Continuity of Monitoring and Evaluation Interventions during COVID-19. Version 8 April. DPP/Evaluation.
Other resources
International Rescue Committee (IRC)
2016 IRC – Syria, Remote Management Guidelines. Last updated August 2016.
Norman, B.
2012 Monitoring and Accountability Practices for Remotely Managed Projects Implemented in Volatile Operating Environments. Tearfund, Teddington, United Kingdom.
Price, R.
2018 Approaches to remote monitoring in fragile states. GSDRC Helpdesk Research Report. University of Birmingham.
- Resources primarily linked to COVID-19 that can also be applied to other contexts
Kopper, S. and A. Sautmann
2020 Best practices for conducting phone surveys. Abdul Latif Jameel Poverty Action Lab (J-Pal), 20 March.
TPM is the practice of contracting third parties to conduct monitoring and/or to cover monitoring functions, such as collecting (and at times verifying) data, on behalf of the implementing agency or donor. However, the definition can be further contextualized, and TPM modalities can take various shapes, especially in non-permissive environments. Therefore, when speaking of TPM, it is important to clarify which of the following possible situations is being discussed.
- IOM uses TPM – IOM, as the managing entity, uses TPM to monitor its interventions: IOM hires an external monitoring team to conduct the monitoring of its own and/or of its implementing partners’ performance.
- IOM is the third-party monitor – Another managing entity uses IOM to monitor its interventions: IOM is the TPM entity and conducts monitoring of non-IOM interventions.
- IOM is the subject of TPM by another entity – An entity (usually a donor agency) uses external TPM entities to monitor IOM’s performance: An IOM-managed intervention is being monitored by a TPM entity, working directly for the intervention donor.
IOM has experience in all three situations as elaborated in the following examples from IOM Pakistan:
- IOM uses TPM – IOM uses TPM to monitor the implementation of its own intervention and/or the work of implementing partners (such as local NGOs): In a large-scale USAID-funded community stabilization programme, the security situation and security protocols restrict IOM staff’s access to field activities. The programme, therefore, hired field staff through third-party contracts to conduct the monitoring of ongoing projects in the zones with restricted access. The staff was trained and supervised by IOM M&E staff through remote management and monitoring, using an M&E management information system (MIS) designed for this programme to enable real-time monitoring.
- IOM is the TPM – Another entity uses IOM as the third-party monitor to monitor its interventions: IOM is the TPM of an INL-implemented project, using a specific monitoring system.
Note: INL is the acronym for the US Bureau of International Narcotics and Law Enforcement Affairs. IOM is not involved in the implementation of the project in any shape or form and merely monitors project performance as the TPM.
- IOM is being TPMed – A donor agency uses TPM to monitor IOM’s performance: IOM activities were monitored by a third party, funded by USAID. The TPM reported directly to USAID. Any challenges that were flagged by the TPM to USAID were then forwarded to IOM for response. In case of conflicting findings, IOM was requested to provide evidence-based monitoring reports to the donor.
TPM is not only used in emergency situations; it can also be used in large-scale programmes, for instance within the United States Refugee Admissions Program, which uses TPM to monitor its activities implemented around the world by different agencies, or the European Union Trust Fund programme with IOM’s performance being monitored by another entity.
It is important when engaging in a TPM scenario to ensure that the parties involved have clear and realistic expectations. Contractual agreements define the parameters and conditions for any TPM scenario, including the obligations and requirements of the different entities involved. When entering into an agreement with a third party for the purpose of monitoring, in any of the three TPM scenarios outlined above, several key considerations should be kept in mind:
- Roles and responsibilities: Ensure that the roles and responsibilities within the scenario are clearly outlined for each party, including frequency of monitoring and reporting, lines of reporting, lines and means of communication between parties, staffing required for implementation, deliverables and division of work between the parties. Other important obligations may be identified and included depending on the context.
- Data protection: Identify potential data protection issues and address them directly within the agreement and at the outset of monitoring implementation. While IOM makes a commitment to being transparent with TPM partners, agreements must be in line with IOM Data Protection Principles, following the guidance provided by the IOM Data Protection Manual, and following guidance from the Office of Legal Affairs (LEG) regarding confidentiality-related language within contractual agreements.
- To reinforce IOM’s commitment to transparency, consider outlining any specific foreseen limitations on the sharing of data from the outset of a TPM arrangement, ideally during the initial negotiations between TPM agreement parties.
- Indicators: It is important to agree upon what is being monitored and how results are measured at the outset. Adding or altering indicators, beyond what was originally contractually agreed upon with the donor, and without agreement of all parties, may complicate or hinder accurate reporting of results, and, in some situations, significantly complicate established data collection mechanisms.
- Multiple agreements: In nearly all situations, a TPM agreement will be concluded within the context of an existing project or programme. In such cases, a donor agreement will already exist. It is therefore essential that all TPM agreements align with the obligations outlined for IOM in the original donor agreement. Where possible, if a TPM scenario is required by a donor, try to agree upon the parameters, including IOM’s ethical considerations related to data protection, as well as roles and responsibilities of all parties prior to the conclusion of the initial donor agreement.
- Finally, where there are questions or disagreements related to TPM agreements and the language used therein, remember to utilize LEG as a resource. For further guidance on procurement matters related to TPM, please see IN/284.
Challenges and risks of third-party monitoring
The following sections are drawn from information compiled by TRD, developed in consultation with DPP/Evaluation in 2018.
While TPM allows for the indirect tracking, verification, validation and real-time collection of data, as well as course correction of IOM interventions, it also has some limitations. Such limitations can include poor-quality reporting of the monitoring visits conducted, unexpected costs (inter alia, related to improving quality), lack of transparency and difficulty using TPM to collect outcome-level data. Many of the risks inherent in remote management also apply to TPM.
Additional potential risks associated with third-party monitoring | |
Programmatic |
|
Institutional |
|
Other |
|
- Allocate sufficient resources for the capacity-building of TPM staff to ensure a high quality of work throughout monitoring processes. Using manuals, handbooks, guidance notes and various templates for such remote monitoring systems is highly recommended. It is important to abide by IOM standards, such as the IOM Data Protection Principles, to ensure compliance.
- It is important to agree at the outset of a TPM arrangement, whether in the agreement itself or an annexed or accompanying document, on the parameters related to the ethical collection, management and use of data, including linking TPM efforts to the IOM Accountability to Affected Populations Framework (see chapter 5 on IOM Cross-Cutting Themes: AAP).
- When clarifying roles and responsibilities in a TPM agreement, ensure that all parties have a realistic understanding of the human and other resources required to facilitate all TPM activities. Whether IOM is engaged in TPM or being TPMed, data collection requirements, such as the facilitation of monitoring visits or drafting of reports, should not limit the capacity of the implementing organization to implement the intervention.
The resources referenced above for remote management and monitoring can also be considered, depending on the situation, for TPM.
IOM resources
2010a IOM Data Protection Manual. Geneva.
2010b Annex X: IOM Data Protection Principles (Internal link only).
2021 Changes to Procurement, Implementing Partners Selection and Related Contracting Procedures (IN/284) (Internal link only).
This chapter has so far introduced various types of monitoring, including the four essential areas to monitor: activities, results, budget and expenditure, and risk.
As noted in the Types of monitoring section of this chapter, PRIMA may also be used as a tool to monitor results, using the Results Matrix and RMF.
For additional information on how to use PRIMA to monitor results, see the PRIMA User Guide, Annex 5: Results Monitoring (Internal link only).
Taken together, these constitute the main elements that should be used in planning and carrying out the monitoring of projects, programmes, strategies and policies. Together with planning evaluation, covered in chapter 5 of the IOM Monitoring and Evaluation Guidelines, these elements constitute the M&E plan. The M&E plan outlines the overall plan for M&E across the entire intervention. It should specify the monitoring strategy, approaches, any studies, reviews or evaluations to be conducted, detailing data sources, timing, management processes, as well as summarize the overall programme theory, including ToC or Results Matrix.
While various definitions exist, the terms “M&E plan” and “M&E framework” are often used interchangeably. Although they may seem identical, they refer to two different concepts.
Table 3.6 shows some of the key elements.
Table 3.6. Key elements compared: Monitoring and evaluation plan and monitoring and evaluation framework
Monitoring and evaluation plan | Monitoring and evaluation framework |
In some cases, a detailed description of project logic, envisioned evaluations and monitoring activities, as well as M&E tools, may be required. Cases where this might be required include when the success of a very large or complex project needs to be measured, when a variety of partners will be involved in M&E activities or when the donor specifically requires a more detailed plan on how M&E will be enabled at the start of the intervention.
|
The most common definition of an M&E framework is a table that lists all indicators (and their brief definition), along with data source, baseline, target, how often each will be measured, data collection methods and tools, as well as who is responsible for measuring results (a minimal illustration follows this table). For IOM, the closest concept to an M&E framework is the RMF. In many cases, the RMF may be sufficient for monitoring the results of an intervention at IOM, along with the plans described in the M&E sections of the project document. |
An M&E plan should ideally be done at the initial (inception) phase of a project, before starting implementation and should be reviewed periodically to reflect changes in the implementation context, or after major project design changes are made. | The M&E framework is also developed at the start of the project. It may also need to be reviewed periodically to reflect the changes in implementation. |
The M&E framework can be a component of, and included in, the M&E plan. |
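As an illustration of the tabular structure described above, the following Python sketch represents a single, hypothetical M&E framework row and lists the fields that still need to be completed before monitoring can start. The field names are assumptions made for the example; the RMF template remains the authoritative IOM format.

from __future__ import annotations

from dataclasses import dataclass

@dataclass
class FrameworkRow:
    indicator: str
    definition: str
    data_source: str
    baseline: float | None
    target: float | None
    frequency: str               # e.g. "quarterly"
    collection_method: str       # e.g. "post-training survey"
    responsible: str             # who is responsible for measuring the result

    def missing_fields(self) -> list[str]:
        """Return the fields that are still empty and need to be completed."""
        return [name for name, value in vars(self).items() if value in (None, "")]

if __name__ == "__main__":
    row = FrameworkRow(
        indicator="% of trained officials applying new screening procedures",
        definition="Share of trained officials observed applying the procedures",
        data_source="Follow-up observation checklist",
        baseline=None,           # baseline study still to be conducted
        target=70.0,
        frequency="quarterly",
        collection_method="Structured observation",
        responsible="Project M&E officer",
    )
    print("Still to be completed:", row.missing_fields())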
While the two concepts are presented as distinct here, it is always advised to seek clarity from a particular donor or partner to ensure a common understanding of the terminology and associated expectations.
An M&E plan helps tie together all the M&E components, such as field visit plans, the RMF, data collection and analysis and use requirements, learning and reflection activities and other information. In some instances, an M&E plan may also contain detailed descriptions of the intervention indicators. It is useful to compile all the different elements that enable M&E within an intervention into one document or folder in order to better manage M&E processes, promote access to all documents and a common understanding among the project team, as well as facilitate eventual reviews and evaluations.
Although there are various templates for M&E plans, the following outlines the main components of a basic and an advanced M&E plan.
Basic monitoring and evaluation plan template |
|
Advanced monitoring and evaluation plan template |
|
The following chart provides a graphic overview of an M&E plan, including how it builds on the project development and helps to ensure accountability and learning.
Note: Modified based on a graphic developed by IOM Regional Office Vienna (2017).
Reporting on intervention results is an essential component of ensuring compliance with a results-based approach. It is undertaken at different intervals throughout the life cycle of an intervention. Based on operational knowledge and best practices, reporting also contributes to the expansion of IOM’s institutional knowledge. It is a means of communicating information on intervention progress, achieved results, lessons learned, constraints encountered, the steps taken to overcome and address challenges and good practices identified during implementation.
Reporting is a mandatory component of intervention management. Informing donors, member States and other stakeholders regularly on the status of an intervention helps demonstrate transparency and accountability, as well as ensures compliance with contractual engagements. Internal reporting, such as through monitoring reports, reports from implementing partners, meeting minutes and lessons learned logs, is another important element of project management that can also facilitate eventual external reporting. Reporting requires maintaining a record of all actions taken during implementation; it is therefore also an important source of information for auditors and evaluators in assessing whether an intervention has been successfully implemented, in line with IOM rules and regulations, as well as with the donor’s agreement.
IOM offices should consider developing a reporting plan to anticipate reporting needs and ensure that sufficient time is allowed for the collection of data and the preparation of reports. This could be included as part of an M&E plan (see the previous section, Pulling it all together: Monitoring and evaluation plan) or it could be developed as a stand-alone plan.
A reporting plan can cover reporting roles (who is responsible for reporting), the types of reports required, how frequently a report is to be produced, procedures for report review and who is to receive the report.
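As an illustration only, the following Python sketch shows how such a reporting plan could be captured as structured data and used to generate report due dates. The report types, roles and frequencies are hypothetical; actual requirements are defined in the donor agreement and the M&E plan.

from __future__ import annotations

from dataclasses import dataclass
from datetime import date

@dataclass
class ReportingItem:
    report_type: str        # e.g. "Interim donor report"
    responsible: str        # who drafts the report
    reviewer: str           # who reviews and approves it
    recipient: str          # who receives it
    frequency_months: int   # how often it is due

def due_dates(start: date, end: date, frequency_months: int) -> list[date]:
    """Return report due dates at the given interval between start and end."""
    dates = []
    year, month = start.year, start.month
    while True:
        month += frequency_months
        year += (month - 1) // 12
        month = (month - 1) % 12 + 1
        due = date(year, month, min(start.day, 28))
        if due > end:
            break
        dates.append(due)
    return dates

if __name__ == "__main__":
    item = ReportingItem("Interim donor report", "Project manager",
                         "Regional thematic specialist", "Donor", frequency_months=6)
    for due in due_dates(date(2021, 1, 1), date(2022, 12, 31), item.frequency_months):
        print(item.report_type, "due on", due)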
Module 5 of the IOM Project Handbook provides a detailed guide on donor reporting and its three main phases: (a) preparation; (b) review and approval; and (c) submission. This section focuses on how to report on results using monitoring findings, the specific sections of the narrative donor report to enhance results reporting and how reporting can contribute to institutional learning.
Report writing is about communicating results.
Reporting is a critical component of M&E, as it enables the presentation of collected data and generated findings for use by key stakeholders and for assessing overall performance. Reporting on results provides evidence and feedback to decision makers to inform their decision-making processes. Internal monitoring reports and external donor reports can serve several purposes, depending on an intervention’s information needs, as well as the information derived from M&E, which can be put to different uses. M&E information can serve to:
Collecting data for a results-based report
The data necessary to report on progress in achieving the targets set for indicators should be collected throughout the implementation of an intervention and compiled and analysed prior to reporting. This includes data needed from project stakeholders and implementing partners. Therefore, it is important to establish why, when and how data needed for reporting will be collected.
The Results Matrix, as well as the RMF, collects key information and can therefore be used as a resource for results-based reporting.
In the case of a capacity-building-related result, if the focus is on the knowledge attained, with an indicator that measures the percentage of participants who increased knowledge or the percentage who reported being able to apply knowledge gained, then once this data is collected, reporting will also be able to focus on this information and demonstrate a change in capacity through increased knowledge.
On the other hand, if the indicator is not used and the data is not collected, then the report would only be able to state how many people were trained or how many trainings were implemented, which would be insufficient to demonstrate a change, that is, that the training provided achieved a change and that the intended result of the capacity-building activity was reached.
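As a simple illustration, the following Python sketch shows how the indicator “percentage of participants who increased knowledge” could be computed from pre- and post-training test scores. The scores and the comparison rule are assumptions made for the example; the actual indicator definition and data collection tool would be set out in the Results Matrix and RMF.

from __future__ import annotations

def share_with_knowledge_gain(scores: list[tuple[float, float]]) -> float:
    """Return the percentage of participants whose post-test score exceeds their pre-test score."""
    if not scores:
        return 0.0
    gained = sum(1 for pre, post in scores if post > pre)
    return 100.0 * gained / len(scores)

if __name__ == "__main__":
    # (pre-test, post-test) score pairs for each training participant
    scores = [(40, 75), (55, 50), (60, 80), (70, 90)]
    print(f"{share_with_knowledge_gain(scores):.0f} per cent of participants increased their knowledge")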
Irrespective of how well data is collected and analysed, data must also be well presented, with the target audience in mind. If poorly presented, reporting becomes less useful, which wastes resources and time. When identifying the purpose of data collection, it is important to plan for strong reporting structures from the very beginning. The chart below provides a picture of the cycle of data collection and analysis, reporting and use:
Writing a results-based report
When looking at reporting from an M&E perspective, some additional practices can be used to ensure results-based reporting.
In the past, reporting often focused too much on the activities implemented and not on results at the output and outcome level, nor on the impact of the intervention towards achieving an intervention objective. Reporting should:
- Tell the story of implementation, moving from the activities to the results and demonstrating how resources were used to achieve those results;
- This story should provide an overview of the quantitative and qualitative data that demonstrates achievement towards results;
- Underscore the challenges encountered by the project or programme and where improvements might be needed.
Source: UNICEF, 2017, chapter 6, pp. 146–148.
In addition, there are other recommended tips for writing results-based reports that adequately capture change.
Additional tips for writing to capture change |
|
|
Examples of results-based language could be as follows:
Activity | Results-based reporting |
IOM procured and installed 20 winterized containers at the border area. | By December, there was a 25 per cent increase in the number of migrants who slept in winterized accommodation in the country. |
IOM registered and provided transportation for migrant children to local schools in the area of the accommodation centre. | In 2021, the enrolment in schools of migrant children residing in migrant accommodation centres increased from 30 per cent to 90 per cent. |
For more examples and information, refer to Module 5 of the IOM Project Handbook.
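As a further illustration of turning raw monitoring counts into results-based statements such as those in the table above, the following Python sketch computes enrolment rates from hypothetical counts. The figures are invented for the example.

def enrolment_rates(baseline_enrolled: int, baseline_total: int,
                    current_enrolled: int, current_total: int) -> tuple:
    """Return baseline and current enrolment rates as percentages."""
    baseline_rate = 100.0 * baseline_enrolled / baseline_total
    current_rate = 100.0 * current_enrolled / current_total
    return baseline_rate, current_rate

if __name__ == "__main__":
    before, after = enrolment_rates(30, 100, 90, 100)
    print(f"Enrolment of migrant children increased from {before:.0f} per cent to {after:.0f} per cent.")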
A good balance also needs to be kept between descriptive and analytical writing throughout. Descriptive writing helps to describe what has occurred, while analytical writing supplements this by adding analysis that may help explain why these things happened, what they have achieved (beyond the activities and the immediate outputs) and what implications this may have.
Descriptive writing | Analytical writing |
States what happened | Identifies the significance of the findings |
States the order in which things happened | Evaluates strengths and weaknesses |
Explains how something works | Makes reasoned judgements |
Lists details | Gives reasons for each selected option |
In addition to being results-based and incorporating both descriptive and analytical writing, there are other considerations that strengthen the quality of reporting. Effective reporting is relevant, timely, complete, accurate, concise and user-friendly, consistent and cost-effective. It also presents good or high-quality data. The following chart provides a list of questions to help identify whether reporting meets these criteria:
Relevant |
|
Timely |
|
Complete |
|
Accurate |
|
Concise and user-friendly |
|
Consistent |
|
Cost-effective |
|
Strong reporting can strengthen IOM’s relationships with donors, improve donor support and facilitate the monitoring of a project. Weak reporting can be an indication of a lack of accountability and transparency mechanisms, with possible consequences for the relationship with a donor or other stakeholders.
Planning for effective reporting can be used to identify the specific reporting needs and information needs of the target audience early in the process. Information can also be prepared for a particular purpose, such as strategic planning, ongoing project implementation, compliance with donor requirements, evaluation or organizational learning. Utilization-focused reporting, that is, reporting information according to the needs of those who will use it, helps optimize the use of report findings and avoids unnecessary reporting.
When preparing for reporting, it is useful to know who will receive what information, in which format, in what frequency, and identify who will prepare and deliver the information.
The project manager is responsible for drafting, preparing and coordinating the donor reports in a timely manner. In preparing a report, the following steps are recommended:
- Review project documents (originals and any subsequent revisions) to ensure that the most up-to-date information is presented in the report; specifically, the most recent agreed-upon version of the Results Matrix, reporting requirements and deadlines.
- Review previous interim reports to ensure consistency of information and provide updates on any previous challenges raised in prior reports.
- Review the donor agreement and any amendments to confirm key reporting requirements stipulated in the agreement, in particular the frequency of reporting, format and reporting language. If these are not stipulated in the agreement, the IOM Project Handbook provides additional instructions on how to proceed (p. 261).
In reviewing project documents, be sure to also refer to the Results Matrix and any corresponding ToC, the RMF, as well as the M&E plan (if developed) for key inputs. If a reporting plan exists, refer to its timelines and any already established reporting formats. The exact wording of the objective, expected results (outcome and output), as well as of the indicators, baselines and targets reflected in the Results Matrix should be included in the narrative part of the report to emphasize the reporting of results. Carefully reviewing these documents also facilitates the identification of unexpected results, which should also be included in any reporting.
IOM reporting template section II (Progress made towards realizing outcomes and outputs)
Ibid., p. 364.
Section two of the reporting template describes results (outcomes and outputs) and related activities of the implementation during the reporting period. If the intervention has conducted frequent monitoring, information pertaining to results, such as outcomes and outputs, can be derived from its monitoring reports, its Results Matrix and/or its RMF. Progress made towards incorporating the cross-cutting themes should also be mentioned in this section of the report.
IOM reporting template section III (Progress achieved compared with the indicators in the Results Matrix)
Module 5 of IOM Project Handbook, pp. 365–366 (Internal link only).
The third section of the reporting template focuses on reporting on progress against the indicators of the Results Matrix, with detailed information on how to complete each cell.
IOM Results Matrix in the reporting template
Results | Indicators | Baseline | Target | Progress made during the reporting period | Cumulative project progress |
Objective Insert the objective as stated in the project document. | Insert the indicator as established in the Results Matrix. | Insert the baseline data relevant to the objective. | Insert the target set for the objective, as stated in the project document and Results Matrix. | Report the progress made within the current reporting period towards contributing to the realization of the objective as measured by the objective indicator against the defined target. |
Indicate the total cumulative progress from the beginning of the project to the current reporting period as measured by the baseline versus the target of the objective indicator. No entry will be made in this column for the first report. |
Outcome 1 Insert the (first) outcome as stated in the project document. | Insert the indicators as established in the Results Matrix for Outcome 1. Be sure to add any new indicators that have been established subsequently. | Insert the baseline data relevant to Outcome 1. | Insert the target for Outcome 1, as stated in the project document and Results Matrix. | Report the progress made within the current reporting period towards influencing the realization of the outcome as measured by the outcome indicator against the defined target. |
Indicate the total cumulative progress from the beginning of the project to the current reporting period as measured by the baseline versus the target of the outcome indicator. No entry will be made in this column for the first report. |
Output 1.1 Insert the (first) output as stated in the project document. | Insert the indicators as established in the Results Matrix for Output 1.1. Be sure to add any new indicators that have been established subsequently. | Insert the baseline data relevant to Output 1.1. | Insert the target set for Output 1.1, as stated in the project document and Results Matrix. |
Report the progress made within the current reporting period towards the realization of the output as measured by the output indicator against the defined target. Example of a project with a 12-month duration: In interim report 1 (Jan.–Mar. period): 500 border officials, disaggregated by sex, trained on the identification of falsified travel documents. In the final report (Oct.–Dec. period): 350 border officials, disaggregated by sex, trained on the identification of falsified travel documents. |
Indicate the total progress from the beginning of the project to the current reporting period as measured by the baseline versus the target of the output indicator. Example of a project with a 12-month duration: In interim report 1 (Jan.–Mar. period): No entry will be made in this column for the first interim report. |
Activities List the activities accomplished during the reporting period towards the realization of Output 1.1 based on the initial activities in the results framework. |
Progress towards the realization of the objective, outcomes and outputs is recorded concisely in the column “Progress made during the reporting period”, while the progress made towards the results over the duration of the intervention is included in the column “Cumulative project progress”.
A well-designed Results Matrix, and related RMF, with detailed indicators and data sources, can outline how data can be monitored at each results level – output, outcome and objective. Following the above, the data from monitoring findings is recorded by being entered into the column “Achieved” of the RMF and, when reporting, into the column “Progress made during the reporting period” for each level (output, outcome or objective – see above). As implementation progresses, monitoring reports and the progress tracked over time in the RMF will show how progress on the indicators gradually leads to overall results at different levels, which will inform reporting.
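As a simple illustration of how entries in the “Achieved” column could be rolled up into the “Progress made during the reporting period” and “Cumulative project progress” columns, the following Python sketch summarizes progress for a hypothetical numeric output indicator. The baseline, target and period figures are invented for the example and echo the training example in the template above.

from __future__ import annotations

def progress_summary(baseline: float, target: float, achieved_per_period: list[float]) -> dict:
    """Summarize current-period and cumulative progress against the indicator target."""
    current = achieved_per_period[-1] if achieved_per_period else 0.0
    cumulative = sum(achieved_per_period)
    percent = 100.0 * (cumulative - baseline) / (target - baseline) if target != baseline else None
    return {
        "progress_this_period": current,
        "cumulative_progress": cumulative,
        "percent_of_target": percent,
    }

if __name__ == "__main__":
    # e.g. number of border officials trained: baseline 0, target 1,000
    summary = progress_summary(baseline=0, target=1000, achieved_per_period=[500, 350])
    print(summary)  # {'progress_this_period': 350, 'cumulative_progress': 850, 'percent_of_target': 85.0}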
IOM reporting template section IV (Challenges encountered and actions taken)
Module 5 of IOM Project Handbook, p. 367 (Internal link only).
This section of the reporting template is an important part of reporting on results, as it provides a space to explain how results were affected by unintended consequences during implementation. The section describes and analyses significant difficulties or delays faced during project implementation and summarizes the corrective measures that have been taken or are being planned to address and rectify the situation. An analysis of the impact of any assumption in the Results Matrix that did not hold true, or of any risk realized during the reporting period, must also be included in this section. The effect of the unrealized assumption or realized risk on the delivery of specific results, and its impact on overall project implementation, should be regularly monitored to diminish any negative effects. Monitoring can also contribute to assessing whether the issue was outside of IOM’s control, such as in the case of a political event or a natural disaster. If the problem was due to a flaw or oversight in project design, or due to insufficient mitigation measures in the risk management plan, this should also be mentioned. Finally, it is important to describe the measures or treatment plans that have either been planned or taken to address the situation and how it will be monitored. A table is provided within the reporting template to guide users to match each challenge with the actions taken.
IOM reporting template section V (Conclusion)
Ibid.
In this section, a brief summary of the key achievements realized during the reporting period should be provided, and the next steps in the project’s implementation outlined. In the case of an interim report, briefly reiterate whether any significant or persistent challenges are anticipated for the upcoming period. It can be helpful to show how these future key activities are envisioned to lead to further results. For final reports, good practices and/or lessons learned during implementation should also be highlighted. To capture lessons learned, it is important to identify the exact challenges encountered and the remedial actions taken that led to achieving positive results in subsequent activities. If they exist, lessons learned logs can be useful for this section of the report. Evaluations can also contribute to the lessons learned process and complement the final report.
As stated above, reporting based on M&E findings and data can contribute to organizational learning and help improve the quality of interventions overall. While chapter 5 of the IOM Monitoring and Evaluation Guidelines provides more detail on learning through evaluation, this section highlights the link between reporting and learning with a focus on reporting based on findings and data from M&E activities.
Results-based reporting is crucial for decision-making and planning during the implementation of an intervention, as it can provide the information necessary for evidence-based decision-making. The findings and data generated through M&E activities provide crucial input for project management and inform decision-making. Furthermore, reflecting on findings from M&E allows for learning from day-to-day activities. In turn, the lessons learned from daily activities help to improve the quality of both ongoing and future activities, as well as future programmes.
Reporting, reflecting and learning should occur throughout the intervention life cycle. At the conceptualization stage, findings and lessons learned from previous evaluations can be incorporated; at the implementation stage, monitoring and/or a midterm evaluation can provide information to support decision-making, improve performance and impact, and contribute to learning. Finally, at the completion stage, project/programme staff can reflect on the intervention to prepare for donor reporting, and an evaluation facilitates the collection of higher-level lessons learned.
IOM resources
2017a Module 5. In: IOM Project Handbook. Second edition. Geneva, pp. 355–416 (Internal link only).
Other resources
International Federation of Red Cross and Red Crescent Societies (IFRC)
2011 Project/Programme Monitoring and Evaluation (M&E) Guide. Geneva.
Kusek, J.Z. and R. Rist
2004 Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners. World Bank, Washington, D.C.
UNICEF
2017 Results-Based Management Handbook: Working together for children. New York.