M&E in Emergencies
An M&E system for an emergency response should remain light and dynamic, both to avoid placing a heavy burden on staff or detracting from the response itself, and to stay responsive to the changing context and the evolving needs of targeted populations. Monitoring during the first phase of an emergency response is often characterized by systematic output-level data collection to strengthen accountability and management quality, and by light, purposeful monitoring at the intermediate-results (IR) level to check on the quality of the response. Most emergency M&E systems include a real-time evaluation approximately six to eight weeks after a response begins, which provides a more rigorous check of the appropriateness and relevance, effectiveness, connectedness, sustainability, coverage and coordination of the response.

1. Early monitoring systems are simple, use-oriented and flexible to accommodate change in context and activities
The process for establishing a simple, use-oriented and flexible monitoring system during the first phase of a response can be summarized in four steps:
1. Count progress toward outputs;
2. Check the appropriateness and effectiveness of the response;
3. Change the response as needed based on findings; and
4. Communicate progress and results to stakeholders.
These Four Cs, when implemented efficiently, provide timely information that is immediately relevant for maintaining a high-quality emergency response. Each step is described below.
Count
Project teams can use simple monitoring forms to count progress toward activities and output-level indicators and determine if targets are being met in a timely manner. These counts should begin when the first outputs are delivered and finish when the output-level components of the project are complete. Accurate and complete output-level data are essential for strong management quality, internal compliance and reporting to donors. The project team should create a simple Excel database to house output-level data. Ideally, all field locations use the same output-level tracking and reporting templates to allow for easy and timely compilation of results. In addition, the data should be made highly accessible (both within each field location and centrally) for easy verification and use by all project staff.
Record output- and activity-level data (depending on the intervention) into a matrix or table on a flipchart or a whiteboard on the office wall. Enter data daily into these tables or matrices to show progress by location and for important comparison groups. The results are then readily available during daily debrief meetings and for reporting.
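As a minimal sketch of this compilation step, the following Python snippet aggregates daily output records by indicator and by location and compares cumulative totals against targets, much like the wall chart described above. All location names, indicator names and figures are illustrative assumptions, not prescribed templates.

```python
from collections import defaultdict

# Hypothetical daily records: (location, indicator, count delivered that day).
daily_records = [
    ("Camp A", "NFI kits distributed", 120),
    ("Camp B", "NFI kits distributed", 80),
    ("Camp A", "hygiene sessions held", 4),
    ("Camp B", "NFI kits distributed", 40),
]

# Illustrative project targets per output-level indicator.
targets = {
    "NFI kits distributed": 500,
    "hygiene sessions held": 20,
}

# Compile cumulative counts overall and per location for comparison groups.
totals = defaultdict(int)
by_location = defaultdict(int)
for location, indicator, count in daily_records:
    totals[indicator] += count
    by_location[(location, indicator)] += count

# Progress summary for the daily debrief meeting.
for indicator, target in targets.items():
    done = totals[indicator]
    print(f"{indicator}: {done}/{target} ({done / target:.0%})")
```

A shared spreadsheet template serves the same purpose; the point is that every field location records counts in an identical structure so results can be compiled quickly and verified line by line.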
To provide accurate results, the project team should ensure that all outputs (e.g., goods and services) are counted by the monitoring system. It is not appropriate to extrapolate output-level results from a sample. Complete and accurate records are necessary for strong management quality, reporting and project accountability.
Put counting systems in place from the very beginning of the response as it becomes much more complicated to reconcile records and information later on.
Check
The M&E system should enable staff to check on the appropriateness and effectiveness of the response with light monitoring of IR-level indicators, and through collection of data on satisfaction and feedback from the people we serve. IR-level indicators generally focus on the use of the goods and services provided and, together with feedback mechanisms, can provide a clear picture of what has been most and least useful about the response so far.
These checks require a combination of quantitative and qualitative data collection methods and generally use postdistribution satisfaction surveys, simple checklists, semistructured key informant interviews, and direct observation. The monitoring tools should ask specific closed-ended questions and include observation to verify knowledge acquisition and the level and type of change in behavior, as well as open-ended questions to generate in-depth feedback that could explain, for example, why use or satisfaction is low and how to improve the response. Project staff can ask these questions in focus group discussions (FGDs) and household interviews held separately with different subgroups, particularly males and females, where relevant, to capture their perspectives. The focus should be on the perspectives of the most vulnerable groups and households, as they are often the most relevant for project decision-making.
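As one illustration of how closed-ended survey responses can be tallied separately for male and female respondents, the short Python sketch below computes use and satisfaction rates by subgroup. The field names and responses are hypothetical and stand in for whatever questions the postdistribution survey actually asks.

```python
# Hypothetical postdistribution survey responses; fields are illustrative.
responses = [
    {"sex": "F", "used_kit": True, "satisfied": True},
    {"sex": "F", "used_kit": True, "satisfied": False},
    {"sex": "M", "used_kit": False, "satisfied": False},
    {"sex": "M", "used_kit": True, "satisfied": True},
]

def rate(rows, key):
    """Share of respondents answering yes to a closed-ended question."""
    return sum(r[key] for r in rows) / len(rows)

# Disaggregate by sex so differences between subgroups are visible.
for sex in ("F", "M"):
    group = [r for r in responses if r["sex"] == sex]
    print(sex, f"use: {rate(group, 'used_kit'):.0%}",
          f"satisfaction: {rate(group, 'satisfied'):.0%}")
```

Large gaps between subgroups in a tally like this are exactly the signal that warrants follow-up with open-ended questions to learn why use or satisfaction differs.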
Direct observation plays an important role in verifying behavior change and the quality of the response, such as the adoption of water, sanitation and hygiene (WASH) practices or the quality of shelter materials distributed. Interviewers can collect direct observation data through simple checklists; they can also ask field staff to share any other informal observations or anecdotal information during project team debrief meetings that might indicate changes in the situation and conditions to which the project needs to adapt.
Staff should collect the intermediate results–level monitoring and feedback data soon after outputs are delivered so they can address any problems and make improvements quickly, before many resources have been spent. These checks can begin immediately after the pilot distribution of nonfood item (NFI) kits or a hygiene promotion activity to determine the quality and appropriateness of the kit's content or the relevance of the hygiene messaging. These checks will be fairly intensive initially (e.g., daily or weekly) until the desired level of quality or effectiveness is obtained; afterward, lighter and less frequent checking is sufficient to verify that the situation has not changed. Refer to Standard 2 on accountability for information on establishing effective feedback mechanisms.
Continue monitoring satisfaction levels, feedback and the use of goods and services through the first phase of the response, as needs and priorities may change with the evolving context. Adapt monitoring tools as new questions about appropriateness and effectiveness arise, and retire original questions about quality or initial use once early monitoring results have answered them.
Whenever appropriate, the project team should consider whether more participatory methods can be used to collect this information. This is particularly useful for soliciting the participation of less literate or less vocal community members, such as women, and for generating discussion among respondents.
Use pile-ranking as a participatory method to determine which NFIs were most and least useful and whether any priority item was missed. Working with actual samples or photos of the NFIs provided can help respondents to quickly recall the quality and utility of items received. A postdistribution pile-ranking exercise tool is included in Annex A.
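A simple way to compile pile-ranking results across participants is to count how often each item lands in each pile. The Python sketch below does this for hypothetical items and piles; the item names and pile labels are illustrative, not a prescribed kit composition.

```python
from collections import Counter

# Hypothetical pile-ranking results: each participant assigns each NFI to a
# pile ("most useful", "useful" or "least useful"). Items are illustrative.
rankings = [
    {"blanket": "most useful", "jerry can": "most useful", "kitchen set": "least useful"},
    {"blanket": "most useful", "jerry can": "useful", "kitchen set": "least useful"},
    {"blanket": "useful", "jerry can": "most useful", "kitchen set": "useful"},
]

# Count how often each item was placed in each pile.
tally = {}
for participant in rankings:
    for item, pile in participant.items():
        tally.setdefault(item, Counter())[pile] += 1

for item, piles in tally.items():
    print(item, dict(piles))
```

An item that most participants place in the "least useful" pile is a candidate for replacement or removal, while items missing from the kit can be captured through an open question at the end of the exercise.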
Consider how to triangulate between data sources to minimize data collection while ensuring the data provide an adequately accurate picture of satisfaction and use. Use purposeful sampling to collect data from the most relevant subgroups (e.g., young girls immediately expected to apply water handling messages, skilled labor involved in shelter reconstruction, and male and female members of the poorest households most in need of the assistance provided). A light sample of two to three FGDs or household interviews may be enough if they capture diverse perspectives and yield the same answers. If the initial interviews or FGDs yield different results, additional data collection is needed to verify the data or to understand how and why answers or feedback vary between subgroups.
If, through purposeful sampling, you determine a high level of use and satisfaction among the most vulnerable groups, it is likely that use and satisfaction are high throughout the target population.

Change
Response teams should adjust specific activities in the response if the monitoring data indicate that the support provided is not meeting quality standards, is not as effective as it could be in responding to priority community needs, or that new unmet needs have emerged. During daily project debrief meetings, the team should discuss how to address any gaps or areas needing improvement. For example, monitoring data may show that some items in the NFI package are not being used or are being used incorrectly. The project team should determine whether and how the content of the NFI package should be adjusted (e.g., replacing these items with more locally appropriate models or removing them altogether), or whether greater sensitization is needed for more appropriate use of NFIs. It is important to make these decisions in a timely manner to avoid spending resources on support that might not be useful or no longer corresponds to priority unmet needs.
Communicate
Good communication about successes and challenges is required for strong donor accountability. Monitoring results (e.g., counts and checks) and any changes to the response should be communicated regularly to stakeholders, including community members, local government and donors. For example, situation reports can be adapted to share with donors and other stakeholders as appropriate. The frequency of these updates varies over time depending on the fluidity of the response; daily situation reports and updates are not unusual in the first few weeks of a response, and weekly updates are common practice for most of the acute emergency phase. These updates should document output counts, initial IR-level checks (whether positive or negative), any changes made in response to these findings, and upcoming plans.
Teams should also communicate these results verbally, especially in the case of significant adjustments in the response that may require some form of preapproval from donors or the government. Response teams should justify and document any change to project activities in brief regular updates or reports to donors and other stakeholders. Clearly communicating monitoring results and any required changes can demonstrate flexibility and the ability to meet community needs and implement a high-quality project within a shifting emergency context.
Communicate any significant changes in the response to donors immediately. They are more likely to support flexibility and changes if the reasons have been explained in advance—make sure donors do not hear of proposed changes only after reading the next project report! Whether these changes require a formal project amendment or not, make sure to inform the donor and solicit their concurrence in a timely manner.
In addition to the Four Cs, Table 1 provides an overview of key characteristics of a strong, light monitoring system during the first phase of an emergency response.