M&E methodologies
An important part of planning for M&E activities is determining what kind of information you need to measure success or identify areas for improvement. Indicator options and planning for data collection are discussed in later sections of the toolkit, but there are a few methodological approaches commonly used in M&E work that you should be familiar with before choosing your campaign indicators.
Qualitative methods are frequently used in M&E activities and are recommended when you are exploring something new or seeking to explain an experience or perception through a description, narrative or score/ranking. Use qualitative methods when participants are asked to explain how or why something happened. These approaches also suit open-ended questions in one-on-one interviews or focus group discussions, for example, when someone is asked to describe their feelings or perceptions.
Scoring is a common qualitative approach used in M&E work. As highlighted in some of the toolkit examples, participants may be asked to rate campaign assets using a Likert scale. The audience's perceptions of an asset's understandability, relevance, motivational appeal and so on can be assessed by providing a sample statement and asking them to rate how much they agree or disagree with it. Likert scale values typically range from 1 to 5, from 1 (strongly disagree) through to 5 (strongly agree).
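As a brief illustration, Likert ratings collected this way can be summarised with simple descriptive statistics. The statement and the response values below are hypothetical, not drawn from the toolkit:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree)
# to the statement "The poster's message is easy to understand."
responses = [4, 5, 3, 4, 5, 2, 4, 4, 5, 3]

# Frequency of each rating, 1 through 5
counts = Counter(responses)
distribution = {score: counts[score] for score in range(1, 6)}

# Share of respondents who agree or strongly agree (scores 4 and 5)
agreement = sum(1 for r in responses if r >= 4) / len(responses)

print(distribution)   # {1: 0, 2: 1, 3: 2, 4: 4, 5: 3}
print(f"mean={mean(responses):.1f}, median={median(responses)}")
print(f"agreement rate={agreement:.0%}")   # 70%
```

Reporting the full distribution alongside the mean avoids hiding polarised responses behind a middling average.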
Quantitative methods reflect numerical data, including when a question can be answered with yes/no, a number or another targeted response that does not require an explanation. They can be as simple as count data, for example, the number of participants in a group or the number of sessions attended, or they can involve more complex surveys, causation, cost-benefit or cost-effectiveness analyses, or statistical tests. A survey might be used to gather information from a large group of people, for example, to learn whether they have seen your advertisement, how many times, when, where and so on.
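To make the count-data idea concrete, a minimal sketch of tabulating hypothetical survey responses about advertisement exposure might look like this (the field names and values are invented for illustration):

```python
from collections import Counter

# Hypothetical survey responses: did the respondent see the advertisement,
# and if so, how many times?
responses = [
    {"seen": True, "times": 3},
    {"seen": False, "times": 0},
    {"seen": True, "times": 1},
    {"seen": True, "times": 2},
    {"seen": False, "times": 0},
]

seen_count = sum(1 for r in responses if r["seen"])
reach = seen_count / len(responses)   # share who saw the ad at all
# Distribution of exposure counts among those who saw it
exposures = Counter(r["times"] for r in responses if r["seen"])

print(f"reach={reach:.0%}")   # 60%
print(dict(exposures))
```

Even simple tallies like reach and exposure frequency can feed directly into campaign indicators without any advanced statistics.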
Although many governments evaluate IYCF behaviours, such surveys are typically conducted only around every five years. To measure behaviour change over time, pre/post evaluations are a commonly used alternative. A pre-evaluation collects baseline data before a campaign is implemented; the post-evaluation is then conducted a set duration after implementation or at the end of the campaign. You will likely conduct a pre/post evaluation for the behavioural outcome(s) of interest rather than delaying evaluation until the next governmental survey is complete. To plan for a pre/post evaluation, consider the following:
- Schedule time for pre-survey data collection before the campaign is implemented.
- Determine the measurement frequency based on priority behaviour(s) for the campaign.
- Employ validated survey instruments, if possible. If a validated tool is not available, develop a measurement tool and test it before using it to gather behaviour change data.
- Take steps to control for social desirability bias, that is, the tendency for participants to tell you what they think you want to hear.
- Plan how to gather and analyse the data.
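The analysis step above can be sketched in miniature. The example below compares the proportion of respondents reporting a priority behaviour at baseline and endline, with a two-proportion z-test (normal approximation) as a rough check on whether the change exceeds chance; all counts are hypothetical, and a real evaluation design would also consider sampling, clustering and confounding:

```python
from math import sqrt

# Hypothetical pre/post evaluation: respondents reporting the priority
# behaviour before and after the campaign.
pre_yes, pre_n = 120, 400    # baseline: 120 of 400 report the behaviour
post_yes, post_n = 180, 400  # endline: 180 of 400 report the behaviour

pre_rate = pre_yes / pre_n
post_rate = post_yes / post_n
change = post_rate - pre_rate  # absolute change in percentage points

# Two-proportion z-test (normal approximation)
pooled = (pre_yes + post_yes) / (pre_n + post_n)
se = sqrt(pooled * (1 - pooled) * (1 / pre_n + 1 / post_n))
z = change / se

print(f"baseline: {pre_rate:.0%}, endline: {post_rate:.0%}")  # 30%, 45%
print(f"change: {change:+.0%} points")
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```

Planning this calculation in advance also clarifies the sample size needed at each round to detect the change you care about.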