How can we improve the M&E of CLTS?

June 13, 2014

If you asked a public health practitioner whether monitoring & evaluation (M&E) is important, you’d likely get a strange look. Practicing M&E is intuitively a good idea: it keeps us focused on our desired results, provides opportunities for learning and improvement, and promotes accountability to our supporters, colleagues, and community members. So it comes as no surprise that most CLTS actors consider M&E to be an important part of sustaining behavior change and scaling up CLTS activities (Venkataramanan, 2012).

Good M&E is driven by the quality and relevance of the data we collect. This is common sense. But look a bit deeper and you’ll find that many current M&E methods for CLTS are neither standardized nor rigorously applied (Venkataramanan, 2012). These two issues are problematic because they make it difficult for us to share, trust, and understand each other’s data and conclusions.

There are two challenges

Developing standardized approaches

First, practicing standardized M&E means developing common definitions and indicators, harmonizing our information-sharing systems, and using accepted tools that reflect good practice. Standardization makes it easier to compare results, identify trends and patterns, and see where differences lie.

How do we do it?

An example of standardized M&E might be common indicators for a safe and accessible latrine. Safety may include factors such as a lock on the door, the availability of soap, or the presence of flies. Accessibility may include distance from the home and the number of users (i.e. shared vs private). How do your indicators compare with those of your partners?
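To make this concrete, here is a minimal sketch of how a shared indicator definition could be written down so partners are measuring the same thing in the same way. It is only an illustration: the field names, indicator names, and level labels below are hypothetical, not an agreed standard.

```python
from dataclasses import dataclass

@dataclass
class LatrineIndicator:
    """One standardized indicator, defined once and shared across partners."""
    name: str         # short machine-readable name, e.g. "door_lockable"
    dimension: str    # "safety" or "accessibility"
    measurement: str  # how the indicator is observed or asked
    level: str        # e.g. "output" or "outcome"

# Hypothetical common indicator set for a safe, accessible latrine
COMMON_INDICATORS = [
    LatrineIndicator("door_lockable", "safety",
                     "observed: latrine door has a working lock", "output"),
    LatrineIndicator("soap_available", "safety",
                     "observed: soap or ash present at the handwashing point", "output"),
    LatrineIndicator("flies_present", "safety",
                     "observed: flies seen in or around the pit", "outcome"),
    LatrineIndicator("distance_to_home_m", "accessibility",
                     "measured: walking distance from the dwelling in metres", "output"),
    LatrineIndicator("shared_vs_private", "accessibility",
                     "asked: number of households using the latrine", "output"),
]

if __name__ == "__main__":
    for ind in COMMON_INDICATORS:
        print(f"{ind.dimension:>13} | {ind.name:<20} | {ind.measurement}")
```

If two organizations agree on a list like this, their field data can be pooled and compared directly instead of being reconciled after the fact.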

In Practice

For the Testing CLTS Approaches for Scalability project, we conducted a review of 115 ‘grey literature’ documents (non-peer-reviewed publications) on CLTS and identified and categorized the indicators used for data collection and reporting (Venkataramanan, 2012). We identified 23 indicators and grouped them into 8 categories: costs, triggering and follow-up, access, ODF, sanitation/hygiene behavior, perceived impact, structural/institutional, and health outcomes. The indicators could also be grouped into levels: inputs, processes, outputs, and outcomes. However, we noticed a distinct gap in indicators measuring inputs or processes: only 3 of the 23 indicators related to these levels.

The only two indicators used consistently in the grey literature were the number of communities triggered and the number of communities declared open-defecation free (ODF). Even these were classified inconsistently: triggering-related indicators were referred to as process indicators in some documents and as output indicators in others, while ODF indicators were treated as output indicators in some documents and as outcome indicators in others.

Aggregated list of CLTS Indicators from grey literature
(Excerpted from Venkataramanan, 2012, p.8)

| Type of Indicator | Indicator | Category |
|---|---|---|
| Inputs | Program cost ($/person or $/household) | Costs |
| Process/Output | # of communities triggered | Triggering and Follow-up |
| Process/Output | # of follow-up visits till ODF achieved | Triggering and Follow-up |
| Output/Outcome | # of people with access to latrines in community | Access |
| Output/Outcome | # of people using latrines in community | Access |
| Output/Outcome | # of toilets : # of households in community | Access |
| Output/Outcome | # of communities declared ODF | ODF |
| Output/Outcome | # of people living in ODF environment | ODF |
| Output/Outcome | # of communities regularly monitoring ODF status | ODF |
| Output/Outcome | # of people washing hands at appropriate times | Sanitation/hygiene behavior |
| Output/Outcome | # of households disposing child feces in latrine | Sanitation/hygiene behavior |
| Output/Outcome | # of people aware of good sanitation/hygiene behavior | Sanitation/hygiene behavior |
| Output/Outcome | Spread of CLTS to neighboring communities | Perceived impact* |
| Output/Outcome | Individual sense of security from owning latrine | Perceived impact |
| Output/Outcome | Ability to defecate at any time of day | Perceived impact |
| Output/Outcome | Reported odor level in community | Perceived impact |
| Output/Outcome | Reported presence of flies in community | Perceived impact |
| Output/Outcome | # of people trained in CLTS | Structural/Institutional |
| Output/Outcome | Local government expenditure on sanitation | Structural/Institutional |
| Output/Outcome | # of communities with sanitation committees | Structural/Institutional |
| Output/Outcome | CLTS incorporated into District Action Plan | Structural/Institutional |
| Output/Outcome | # of cases of diarrheal disease | Health outcomes |
| Output/Outcome | Household health expenditure on diarrheal disease | Health outcomes |

*Perceived impact is a qualitative indicator
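The gap in input and process indicators noted above is easy to see if you tally the type-of-indicator column. A quick sketch in Python, with the levels copied from the aggregated list (1 input, 2 process/output, 20 output/outcome):

```python
from collections import Counter

# Type-of-indicator column from the aggregated list above (Venkataramanan, 2012)
indicator_levels = (
    ["Inputs"] * 1
    + ["Process/Output"] * 2
    + ["Output/Outcome"] * 20
)

counts = Counter(indicator_levels)
total = sum(counts.values())

for level, n in counts.items():
    print(f"{level:<16} {n:>2} of {total}")

# Indicators that touch inputs or processes at all
upstream = counts["Inputs"] + counts["Process/Output"]
print(f"Input/process-related: {upstream} of {total}")  # 3 of 23
```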

Being more rigorous

Second, we need to strengthen the rigor of the tools and practices we use to collect, analyze, and share our data. Greater rigor improves the quality and relevance of our data, and allows us to draw sound conclusions and generate ideas for program improvement.

How do we do it?

Examples of rigorous M&E practices include well-designed surveys, checklists for improving data quality and reliability, data collection at regular and appropriate intervals, and practices that minimize bias and conflicts of interest.
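As a rough illustration of what such quality checks can look like once they are automated, the sketch below runs a few completeness, plausibility, and consistency checks over hypothetical survey records. The field names and thresholds are invented for the example, not drawn from any project instrument.

```python
def quality_issues(record: dict) -> list:
    """Return a list of data-quality problems found in one survey record."""
    issues = []

    # Completeness: core fields must be filled in
    for field in ("household_id", "interview_date", "latrine_observed"):
        if record.get(field) in (None, ""):
            issues.append(f"missing {field}")

    # Plausibility: simple range check on household size
    size = record.get("household_size")
    if size is not None and not 1 <= size <= 30:
        issues.append(f"implausible household_size: {size}")

    # Consistency: self-reported use vs. direct observation
    if record.get("reports_latrine_use") and record.get("latrine_observed") is False:
        issues.append("reported latrine use but no latrine observed")

    return issues


# Two hypothetical records: one clean, one with several problems
records = [
    {"household_id": "HH-014", "interview_date": "2014-05-02",
     "household_size": 6, "latrine_observed": True, "reports_latrine_use": True},
    {"household_id": "HH-015", "interview_date": "",
     "household_size": 42, "latrine_observed": False, "reports_latrine_use": True},
]

for rec in records:
    for issue in quality_issues(rec):
        print(rec["household_id"], "->", issue)
```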

In Practice

For the Testing CLTS Approaches for Scalability project activities in Ghana and Ethiopia, UNC researchers and Plan International’s in-country implementation teams developed checklists for data collection (Crocker, 2014). The checklists are used to document training sessions for local actors and community visits, and they provide a systematic guide to what data to collect and how: who attended, what was presented, which skills were trained, what discussions were held, and what observations were made. Collecting this information is important because it lets us verify whether every training and community visit was implemented as planned, and then analyze how implementation could be strengthened or improved.

> Download our Data Collection Checklists for Trainings and Community Visits (PDF, 291KB)
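To show how checklist data can support that kind of verification, here is a small hypothetical sketch: a visit log compiled from completed checklists is compared against the number of follow-up visits planned for each community. The community names and the three-visit plan are made up for the example.

```python
# Hypothetical plan: each triggered community should receive three follow-up visits
planned_visits = {"Community A": 3, "Community B": 3, "Community C": 3}

# Hypothetical visit log built from completed data collection checklists
visit_log = [
    {"community": "Community A", "date": "2014-03-04", "facilitator_present": True},
    {"community": "Community A", "date": "2014-04-01", "facilitator_present": True},
    {"community": "Community A", "date": "2014-05-06", "facilitator_present": True},
    {"community": "Community B", "date": "2014-03-11", "facilitator_present": False},
]

# Count completed visits per community and compare against the plan
completed = {}
for visit in visit_log:
    completed[visit["community"]] = completed.get(visit["community"], 0) + 1

for community, planned in planned_visits.items():
    done = completed.get(community, 0)
    status = "as planned" if done >= planned else f"only {done} of {planned} visits"
    print(f"{community}: {status}")
```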

For the Testing CLTS Approaches for Scalability project in Ethiopia, UNC researchers worked with Plan’s in-country teams to develop a midline household survey that records demographic information; water, sanitation, and hygiene practices; and households’ interactions with local actors as they relate to CLTS. Surveys are conducted at three points during our project: the beginning, the mid-point, and the end. In combination with other data, a well-designed survey conducted at regular and appropriate intervals enables us to rigorously evaluate household-, village-, and kebele-level outcomes of the project. Methods to strengthen rigor include independent surveying at intervals, a mix of self-reported and observational data, oversight mechanisms to improve data accuracy, and good questioning practices.

> Download a Sample Page of our Midline Household Survey (PDF, 469KB)
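As a toy illustration of that multi-level view (not the project’s actual analysis), the sketch below aggregates hypothetical household records from the three survey rounds up to the village level within a kebele, so that change over time can be read off directly.

```python
from collections import defaultdict

# Hypothetical household records from three survey rounds
rows = [
    {"round": "baseline", "kebele": "Kebele 1", "village": "Village 1", "uses_latrine": False},
    {"round": "baseline", "kebele": "Kebele 1", "village": "Village 2", "uses_latrine": False},
    {"round": "midline",  "kebele": "Kebele 1", "village": "Village 1", "uses_latrine": True},
    {"round": "midline",  "kebele": "Kebele 1", "village": "Village 2", "uses_latrine": False},
    {"round": "endline",  "kebele": "Kebele 1", "village": "Village 1", "uses_latrine": True},
    {"round": "endline",  "kebele": "Kebele 1", "village": "Village 2", "uses_latrine": True},
]

# Tally latrine use by (round, kebele, village)
tally = defaultdict(lambda: [0, 0])  # key -> [users, households surveyed]
for r in rows:
    key = (r["round"], r["kebele"], r["village"])
    tally[key][1] += 1
    if r["uses_latrine"]:
        tally[key][0] += 1

# Print rounds in chronological order
round_order = {"baseline": 0, "midline": 1, "endline": 2}
for (rnd, kebele, village), (users, hh) in sorted(
        tally.items(), key=lambda kv: (round_order[kv[0][0]], kv[0][1], kv[0][2])):
    print(f"{rnd:<9} {kebele} / {village}: {users}/{hh} households using a latrine")
```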
