Managing for Impact in Rural Development - A Guide for Project M&E - Section 5

5.1 An Overview of Deciding What to Monitor and Evaluate

Have you ever had the experience of going full speed ahead and then realising you are heading in the wrong direction? This is what happened to a cooperative in Chile, as a result of tracking the wrong information.1

Despite several years of hard work, by December 1998, the cooperative found itself unable to repay one of its loans. The cooperative had become top-heavy, with revenue unable to cover its operational and non-operational expenses. It had 11 paid employees when only 100 hectares of vegetables were being grown.

Since 1994, the cooperative had been able to organise many small farmers with little external support. It had strong leadership, a sound understanding of marketing constraints and a clear vision of how to overcome them. The cooperative was trusted by all parties, including INDAP, the national agricultural development institute, and it expanded rapidly due to larger loans and more grants. The results of the earliest investments were considered sufficient proof that this cooperative could make it, and analysis of future prospects became increasingly relaxed.

Monitoring was reduced to tracking physical outputs: a larger warehouse, irrigation systems installed on members' farms, more trucks, more production, etc. Little attention was given to the economic and financial results of these investments, even less to their sustainability. "We never had a method for monitoring this process, we were following the wrong indicators, we did not ask the correct questions and were far too short-sighted," says an INDAP staff member, adding, "In my opinion, the same happened at the cooperative." Another external advisor familiar with the process remarked, "There were two blind persons [INDAP and the cooperative] driving a very fast car."

To get to where you want to go, you need to know what information to seek to guide the journey. If you don't ask the right questions, you will not get useful answers. But the choice of what to ask is vast. How do you know what to choose and whom to involve in the process? How can you balance impact-level insights with tracking operational expenditure? When it comes to detailing precisely what will be tracked, documented and analysed, many choices have to be made by project stakeholders.

5.1.1 Keeping in Mind Different Information Needs

When deciding what information to monitor and evaluate, keep the following in mind:

  1. Identify the information needs of different stakeholders together with them. Do not consider only project management's information needs.
  2. Be sure to include information that can help you answer the five core evaluation questions: relevance, effectiveness, efficiency, impact and sustainability.
  3. Include information that can help you understand how well the project is dealing with cross-cutting issues such as the quality of participation, gender-balanced impacts and reaching the poorest.
  4. Remember to include information for each level of the objective hierarchy: goal, purpose, outputs and activities. This will help you answer the five types of evaluation questions (see point 2).
  5. Include enough operational information to know if you are making optimal use of resources and that operations are good quality.
  6. Seek information that can help you not only to check targets but, especially, to explain progress. Only by knowing why something is or is not happening do you have a basis for deciding what corrective action is needed.
  7. Look out for the unintended. Tracking information related to the objective hierarchy will only keep you up to date on what you intend to achieve. Seek out unintended positive and negative impacts in order to take any corrective action that might be necessary.
  8. Last but not least, stick to the "less-is-more" principle. Only include a piece of information if someone in the project clearly uses it to improve impact. Regularly revise your list of information needs to filter out the information that does not seem to be critical to manage for impact.

5.1.2 Value of the M&E Matrix

The logical framework approach (LFA) that all IFAD-supported projects need to follow does not provide much detailed guidance on what information is useful to track. The standard logframe matrix provides insufficient space for detailed M&E comments. Only two columns are suggested in which to summarise M&E: a column for "indicators" and one for "means of verification" (see Section 4). This is not enough to be able to implement M&E.

To make M&E operational you need much more detail. This can be summarised in the "M&E matrix" (see Section 5.3), which contains the following information:

  • performance questions;
  • information needs and indicators;
  • baseline information requirements, status and responsibilities;
  • data-gathering methods, frequency and responsibilities;
  • required forms, planning, training, data management, expertise, resources and responsibilities;
  • analysis, reporting, feedback and change processes and responsibilities.

Looking at the matrix, you might well wonder about the need for all this detail. A rule of thumb is: "if everyone knows what they need to do, when, why and for whom, then you have enough detail". Until then, keep adding detail with the appropriate people.

Developing the M&E matrix after project start-up involves six steps:

  1. Identify performance questions.
  2. Identify information needs and indicators.
  3. Know what baseline information you need.
  4. Select which data-gathering methods to use, by whom and how often.
  5. Identify the necessary practical support for information gathering.
  6. Organise analysis, feedback and change.

The rest of this section details how to work with the M&E matrix. Annex C provides an example of the M&E matrix, which is based on the logframe example in Annex B.

5.1.3 Performance Questions and Indicators

It is common practice to jump straight from having refined the objectives in the logframe matrix to detailing the indicators. This causes a series of problems as people drown in detail before agreeing on why the indicators they suggest might be of interest and how they could support decision-making.

Identifying performance questions for each level of the objective hierarchy (see point 4 in 5.1.1 above), before detailing indicators, helps you focus your information-gathering on what will truly advance understanding and improve project performance. Performance questions are very useful for projects that are trying to innovate the "how to" of development. For example, the MARENASS project in Peru disburses all funding through farmer competitions, while the FODESA project in Mali sub-contracts all activities. These projects need to learn how to do this well, so they must monitor the quality of the process, not just whether targets are hit.

With performance questions, you can start identifying what information you need. This can include indicators and, possibly, additional background information that allows you to interpret the data from the indicators. Indicators will only ever show a partial view. They represent a simplification or approximation of a situation. An indicator simply helps communicate changes that are usually more complex. Using an indicator often means reducing data to the symbolic representation of a project objective, in a way that is relevant and significant for the people who will use the information.

Almost any topic that needs to be monitored can be assessed using either quantitative or qualitative indicators, according to the kind of information you need. Many indicators use adjectives. Common adjectives in indicators are: successful, adequate, equitable, good, effective, participatory, empowered and well functioning. When using adjectives in indicators, make sure everyone involved agrees on what they mean.

When working with indicators to assess impact, you are trying to create an overall picture built up from various aspects. A typical project will want to know its impact on "quality of life" or "poverty alleviation". Yet each project component makes a unique contribution: health activities reduce morbidity/mortality, agricultural development helps increase yields and incomes, functional literacy builds self-esteem, etc. So one indicator, or even several, will not be adequate to understand the changes. For impact assessments, a descriptive analysis, rather than single indicators, often better captures the overall changes.

5.1.4 Comparing to See Change

One of the first concrete tasks that you, as project director or M&E unit coordinator, are likely to face is establishing baselines. To see change, you will need to make a comparison. A baseline serves as a point of comparison. You have three options, each with their advantages and disadvantages (see 5.6):

  1. Compare the situation "before the project started" of, for example, a community, household or organisation with the situation "after it started".
  2. Track changes with and without a project presence, which means comparing changes inside the project area with those in similar locations outside the project area.
  3. Compare the difference between similar groups: one that has been working with the project and a so-called "control group" that is not within the project's influence.

If a baseline study is not feasible, three alternatives are: (1) using the first measurement as the starting point, even if it is after your intervention has started; (2) using a rolling baseline, in which you collect information about a site or group only when you start working there or with them; and (3) making optimal use of existing documentation to develop an overview of the situation.
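A worked illustration may help make these comparisons concrete. The following sketch (in Python, with purely invented household-income figures) contrasts a simple before/after comparison with a with/without comparison, which nets out change that would probably have happened anyway:

    # Illustrative only: hypothetical average household incomes (per year).
    before_project = 410   # project communities, baseline year
    after_project = 520    # project communities, year 5
    before_control = 400   # comparable non-project communities, baseline year
    after_control = 450    # comparable non-project communities, year 5

    # Option 1: "before/after" comparison inside the project area.
    gross_change = after_project - before_project            # 110

    # Options 2 and 3: "with/without" comparison using a control group.
    # Subtracting the control group's change removes improvement that
    # would likely have occurred even without the project.
    control_change = after_control - before_control          # 50
    net_change = gross_change - control_change               # 60

The with/without estimate (60) is smaller than the gross before/after change (110) because part of the improvement also occurred in non-project areas; this is exactly the kind of over-claiming that a comparison group guards against.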

5.1.5 Updating Your Information Needs and Indicators

The sign of a healthy M&E system is that it evolves over time. As the project evolves, activities will change, groups will evolve, and the understanding of what information is useful will grow. Plan regular revision of the list of information needs and indicators.


5.2 Knowing What You Need to Know

5.2.1 Information for - and with - Different Stakeholders

To decide what you need to know, first make an effort to understand the information needs of different stakeholders (see Box 5-1). This requires analysis with the stakeholders of the information they need, either by asking them to develop their own list of information needs or by checking a suggested list with them. Stakeholders are likely to choose to focus their M&E requirements on their areas of specific interest (see Table 5-1). Including different stakeholders in identifying what information to track will also increase the likelihood that the information will be used.

 

Box 5-1. Knowing who needs to know and do what in Bangladesh

The core team of the ADIP project (Bangladesh) recommended working with various stakeholders (target primary stakeholders, NGOs and their group facilitators, government staff, etc.) to monitor impact according to their specific interests, as follows: "Target groups should be encouraged to observe and document changes in self-employment, production and income, and improvements of their living conditions in terms of food security, child education, water and sanitation, assets and housing. The NGO group facilitators should be enabled to monitor group development, gender relations and the advancement of group members' individual capacities (literacy, book-keeping, etc.). Field extension officers should be trained in applying simple methods to monitor changes in knowledge and skills, adoption of new agricultural and horticultural management techniques, and diversification and intensification of production."

 

Table 5-1. Examples of indicators for different stakeholders in a farmer-to-farmer extension project, Mexico 2

Funding Agencies:
  • Learning opportunities
  • Widening impact
  • Local vision and support
  • Owning the project
  • Changes in income/wealth relative to others
  • Yields
  • Results maintained over time
  • Erosion (control)
  • Quality of crop
  • Income
  • Creating independent income
  • Discouraging migration

Extension Workers and Technical Advisors:
  • Agro-climatic conditions faced
  • Alliance/Network-seeking
  • [Work with] Indigenous populations
  • Sideways extension (number of experimental plots)
  • Strength in defending technical experience locally
  • Acquiring knowledge
  • Commitment of extension agents
  • Nutrition and vitamins
  • Labour, input
  • Ease of cultivation
  • Not inspiring the criticism of others
  • Providing employment

Farmers:
  • Marginal areas
  • Health and gender awareness
  • Ability to speak the farmers' language
  • Impact of learning workshops
  • Changes in behaviour
  • Persistence
  • Simplicity in language and management of technology required
  • Yields
  • Variety in production (diversification)
  • Working together as a group
  • Self-respect
  • Teaching something useful, practical

List all key stakeholders and organise meetings with them to define their information needs (see Box 5-2). Be aware that not all information needs can be anticipated ahead of time. As the project evolves and stakeholders develop their visions for and understanding of the project, information needs will have to be adjusted (see 5.7).

The project M&E unit may need to coordinate the information flows to ensure that pieces of information complement (and do not duplicate) each other and to organise everyone's access to each other's data and analysis. See Section 6 for more on developing an M&E communication strategy.

 

Box 5-2. Compiling ideas before deciding on indicators in Zimbabwe

In an irrigation project in Zimbabwe, when the logframe was being revised, an initial set of indicators had been collected by project staff and consultants through visits to irrigation schemes and discussions with male and female farmers, district officials and extension workers. To refine this set, two 1.5-day workshops were held with about 40 participants each. First, participants learned about the concept and purpose of monitoring in the project, and the project outputs and collected indicators were presented. The scheme-specific indicators were then refined with the farmers, and the institutional indicators through discussions with project management. Institutional linkages and the roles/responsibilities for monitoring at the scheme, district and national levels were also discussed.

 

5.2.2 M&E for Different Levels in the Objective Hierarchy

Start by identifying your information needs in relation to the objective hierarchy. Each level of the objective hierarchy (goal, purpose, output and activity) has unique performance questions and therefore its own information needs. In general, as you move from activities up to the goal in the objective hierarchy, M&E becomes less straightforward (see Table 5-2). For example, at the activity and output levels, you can quite easily track which activities have been completed and their direct outputs. This is operational information. However, it is more difficult to identify the combined outcomes of those outputs.

At the impact level, assessing the extent to which a project has reduced poverty and improved people's livelihoods requires careful thought about the performance questions and indicators that will be appropriate. In general, as you move up the objective hierarchy, you will probably find it necessary to integrate qualitative and quantitative information, relying less on single quantitative indicators to make sense of progress.

Table 5-2. Shifting information needs in the objective hierarchy (level in objective hierarchy: what to monitor and evaluate)

Activities: Have planned activities been completed on time and within budget? What unplanned activities have been completed?

Outputs: What direct tangible products or services has the project delivered as a result of activities?

Key outcomes/components: What changes have occurred as a result of the outputs, and to what extent are these likely to contribute towards the project purpose and desired impact?

Purpose: Over its life, overall, has the project achieved the changes for which it can realistically be held accountable?

Impact: To what extent has the project contributed towards its longer-term goals? Why or why not? What unanticipated positive or negative consequences did the project have? Why did they arise?

5.2.3 The Five Key Questions

After you identify what basic information you will require to gauge whether you are proceeding according to plan, you might have to add more information needs. You must ensure that you can answer the five standard types of evaluation questions (see Section 2.1), referred to here as "the five key questions":

  1. Relevance: Was/is the project a good idea given the situation needing improvement? Does it deal with target-group priorities? Why or why not?
  2. Effectiveness: Have the planned purpose and component objectives, outputs and activities been achieved? Why or why not? Is the intervention logic correct? Why or why not?
  3. Efficiency: Were inputs (resources and time) used in the best possible way to achieve outcomes? Why or why not? What could we do differently to improve implementation, thereby maximising impact, at an acceptable and sustainable cost?
  4. Impact: To what extent has the project contributed towards poverty reduction (or its other longer-term goals)? Why or why not? What unanticipated positive or negative consequences did the project have? Why did they arise?
  5. Sustainability: Will there be continued positive impacts as a result of the project once it has finished? Why or why not?

The M&E of operations will focus on the questions of "effectiveness" and "efficiency". More strategic reflections, such as those during annual reviews and supervision missions, will look at the questions of "relevance", "impact" and "sustainability". Some projects are also asked to prove their cost-effectiveness (see Box 5-3).

 

Box 5-3. Understanding cost-effectiveness

Increasingly, projects are asked to prove their cost-effectiveness. This means showing how much they spend per "product" or per "service". For example, per person who attends the new health clinic, how much has the project spent on staff time, training, kilometres of transport and construction materials? This can be calculated by comparing the real costs of the project to the original estimated costs. Another, more common version is to calculate unit costs and compare these to such costs in other, similar projects. Cost-effectiveness analysis should lead to the greatest benefits at the lowest possible cost per unit for each benefit, however "benefit" might be defined.

However, in practice, this type of calculation is difficult when working on less tangible issues such as strengthening local organisations, increasing women's awareness, building stronger democracy, etc. It is not as easy to count one unit of "extra democracy" as it is to count increased clinic attendance. Also, what is considered the "effective" use of resources in one context may be considered a waste in another.
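To make the unit-cost arithmetic concrete, here is a minimal sketch for the health-clinic example above (the cost categories follow the box text; all figures are invented):

    # Hypothetical annual costs attributable to a new health clinic.
    costs = {
        "staff time": 18_000,
        "training": 4_000,
        "transport": 2_500,
        "construction materials (annualised)": 5_500,
    }
    attendances = 6_000  # clinic visits this year (illustrative)

    unit_cost = sum(costs.values()) / attendances
    print(f"Cost per attendance: {unit_cost:.2f}")  # 5.00

    # The "more common version": compare the unit cost with that of
    # similar clinics supported by other projects.
    benchmark = 6.20     # hypothetical unit cost elsewhere
    print("below benchmark" if unit_cost < benchmark else "above benchmark")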

 

5.2.4 Keeping an Eye on Cross-Cutting Concerns

Many IFAD-supported projects strive towards encouraging gender equality hand in hand with poverty reduction. Knowing how well you are doing on the gender-equality scale will require an M&E system that tracks gender-disaggregated differences. Without this, a project will find it very difficult to prove its effectiveness for any gender-sensitive objectives such as "increased purchasing power" or "increased access to land". Indicators will need to be formulated that enable gender-disaggregated data collection and analysis. Different aspects of the baseline and interim thematic studies also need to be gender sensitive.

In a Zimbabwe project, during workshops for preparing the monitoring system, there was strong debate among participants about how to include a gender-sensitive perspective. Gender concerns are crucial for a successful project, as gender imbalances persist in terms of plot-holding, division of labour, access to profit, etc. Yet gender issues had not been spelled out in project objectives. Focusing on gender when monitoring change allowed it to appear as a cross-cutting concern.

Your specific gender-related information needs will relate to your objectives, so the following examples (and the short sketch after them) are only to provide inspiration:
  • incidence of stunting among boys and girls;
  • number and type of households participating in micro-credit related income-generating activities, with special consideration of female-headed households from poor and very poor households;
  • the number and gender of out-of-school children and dropouts;
  • number of male and female farmers affording basic food, increased from x% to y% of the target population by the end of the programme;
  • number of diseases among women/men and girls/boys related to malnutrition, decreased from x to y by the end of the programme.
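In practice, gender disaggregation simply means keeping and analysing the underlying records by sex rather than as a single total. A minimal sketch (the record layout and counts are hypothetical, not from any project database):

    from collections import Counter

    # Hypothetical out-of-school children records: (child_id, sex)
    records = [
        ("c01", "girl"), ("c02", "boy"), ("c03", "girl"),
        ("c04", "girl"), ("c05", "boy"),
    ]

    # The total alone (5 children) would hide the gender imbalance
    # that the disaggregated indicator is designed to reveal.
    by_sex = Counter(sex for _, sex in records)
    print(by_sex)  # Counter({'girl': 3, 'boy': 2})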

Other social differences that a project considers critical also need monitoring. For example, in Nepal, a project will be disaggregating data not only by gender, but also by caste and ethnic groups. This will help the implementing team determine whether the most vulnerable groups are benefiting.

5.2.5 Remembering Operational Information

Information for managing project operations is just as important for overall performance as information about achieving the project strategy. Monitoring operational information tends to be straightforward for most projects, partly because physical and financial monitoring involves simple counting. But as there is so much that can be counted, the trick is to limit this type of monitoring to what is necessary. For the key areas of operational management, Table 5-3 lists the main management tasks and the corresponding information needs.

Table 5-3. Key areas of operational management, management tasks and information needs

Work planning and activity tracking
  Key management tasks:
  - Annual, quarterly and weekly activity planning
  - Allocation of resources to activities
  - Checking progress on activities and responding to problems
  Information needs:
  - Detailed activity, sub-activity and task lists for achievement of outputs
  - Lists of required resources per activity
  - Activity and task progress

Financial management
  Key management tasks:
  - Allocation of financial resources to activities and tasks
  - Monitoring expenditure according to budget
  - Revising budgets as needed
  Information needs:
  - General project financial-management information

Plant, building and equipment management
  Key management tasks:
  - Purchasing and maintaining equipment
  - Allocating equipment
  Information needs:
  - Asset register
  - Vehicle use
  - Equipment maintenance schedule, standards and responsibilities

Staff management
  Key management tasks:
  - Developing and monitoring staff work plans
  - Staff performance appraisal
  Information needs:
  - Time use of staff

Contract management
  Key management tasks:
  - Developing contracts
  - Monitoring delivery of contracts
  Information needs:
  - Copies of contracts
  - Dates of completion of contracts
  - Report on quality of contract fulfilment

5.2.6 Tracking Quality and Context to Explain Progress

In Indonesia, project staff said, "We need to understand the link between physical progress monitoring and the benefits of physical outputs for the rural poor. For example, we don't know what effect it has on the poor when the monitoring data shows that 50 of the 100 km of feeder roads have now been built. So we don't know the benefits of our investments. With our current physical indicators, we cannot see the link between investment, activity, progress and benefit."

To explain progress, and not just measure how much of something occurred, you can:

  • monitor the quality of the implementation process;
  • use qualitative methods that ask people about their opinions on the process;
  • keep up-to-date on the operating environment.

As project director or M&E unit coordinator, you will probably find that keeping track of the use of inputs and of targets for activities and outputs is time consuming. Yet it is essential. Furthermore, the example of the Chilean cooperative, at the beginning of Section 5, shows that it is not enough. You will need to know why something is or is not working well so you are able to provide strategic guidance and make appropriate adjustments. Simply knowing that you have, for example, built 86% of the roads within the expected timeframe does not tell you whether they are of good quality, in the right place and having an impact on poverty, or whether capacities have been built to maintain them.

Let's take a practical example to see how targets are linked with monitoring that explains progress. Many IFAD-supported projects intend to "build capacity" or "develop local institutions". Common indicators for this are, for example, "number of small farmer groups formed" or "number of extension staff trained". However, this tells you nothing about the quality of the work or about impact. You might have helped initiate 100 small farmer groups but find that, six months after the first meeting, only 18 are still functioning. So you will need to monitor, for instance, the quality of the process through which these groups are set up so that, later, you are better able to make the adjustments needed to sustain the groups. Similarly, if you want to assess impact, you would need to evaluate with group members how membership in the group is (or is not) improving their livelihoods.

In practice, monitoring in order to be able to understand what the numbers mean requires the use of qualitative methods (see Section 6 and Annex D).

Keeping informed of the operating environment is also critical to interpret success or failure. Section 2.2.3 discusses ways to keep track of the project context. Those involved in the projects will update themselves through existing information sources and via their formal and informal networks. But updates can also be sub-contracted as pieces of research on key topics relevant to your project. You can also organise an annual seminar to which you invite specialists to provide an overview of trends. The issues you will need to track depend on the project focus. Common issues include: legislation, macro-economics (markets, prices), agricultural price policies and trends at the national/international level, poverty status, gender relations, the organisational landscape, demographic change and health trends.

5.2.7 Looking Out for the Unintended

Indicators are critical to projects. They represent information you know will interest you. But what about important information that you do not expect? In the Chile example, nobody thought to look at the financial results of the new investments.

Some projects include in their annual, mid-term and completion evaluations the question of unintended positive and negative impacts that are not part of the objective hierarchy. This is a good M&E practice. Section 6 and Annex D describe some ways to assess unintended impacts.

You can also track the unexpected through more regular reflections. When deciding what to track, you cannot anticipate the unknown. But you can plan time to reflect on the unexpected. Ask yourself, "What happened with respect to this project activity/relationship/output/component that we did not expect?" To work through this, the project should address the questions:

  • What happened since we last met that was unexpected?
  • How was it different from what we expected?
  • What are the implications of the unexpected for our work (e.g., for a specific activity, a relationship with another organisation or a specific project output)?

5.2.8 The Less-Is-More Principle

One of the most difficult tasks for projects is to monitor within their limits. Ministry staff involved in one project in Indonesia said, "In central Jakarta we only get data on a monthly basis from 30% of the groups. In the two provinces that perform the best, we receive data from 80% of the groups." If the requirements were reduced (frequency, number of indicators and level of detail), the project might get a better response rate and be able to use its limited resources for more optimal monitoring.

Probably the biggest complaint of project M&E staff is that monitoring many indicators gets in the way of the "real" work of implementation. It is very important to reduce data collection to the minimum necessary to meet key management, learning and reporting needs. Trying to monitor too much can ruin the entire M&E system.

The PADEMER project in Colombia encountered many difficulties due to the numerous indicators suggested in the appraisal report. So, the monitoring unit facilitated a revision process for the indicators with the national technical coordination unit and the implementing NGOs. All agreed to continue using the key impact indicators as given in the appraisal report ("variation in incomes" and "generation of employment"). They then formulated indicators for the five project components: productive development, business management, markets and marketing, organisational development and financial services. They reduced over 100 indicators to 18 key ones that can demonstrate the changes the project stakeholders expect to deliver.

With those involved in detailing the operational M&E plan, screen all proposed indicators before agreeing to monitor them. For every indicator or piece of information that you or others suggest monitoring and evaluating, ask yourself, "Who needs to use this information, when and to do what exactly?" In a project in Indonesia, data on livestock, farm inputs, group details (e.g., savings, loans, training completed and technical progress made) and finance and administration information are recorded. Fieldworkers collect information from 13 different record books kept by each farmer group. Perhaps screening this project's indicators for quality and end-use could make monitoring more useful and less of a burden.

When there is doubt about an indicator, seriously consider excluding it from your M&E plan, as tempting as it might be to think that someone may find it of interest. Including what is "nice to know" will only make your life difficult. Try to include only what you "need to know".
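One way to operationalise this screening is to require that every proposed indicator names a user, a decision it informs and a collection frequency before it enters the M&E plan. A hypothetical sketch (the indicators and field names are invented for illustration):

    # "Who needs to use this information, when and to do what exactly?"
    proposed = [
        {"indicator": "number of small farmer groups still functioning",
         "user": "project director",
         "use": "adjust group-formation support",
         "frequency": "quarterly"},
        {"indicator": "colour of group record-book covers",
         "user": None, "use": None, "frequency": None},
    ]

    keep = [p for p in proposed
            if p["user"] and p["use"] and p["frequency"]]
    drop = [p for p in proposed if p not in keep]

    print("Keep:", [p["indicator"] for p in keep])
    print("Drop (no identified user or use):",
          [p["indicator"] for p in drop])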


5.3 Using the M&E Matrix for Detailed Planning

5.3.1 About the M&E Matrix

To make M&E operational you need much more detail, which can be summarised in the "M&E matrix" (see Table 5-4). The rest of this section and Sections 6 to 8 provide the details on how to deal with each column. Here we will briefly outline the M&E matrix, looking at each column in turn.

Table 5-4. Contents of the M&E matrix

Column headings:
  • Performance Questions
  • Information Needs and Indicators
  • Baseline Information Requirements, Status and Responsibilities
  • Data-Gathering Methods, Frequency and Responsibilities
  • Required Forms, Planning, Training, Data Management, Expertise, Resources and Responsibilities
  • Analysis, Reporting, Feedback and Change Processes, and Responsibilities

Row headings (one row per logframe element, with the cells left blank for completion), for example:
  • EXAMPLE Project Key Outcome 1:
  • EXAMPLE Project Activity 1.1:
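Projects that keep the matrix electronically may find a structured record per logframe element convenient. One possible representation (a sketch only; the fields mirror the matrix columns above, while the example values are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class MatrixRow:
        """One row of the M&E matrix for one logframe element."""
        objective: str
        performance_questions: list = field(default_factory=list)
        information_needs: list = field(default_factory=list)  # incl. indicators
        baseline: str = ""          # requirements, status, responsibilities
        data_gathering: str = ""    # methods, frequency, responsibilities
        support: str = ""           # forms, training, data management, resources
        analysis_feedback: str = "" # analysis, reporting, feedback, change

    row = MatrixRow(
        objective="Output 1: savings and credit services available to the poor improved",
        performance_questions=["Who has benefited from which type of services?"],
        information_needs=["numbers of people making use of each service"],
    )
    print(row.objective)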

5.3.2 Step 1. Identifying Performance Questions

Rather than starting with indicators, first identify performance questions. This helps you focus your information gathering on what you will really use to understand and improve project performance. Identifying performance questions (and, subsequently, indicators and methods) is iterative: make an initial choice, assess its feasibility, and either accept and use it or reject it and consider the next option. Step 1 is discussed in 5.4.

5.3.3 Step 2. Identifying Information Needs and Indicators

Using your performance questions, you can more easily identify useful indicators and other information needs for which you will need to collect data. Only data that help answer your performance questions are necessary. This helps avoid collecting information that is difficult to use to guide the project strategy and operations. This step is treated in detail in 5.5.

5.3.4 Step 3. Knowing What Baseline Information You Need

Many baseline studies suffer from information overload and lack of use. When deciding whether you need to collect baseline data for a particular performance question, ask yourself if you need to compare information to be able to answer the question. If not, or if information already exists, then you will not need to collect baseline data. This step is treated in detail in 5.6.

5.3.5 Step 4. Selecting Which Data Collection Methods to Use, by Whom and How Often

Once you have decided what information is needed and what indicators will be used, you need to decide which methods will be used for gathering the data. You have many options: methods that are more qualitative or more quantitative, more or less participatory, and more or less resource intensive. Each will provide information of varying degrees of accuracy and reliability.

Deciding which methods to use requires balancing these different factors (see Box 5-4). When you examine the consequences of a particular performance question or indicator, you may need to change it if it is impractical or too expensive. This includes looking at who will be using the method and how often it will be applied. For example, if there is no existing capacity to use your preferred method, you need to plan training or, if you have no resources for that, choose another method.

Frequency of collection also needs to be established. This will vary per question and indicator. If data for one critical indicator needs to be collected often, then you may need to reduce the frequency of another less important indicator or delete it altogether. Methods are considered in detail in Section 6 and Annex D.

 

Box 5-4. Balancing cost, type of information and extra benefits

Suppose your performance question is "What improvements have there been in household food security as a result of the project's activities?" You will need two main pieces of information: (1) the types and extent of changes in food security as experienced by the target households and (2) the extent to which these changes can be attributed to the project. This type of information would not be analysed very often, as it would only change slowly. So a survey once every two years or so should give you an indication of changes.

 

To gather information on food security changes you could consider three different methods: (1) a detailed household survey conducted by independent researchers, (2) a participatory assessment process where women household members do their own monitoring and discuss their findings, or (3) focus-group meetings to discuss changes that specific social groups have experienced. The first method would be the most resource intensive but may yield the most quantifiable outputs. If well facilitated, the second method can also yield precise results but at a lower cost than the first method and perhaps with interesting discussions from which new ideas emerge. An extra advantage of this method could be better understanding about the project by village women. The third method would yield the least precise and least quantitative information but would be the least resource intensive. Before embarking on resource-intensive data-collection exercises, carefully consider whether a simpler method would yield sufficient information of good enough quality for your purposes.

5.3.6 Step 5. Identifying the Necessary Practical Support for Information Gathering

For a method to lead to the information you require, you will need to organise the conditions to make it work. These are often forgotten in the focus on identification of indicators but are critical to success. For each method, consider if and how you need to:

  • develop forms to record data;
  • develop forms, filing systems and databases for collating and storing information;
  • train staff, partners or community members who will be involved;
  • check and validate data;
  • organise external M&E or research expertise that may be needed;
  • agree on responsibilities for different tasks;
  • ensure everyone has sufficient financial resources and equipment.

This topic is dealt with in detail in Sections 6 and 7.

5.3.7 Step 6. Organising Analysis, Feedback and Change

In the rush to get out and start collecting data, many M&E units pay insufficient attention to the process of using the information for analysis and directing changes in the project.

To make sure that data will be used and not just collected, think about how you will organise the analysis of information for each performance question. Sometimes a performance question cannot be answered without prior analysis of several bits of information. Who will do it? When will it happen? Also consider what form information should be in so that it can be used by different stakeholders. For example, will it be useful to present information visually, in graphs or maps? Or do you need to organise several community meetings to get more feedback on the initial analysis of the information?

Most importantly, consider how the generated information can be used to check progress and make improvements as the project proceeds. This topic is discussed in Sections 6 and 8.


5.4 Being Guided by Performance Questions

5.4.1 What Is a Performance Question?

At project start-up, most projects will move straight into identifying quantitative indicators after revising their objective hierarchy of the logframe matrix (see 3.3). This commonly results in long lists of quantitative indicators that focus only on targets, leaving out other information essential to explain the resulting numbers. Without understanding the "why", it is difficult to adjust the project strategy and operations to achieve more impact.

Instead, try starting by identifying the key questions (performance questions) that you need to answer for each activity and output and for the purpose and the goal. By focusing first on questions, you can avoid being overwhelmed by indicators that, in the end, may not tell you what you really need to know in order to improve the project.

A performance question helps focus your information-seeking and information-analysis processes on what is necessary in order to know if the project is performing as planned or, if not, why not. Once you have your performance questions, you can more easily decide what information you need to track rather than what is nice to track.

A performance question makes it easier for you to analyse different kinds of information together by giving you a structure for combining the information. This is particularly important at higher levels in the objective hierarchy. Having a structure will reduce the problem of having different indicators from different levels in the objective hierarchy and not being able to figure out what is going on. Table 5-5 shows this clearly. Projects without the performance questions will only have the information/indicators in the right-hand column, which they then have to make sense of in relation to the goal, purpose or output.

Let's take an example related to training, which can be found in most projects. Suppose one of your project objectives is "agricultural extension workers using more participatory approaches in their work with farmers". The related project activity might be "organise five 10-day training courses for a total of 60 extension workers". It is obviously easy to keep track of the number of courses run, for how long and for how many participants. At the output level, you could simply add up the number of extension staff who have received training in participatory methods. But you are probably aiming to improve the extent to which these participatory methods are actually used in the field and, through that, the contribution to farmers adopting improved farming practices.

A quantitative indicator could be "the per cent of trained extension officers using participatory methods in the field". But to what do the terms "participatory" and "using" refer? The indicator says nothing about the extent or quality with which the methods are being used, so it provides relatively little useful information. In this case, a performance question is more useful, for example, "Are the trained extension officers using their participatory skills effectively in the field?" Self-reporting by extension agents about how their work in the field is progressing can be supplemented by reports from farmers with whom the extension agents interact. Only by counting the numbers, knowing how well the skills are being applied and learning how farmers value this change will you have an answer that helps you know if the project is being managed for impact.

Remember that the activity level in the logframe does not need indicators, so performance questions will also not be needed at that level.

Table 5-5. Examples of performance questions and the link to information needs, including indicators

Goal: sustained improvement in the off-farm income of 135,000 poor households living in the Penkalingo lowlands
  Examples of performance questions:
  - What kinds of improvements have been made as a result of increased income opportunities facilitated by the project?
  - Who has benefited from these improvements?
  - Which target groups have not benefited?
  - What is the likelihood that improvements will be sustained?
  - What are the unintended negative or positive impacts of these enhanced income-generating activities (IGAs)?
  Examples of information needs and indicators:
  - Types of improvements per target group
  - Level of income changes (increase/decrease) per target group
  - People's own assessment of why incomes have increased or decreased
  - Per cent of households who have not benefited
  - Threats to sustaining income increases
  - Negative impacts of IGAs (social, environmental, etc.)
  - Other positive development impacts of IGAs

Purpose: enhance income-generating activities for the project target groups
  Examples of performance questions:
  - What types of income generation have been created?
  - How many people have taken up which new IGAs?
  Examples of information needs and indicators:
  - Types of IGAs created
  - Number of people who are pursuing each IGA
  - Types of IGAs for which people feel a need

Output 1: savings and credit services available to the poor improved
  Examples of performance questions:
  - Who has benefited from which type of services?
  - Who has been excluded?
  Examples of information needs and indicators:
  - Types of savings/credit services
  - Numbers of people making use of each service
  - Problems with services and their causes
  - Numbers of target group excluded from each service
  - Level of local capacity to sustain services

Output 2: entrepreneurial skills among participating households developed
  Examples of performance questions:
  - What types of skills have been improved among how many households?
  - Is there a gender balance in skill development?
  - Do these skills fulfil a need in the project area?
  Examples of information needs and indicators:
  - Types of entrepreneurial skills developed
  - Level of skills developed (women/men)
  - Numbers in target group (women/men) with new skills
  - Numbers of target group excluded from skill development and the causes of this
  - Local demand for new skills developed

5.4.2 Working with Performance Questions

Project staff are so used to immediately diving into indicators that at first they might find it a bit confusing to focus on performance questions beforehand. The following question can help you find a good performance question for each level of the objective hierarchy:

What questions would you need to answer to know the extent to which you are achieving the objective and to explain the success or failure of actual results?

The performance questions you identify may be quite simple. For example, at the "activity" level in the objective hierarchy, all you need to do is find out if the activity has been carried out well and on time. Also at the "output" level of the objective hierarchy you will often be able to limit the questions to a few that are relatively easy to quantify. For example, in Table 5-5, the output level questions are "What types of skills have been improved among how many households?" "Is there a gender balance in skill development?" and "Do these skills fulfil a need in the project area?"

At the purpose and goal levels in the objective hierarchy, the performance questions become more qualitative and more effective when posed along with other questions. This is because observable changes at these levels are the result of all the underlying activities or outputs. To assess performance at the purpose and goal level, you will need to consider the interactions between the changes at each level and whether the changes you see can be attributed to project activities or outputs.

One particularly important type of performance question concerns projects trying to innovate how they deliver certain activities or outputs. Learning by trying out new ways of working becomes vital. For example, a project might have planned to support the establishment of self-reliant water users' associations, but might only discover the best way of doing this after several attempts and corrections. In the FODESA project in Mali, management will initiate annual participatory reviews and impact assessments in each community, sub-contracting this responsibility to NGOs and consultants. As this is a methodological experiment, important performance questions for FODESA could include "Do villagers feel that the sub-contractors are facilitating the participatory reviews well?" and "Is the information coming from these annual reviews helping guide the project strategy and operations?" In effect, this becomes a mini research project for FODESA. As it becomes clearer how best to do annual village reviews, the performance questions will change or even be eliminated.

Performance questions do not have to be elaborate nor do you need many. The most basic types of performance questions are shown in Box 5-5. After the performance questions are agreed, then you can decide what information you need to answer them. This includes indicator identification.

 

Box 5-5. The basic performance questions per level of the objective hierarchy

  • Activities: What have we actually done?
  • Outputs: What have we delivered as a result of project activities (e.g., number of people trained)?
  • Outcomes (results): What has been achieved as a result of the outputs (e.g., the extent to which those trained are effectively using new skills)?
  • Impacts: What has been achieved as a result of the outcomes (e.g., to what extent are NGOs more effective)? What contribution is being made to the goal? Are there any unanticipated positive or negative impacts?
  • Lessons: What has been learned from the project that can contribute to improved project implementation or to building relevant fields of knowledge?


5.5 Focusing on Key Information and Optimal Indicators

Once you have drafted a list of performance questions, the next step is to identify what information is needed to answer the questions. First check if the question can be answered with a simple, reliable indicator. For activities and outputs this may be possible. If it is not possible (see Box 5-6), then you need to think more carefully about the different types of information you require to answer the performance questions. This will particularly be the case for the higher levels in the objective hierarchy (goal and purpose), where indicators are rarely able to provide the insights needed to judge outcomes and impacts.

 

Box 5-6. Knowing when a single indicator is not enough

You might have an objective as follows:

"By the end of the fifth year of the project, 50% of the families in the project area cover 25% of their annual cash needs from selling services based on the skills they acquired through training provided by the project."

There is no single indicator to measure this objective. You will need different types of information:

  • per cent of families earning income from skills they acquired through training workshops provided by the project;
  • amount earned by households who participated in the training workshops provided by the project;
  • annual cash needs per household.
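A hedged sketch of how these three pieces of information combine to test the objective (the household figures are invented for illustration):

    # Hypothetical per-household data:
    # (income from trained skills, annual cash needs)
    households = [(300, 1000), (150, 1000), (400, 1200),
                  (0, 900), (260, 1000)]

    # A family "counts" if skill-based earnings cover at least 25%
    # of its annual cash needs.
    covering = [h for h in households if h[0] / h[1] >= 0.25]
    share = len(covering) / len(households)

    print(f"{share:.0%} of families meet the 25% threshold")  # 60%
    print("objective met" if share >= 0.50 else "objective not met")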

 

5.5.1 Types of Change and Information

Your first step will be to be completely clear about what information you need to answer your performance question. Do you want to know about changes in:

  • the presence of something (e.g., numbers of seed banks or farmer-led field trials)?
  • the type of access to an innovation or new service (e.g., are worse-off people or better-off people participating in new crop trials)?
  • the level of use (e.g., the frequency with which each farmer uses a rotating fund or other credit source)?
  • the extent of an activity or coverage (e.g., number of members of the credit group or number of people involved with maize trials and who is excluded)?
  • the relevance of the agricultural innovation (e.g., do seed banks resolve a key production bottleneck or not)?
  • the quality of an innovation (e.g., the quality of seeds in the seed bank or the effectiveness of an integrated pest management approach to banana weevil control)?
  • the effort required to achieve a change (e.g., the labour required for new soil management with contour line ploughing)?

Box 5-7 describes an interesting framework that can help you identify the changes in which you are interested.

 

Box 5-7. Identifying impact indicators using the Grassroots Development Framework 3

The Inter-American Foundation created the Grassroots Development Framework (GDF) to measure the results and impact of projects. It is based on the premise that grassroots development produces results at three levels (individuals, organisations and society) and impacts of two types (tangible and intangible). The combination of three levels of results and two types of impacts yields six main categories that represent local development objectives and for which locally relevant indicators can be chosen.

  • At the individual or family level, tangible impacts relate to changes in quality of life, including people's environment and livelihoods. Intangible impacts refer to personal capacities, concerning changes in individual expectations, motivations and actions.
  • At the organisational or social-capital level, tangible impacts pertain to local management and reflect the capacity of organisations and municipalities to engage in local development. Intangible impacts refer to commitment to collaboration and look at changes in the development values and practices of local leadership.
  • At the level of the society as a whole, tangible impacts include creating civil society opportunities that deal with the institutionalisation of democracy. Intangible impacts measure the basis of citizenship in terms of changes in culture of citizenship, or collective behaviour, towards greater tolerance and respect for social and cultural diversity.

You can ask different stakeholder groups to discuss each category and then discuss the compiled indicators with them. Prioritising the indicators can give you a solid base for assessing impact, as you will be capturing a wide range of impacts.

 

Irrespective of what information you seek, you will need to understand the reasons for the changes you observe. If these are different from those anticipated, you will need to ask the question, "Why is there more or less change than anticipated?" in order to manage the project for better impact (see Box 5-8).

 

Box 5-8. "Why" does not come from numbers

In the REP project, Ghana, beneficiary contact monitoring has allowed more participation of project clients by gathering their ideas and opinions for review and action by management. But the focus is still on quantitative indicators. For example, after each training, evaluation forms are sent to participants to collect information and opinions on aspects such as: teaching style (methods, materials, teacher's attitude), perceived immediate and long-term contributions of the training, technical competency of the teacher and overall usefulness. Respondents only answer "yes" or "no". This does not encourage the explanation of why. Data interpretation consists only of adding up the number of responses, without understanding causes or seeking ideas through open-ended questions.

 

You are likely to need a variety of information to answer your performance questions (see Box 5-9), including:

  • indicators: simple quantitative indicators, complex or compound indicators, indices, qualitative indicators (see Table 5-7);
  • focused qualitative information;
  • open-ended qualitative information;
  • background information;
  • general observations of interactions with stakeholders.
 

Box 5-9. More than indicators in Uganda

As part of the DDSP programme in Uganda, the planning unit of each district's local government is responsible for monitoring implementation progress and assessing impact. To fulfil these responsibilities, it seeks many kinds of information:

  • Physical and financial progress information so decisions can be made (or revised) about spending and resource distribution, helping keep the project functioning and within its budget.
  • Information on the distribution of project benefits, e.g., some people may benefit more than others. This is useful to groups wanting to monitor project equity and accountability.
  • The target population's responses to the services and inputs being provided by the project. Such information can help ensure acceptability and usefulness of project activities.
  • Studies on the specific implementation problems a project faces so that the cause(s) can be identified and practical solutions recommended.
  • Information about the impact on the target population, especially on changes in quality of life and living standards (income, health, empowerment, relationship to environment, etc.).
  • Other evidence for compliance and accountability to meet donor requirements.
 

5.5.2 Different Kinds of Indicators

Indicators are the most common type of information associated with M&E. Table 5-7 describes different kinds of indicators.

Some indicators are simple and straightforward, particularly those that measure progress with activities, for example, "the number of kilometres of irrigation canal constructed". Others are composite: the human development index (HDI), used by UNDP to rank countries, compares people's wellbeing via a combination of several weighted indicators. Table 5-6 shows examples of indicators for four common categories.

Table 5-6. Examples of four common categories of indicators in rural development projects 4

Food Security:
- change in food production
- change in cultivated area
- change in yields of staple food
- change in consumption of staples
- change in prices for staple food
- change in access to markets
- change in on-farm food storage capacity
- change in chronic malnutrition among children
- change in rate of stunting (under 5)

Poverty:
- change in household real income
- change in access to off-farm income
- change in access to capital
- change in access to labour
- change in access to irrigation facilities
- change in availability of basic needs services
- change in access to safe water
- change in access to basic education
- change in access to basic health services

Empowerment of Grassroots Institutions:
- change in farmers' groups' participation in decision-making at project/local level
- change in autonomous farmers' group formation in project area
- change in grassroots ability to self-monitor and evaluate own progress
- change in capacity to market own products
- change in terms and conditions of marketing arrangements

Empowerment of Women:
- change in female enrolment in primary education
- change in number of women's groups formed in project area
- change in number of loans approved/disbursed for women's groups
- change in number of women's groups accessing second and third loans
- change in number of women members of local production/service associations
- change in women's decision-making capacity at household level
- change in women's participation in decision-making at project/local level

Table 5-7. Examples of different types of indicators

Simple quantitative indicators
  Examples: kilometres of roads built; person-days of training in X subject conducted; average yield from X crop in Y areas.
  Explanation: This type of indicator requires only one measurement of a straightforward unit.

Complex quantitative indicators
  Examples: number of months for which households experience food shortages.
  Explanation: Here, several different bits of information are involved: months, households and types of food shortages. Without specifying which types of households are experiencing what types of food shortages, and to what degree, the indicator will not be very useful. This makes it more complex than measuring one simple factor such as average crop yield.

Compound indicators
  Examples: number of effectively functioning water users' associations in the project area; number of village development plans completed that meet funding criteria.
  Explanation: These indicators contain a standard that needs defining and assessing. "Effectively functioning" needs to be defined, which means you need to assess the quality of each association. The same is true for the village plans: they need to be assessed against funding criteria. Only then can they be counted.

Indices
  Examples: index of irrigation system performance.
  Explanation: Indices combine a number of different indicators to enable comparison. The human development index is a well-known example. Working with indices is statistically complex, so they are not commonly used in project M&E.

Proxy indicators
  Examples: per cent of households with bicycles.
  Explanation: A proxy indicator is not precise but is used as an approximate, symbolic representation. This example could be a proxy for a certain level of wellbeing in an area where bicycles are expensive and difficult to buy.

Qualitative indicators (open-ended)
  Examples: perceptions of stakeholders about the overall performance of the project.
  Explanation: Open-ended qualitative information enables you to find out from people what is important to them, including things about which you may not have thought to ask.

Qualitative indicators (focused)
  Examples: perceptions of stakeholders about a very specific aspect of the project.
  Explanation: Focused qualitative information is important when you want specific information.

5.5.3 Formulating a Clear Indicator

To be useful, an indicator must be clear, since this is what makes it measurable. But most project staff know that finding a clear indicator is more difficult than it might first appear. What is needed to make an indicator clear?

By looking at the performance questions for the goal, purpose(s), outcomes and outputs, you can identify what type of data you need to collect to answer the questions. For example, if your output is "to rehabilitate degraded lands in the X area", then you might want an indicator such as "area of degraded land rehabilitated". But what do "degraded" and "rehabilitated" mean?

A clear indicator includes the following elements:

  • specified target group to which the indicator will be applied;
  • specific unit(s) of measurement to be used for the indicator;
  • specific timeframe over which it will be monitored;
  • reference to a baseline/benchmark for comparison;
  • defined qualities (if an adjective is needed; see below);
  • specific location in which the indicator will be applied.

Let's take an indicator proposed by an IFAD-supported project in China to assess impact at the purpose level of the project: "enterprise start-ups, in particular by women". This is too vague to be measurable. Specified precisely, it would become, for example, "the number of new formal and informal enterprises started each year by poor female-headed and male-headed households in province X as compared to the original number". Another example of a weak indicator comes from a project in Yemen: "number of fodder-processing equipment". To be able to monitor this, you need to be specific, for example, "the annual increase in the number of newly purchased fodder-processing machinery of type X since the beginning of the project per target group household".

You might be wondering if a qualitative indicator can be specific. By definition a qualitative indicator is not as precise as a quantitative indicator, since you are consciously leaving it open-ended. Section 5.5.4 discusses this in more detail.

Special attention must be paid to those indicators that include an adjective. Common examples include "successfully implemented", "adequately used", "effectively applied", "degraded land" or "people with too little food". Such descriptive terms can be interpreted in many ways and so can lead to confusion.

A common example occurs in projects that aim to establish micro-credit groups, community self-help groups or community plans. Because you want to know their quality, your indicator will probably include adjectives, such as "well-functioning micro-credit groups", "empowered self-help groups" or "participatory community plans". But what does a "participatory community plan" mean? Does it mean that 50% of adults were asked to contribute ideas, that 80% agreed with the final plan or that it has been approved by the local village council? You will need to define precisely any term that might have multiple meanings.

The more precise you can make each indicator, the less likely you are to have misunderstandings among the people involved when it comes to collecting and analysing the data. Seeking local indicators can also yield useful results (see Box 5-10).

 

Box 5-10. Examples of local poverty indicators

  • type and size of funerals (used in Ghana and Burkina Faso where spending on funerals is valued)
  • availability of new clothes for celebrations (many locations)
  • postponement of marriages due to lack of dowry (Somalia)
  • regular use of shoes (India)
  • eating of a third meal per day (various locations)
  • possibility of sleeping in a different room than the farm animals (India)
  • women who possess cooking utensils or plates for guests in adequate size and quantity (Mali, Sudan)
 

To get people thinking about possible indicators, particularly qualitative ones that might be difficult to formulate, here are some questions to inspire concrete answers:

  • If the project is headed for failure, how will you know? (Word these indicators of "failure" in the positive and you will know what you want to see change.)
  • What do you mean when you say "improved nutrition"? (or whatever objective/purpose/outcome you are discussing)
  • How do you notice when an impact has occurred?
  • Can you give a concrete example of how you observe an impact?

Some methods are also useful for identifying indicators, such as matrix scoring and impact flow diagrams (see Annex D).

5.5.4 Working with Qualitative Information and Indicators

The strong focus of M&E on quantitative data in the past is increasingly being balanced by attention to qualitative indicators, which people expect to provide more in-depth information. However, the two types are often interchangeable and compatible (see Box 5-11). For example, to assess the quality of a workshop on integrated pest management, you can gather the opinions of farmers who attended the course and list their views about strengths, weaknesses and areas for improvement. Alternatively, a more quantitative approach would be to ask the farmers to rate their satisfaction with the quality of the training on a scale of 0 to 5, and then count the number of farmers in each category. Clearly the rating will not give you ideas on what to improve, but it does give a picture of the degree of satisfaction.
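Turning such ratings into the counts described above needs only a simple tally. A minimal sketch in Python, with invented ratings on the assumed 0-to-5 scale, purely to make the counting step concrete:

```python
from collections import Counter

# Invented 0-5 satisfaction ratings from farmers attending an
# integrated pest management workshop (illustrative only).
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5]

counts = Counter(ratings)
for score in range(6):
    # Counter returns 0 for any score nobody chose.
    print(f"score {score}: {counts[score]} farmer(s)")

# Share of farmers rating the training 4 or 5.
satisfied = sum(1 for r in ratings if r >= 4) / len(ratings)
print(f"satisfied (4-5): {satisfied:.0%}")
```

The counts give the picture of satisfaction; the open-ended comments gathered alongside them supply the ideas for improvement.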

 

Box 5-11. Qualitative depth in quantitative indicators

A key distinction of the way Ugandan consultant Dan Kisauza uses the logframe is that he builds it around how project staff should implement the vision, not just what they should do. This requires focusing on qualitative, rather than quantitative, aspects of the project when developing indicators. It can be done by turning indicator development into the development of a statement about how staff intend to implement the activities to meet their objectives, incorporating a process dimension into the plans. For example, instead of a more common quantitative indicator for the wider goal of food security, such as "two new varieties of X developed", the new indicator would be "two new varieties developed in collaboration with farmers (with some evidence of farmer acceptance of the varieties)".

 

For qualitative indicators to offer rigorous insights into important questions, you need to be specific, just as with quantitative indicators. Specify a qualitative indicator by defining the following:

  • the topic of interest (based on your performance question);
  • the type of change you are trying to understand, including the unit of analysis (e.g., changes in a household, in a village, in a region);
  • the timeframe over which it will be monitored;
  • the location in which the indicator will be applied.

For example, "perceptions of 25% of participants attending each training programme on topic Y, about how it has assisted them to carry out their work responsibilities better" is much easier to implement than one that is commonly found, "skills of workshop participants". The rules for qualitative indicators are the same as for quantitative indicators they must be measurable, representative, reliable and feasible.

For qualitative indicators, "measurable" refers to the ability to find data on the indicator rather than being able to count something. For example, in Zimbabwe, a project explicitly stated that it would "produce major unquantifiable benefits to the inhabitants of the project area, and to the nation". Examples given included "increased capacity of inhabitants to command the assistance of agricultural extension and research workers" and "development of a policy and development framework for public investment in drier areas".

You might well have a set of qualitative aspects of development that cannot be moulded into measurable indicators (see Box 5-12). Examples include "social mobilisation process", "collective management" or "linkages with service providers". In such cases, case studies that describe what is happening in a community may help you understand such processes (see Box 5-13).

 

Box 5-12. Measuring the immeasurable

In Bangladesh, IFAD-supported projects work with community-based organisations (CBOs). The implementing partners are NGOs, which need to monitor the growth of the CBOs. CBO growth can be monitored with indicators such as: existence of a needs assessment conducted by the CBO itself, democratically elected leaders and CBO-initiated resource mobilisation. Such indicators can be discussed in a workshop setting with the CBOs, where participants can also talk about what corrective actions are needed, and by whom, if CBOs experience constraints.

In a rural poverty programme in the USA 5, "community revitalisation" was a prime goal. The chosen indicators of success were "attitudes of people (community spirit), voting in elections, trash collection, clean-up of dilapidated structures, home ownership and community capacity measured by the number of empowerment community organisations with networks formed and the ability to access resources and develop leaders".

 

 

Box 5-13. Focused qualitative studies to deal with complex aspects of change

In the WUPAP programme in Nepal, the performance of the programme's approach at the village level will be assessed as follows:

  1. Measure the degree to which participatory approaches have been used in the field.
  2. Document the community's response to the programme as a whole.
  3. Measure changes in the community's vision and in the role of the poor, women and children, at present and in the future.
  4. Document changes in attitudes and approaches of service providers.
  5. Assess the community's willingness and capacity to take on more responsibility.
  6. Examine the benefits of programme activities and their distribution among different groups.
  7. Record early signs of impact on livelihoods and improvements in material well-being.
  8. Suggest changes in the social mobilisation process, the structure of the CBOs and the terms of partnership.

These case studies are to be undertaken in relatively mature CBOs in different districts, by examining CBO records, plans and progress reports and by using participatory techniques. Programme management is responsible for presenting findings and recommendations based on the case studies at the annual stakeholder workshops.

 

A fundamental part of many projects, and one that relates strongly to qualitative indicators, is the institutional development of community-based organisations (CBOs). Many IFAD-supported projects see the creation and strengthening of CBOs as the key to sustained impacts. Many projects, therefore, need to assess issues such as group dynamics, equality and transparency within the group, the group's learning orientation, etc.

An increasingly common approach to assessing the quality of CBOs is the use of a grading system. This combines a qualitative assessment of progress in institutional development with a quantitative score. Box 5-14 shows several applications of this approach.

 

Box 5-14. Using grades for organisational development in Africa, Asia and Latin America

In Ghana, the LACOSREP project has a vision of "socially cohesive and democratically managed water users' associations" (WUAs). Each WUA is graded on a scale of 0 to 5 on various aspects of its organisational performance, including the "existence and adequacy of by-laws", "level of democracy in electing executive members" and "decision-making by consensus". Financial mobilisation is also tracked, using the same grading system with other indicators, such as "executives' ability to collect water levies", "amount mobilised versus expected" and "judicious use of funds by the WUA".

For a study of CBOs in Bangladesh 6, each CBO was given a grade on the basis of eight indicators:

  • need assessment/action choice: whether the community initiates need assessment;
  • organisation: whether organisations are externally imposed or already existing;
  • leadership: whether organisational support fully reflects community interests at large;
  • training: whether local community workers are supported by pre-service and in-service training;
  • resource mobilisation: whether communities organise fund-raising;
  • management: whether communities are responsible for management and supervision;
  • orientation of actions: whether communities have impact-oriented targets;
  • monitoring and evaluation: whether communities receive monitoring feedback and are aware of their problems.

In the TNWDP project in India, staff use a system of grading self-help groups (SHGs) to assess their credit rating. The grading of each SHG is done with a member of another nearby SHG and an NGO fieldworker. The indicators mix qualitative and quantitative aspects, such as "80% of members are aware of rules and strictly adhere to them". While this deals with a quantity (80%), it focuses on members' awareness of the group's rules and regulations (qualitative). The rating outcome is discussed with group members to analyse problems and so increase the group's chances of success. This kind of monitoring helps to trigger discussion about problems, find solutions and sustain the development impact. Furthermore, groups are given an overall grade (A, B, etc.). The desire for upgrading provides a powerful incentive for improved performance. The same grading system has now been approved for the implementing NGOs themselves.

An IFAD-supported project in Mexico used 14 indicators to track the strengthening of target group organisations. Organisations are ranked based on their total point score: between 14 and 22 points, "in development"; between 23 and 32 points, "strengthening"; and between 33 and 42 points, "consolidating". Based on the organisation's rank, decisions are made about actions to undertake or reinforce, such as training on particular themes. A couple of examples of the indicators and their related point scores:

  • sale of produce: individual = 1 point, in groups = 2 points, organised and planned = 3 points;
  • post-harvest activities: no management = 1 point, selection and traditional packaging = 2 points, selection and adequate packaging = 3 points;
  • stability of organisation: 15% or more lapsing members = 1 point, 5 to 14% lapsing = 2 points, less than 5% lapsing = 3 points.
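The point-scoring scheme used by the Mexican project translates into a simple calculation. Here is a minimal sketch in Python, assuming (hypothetically) that each of the 14 indicators is scored from 1 to 3 and using the rank thresholds quoted above; function and variable names are illustrative, not from the project:

```python
def classify_organisation(scores):
    """Rank an organisation from fourteen 1-3 indicator scores.

    Thresholds follow the Mexican example: 14-22 points 'in development',
    23-32 'strengthening', 33-42 'consolidating'. Illustrative only.
    """
    assert len(scores) == 14 and all(1 <= s <= 3 for s in scores)
    total = sum(scores)
    if total <= 22:
        return total, "in development"
    if total <= 32:
        return total, "strengthening"
    return total, "consolidating"

# Hypothetical scores for one producer organisation.
example_scores = [2, 3, 1, 2, 2, 3, 2, 1, 2, 2, 3, 2, 2, 2]
print(classify_organisation(example_scores))  # (29, 'strengthening')
```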
 

5.5.5 Checking the Quality of Indicators

Being clear about an indicator is what makes it measurable. But other factors determine whether you can use it. The need for a manageable, and therefore small, set of indicators makes it especially important to ensure that each one is of high quality. Review each potential indicator to ensure that it is not only clearly defined but also representative, reliable and feasible.

If an indicator fails on any of these counts (see Table 5-8), then it will not help you answer your performance question and you will need to adjust it or find a substitute.

An indicator is fully representative if it covers the most important aspect(s) of the objective you want to track. As this will be hard to do for higher-level objectives, you will probably need several indicators to make sure the set of indicators is representative of the type of change you want to understand.

An indicator is more likely to be reliable if it is accurate, measured in a standardised way with sound and consistent sampling procedures, and directly reflects the objective concerned. It should also be well-founded, with a well-established or probable relationship to the objective. For example, stunting (low height-for-age) in children is a well-founded indicator of lack of food, since many studies have demonstrated the relationship.

An indicator is feasible if it requires data that can be obtained at reasonable cost and effort. You will need to consider both financial and technical feasibility:

  1. Use your budget limit to decide what you "need to know", not how to include all that is "nice to know". Most projects start by defining what they want to know and only later discover that it takes too much effort and money to collect the data. Instead, budget for M&E during project planning and assess how much monitoring is possible with the available budget. Ask what and how much information can realistically be generated given the resources you are prepared to allocate to the task. Also consider how easy or difficult it is to get hold of the data. Be aware that some indicators may appear to involve little additional financial cost but will cost respondents' time to answer and staff time for data entry, processing and analysis.
  2. Confirm that you have the human capacity to assess the indicators. Project M&E staff in Morocco had always simply recorded progress against the numerical targets of the project. They became aware of their limitations when outlining how to assess the wider project impact of improved living conditions. For example, they identified the need to analyse whether planting along contour bunds would increase dry matter production for cattle, whether this in turn would lead to increased cattle weight and whether this would then increase household income. Further development of such performance monitoring was restricted both by the lack of access to resource persons with the skills to carry out these kinds of analysis and by the lack of support for M&E from the project itself.
  3. Avoid duplication. Find out which organisations already have information you need. Some statistical data are readily available from national institutions (national statistics bureau, private companies, census bureaux, statistical office of the ministry of agriculture, banks, etc.). This can be vital background information to explain progress. Systems for tapping such "secondary" data should be prepared at start-up. For example, every year in Indonesia, the bureau of statistics conducts household surveys (200,000 households) and an agricultural census is undertaken every five years. One project manager in Indonesia said, "If we want to know if our livestock project is making progress, we should get data from the sub-district health posts on under-5 mortality and illnesses. Also figures on the total savings in credit schemes and in banks provide very accurate information on progress with farmer groups."

Table 5-8. Deciding if indicators are of good enough quality 7

Indicator quality: The indicator is measurable, representative, reliable and feasible.
What to do: Fine, use it.

Indicator quality: The indicator is measurable, reliable and feasible, but not representative enough.
What to do: Use it and try to find additional types of information or indicators until you feel the performance question can be answered.

Indicator quality: The indicator is measurable, representative and feasible, but not very reliable.
What to do: Is it reliable enough to use if everyone is made aware of its flaws? If so, use it and try to find additional information that together could produce a more reliable picture. If not, drop it and try to find a substitute.

Indicator quality: The indicator is measurable, representative and reliable, but not feasible.
What to do: Can another indicator or set of indicators represent the objective reasonably? If so, drop the one first suggested. If not, re-examine the indicator's feasibility; there may be a more creative and cost-effective way of finding the required data.

Indicator quality: The indicator is measurable and feasible, but not representative enough and not very reliable.
What to do: Is it reliable enough to use if everyone is made aware of its flaws? If so, use it and try to find additional information to help produce a more reliable picture. If not, drop it and try to find a substitute. In any case, since the indicator has two significant problems, be more inclined to drop it than to keep it.

Indicator quality: The indicator is feasible, but not measurable, not representative or not reliable.
What to do: Forget about it.
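The decision rules of Table 5-8 can also be read as a small piece of branching logic. The sketch below, in Python, is a rough paraphrase of the table using hypothetical boolean flags for the four quality criteria; it is an illustration, not a substitute for the judgement the table calls for:

```python
def review_indicator(measurable, representative, reliable, feasible):
    """Rough advice paraphrasing the decision rules of Table 5-8."""
    if not measurable:
        return "forget about it"
    if not feasible:
        return ("see whether other indicators can represent the objective; "
                "otherwise re-examine feasibility for a cheaper data source")
    if representative and reliable:
        return "fine, use it"
    if not representative and reliable:
        return "use it, and add information or indicators until the question can be answered"
    if representative and not reliable:
        return "use it only if everyone knows its flaws; otherwise find a substitute"
    # Neither representative nor reliable: two significant problems.
    return "be more inclined to drop it than to keep it"

print(review_indicator(True, True, False, True))
```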

5.5.6 Participatory (Impact) Indicator Identification

Indicator identification can be pursued with different methods and with varying degrees of stakeholder participation. Particularly when assessing impacts, some projects ask primary stakeholders to define what they see as impact and to use their indicators to monitor and evaluate. The process for participatory indicator identification is very similar to overall indicator identification.

  1. Decide which aspects of M&E will be participatory: impacts or implementation aspects (e.g., activities, quality of service providers).
  2. Reach agreement on who should be involved in determining indicators.
  3. Create a good event (time, location, facilities, facilitation) for all groups to make a meaningful contribution.
  4. If there is more than one stakeholder group, you have two options.

    Option 1. Draft the indicators with each group. You should end up with an initial list of possible indicators, notes on any missing information about them and the rationale for each indicator. Share the lists of indicators with all the groups. Then organise an event with group representatives to select the most appropriate indicators, as there are usually too many; decide which ones best answer the performance questions in which you are all interested. Begin by developing criteria for selecting indicators. You can use matrix scoring to facilitate the prioritisation (see Annex D).

    Option 2. The project team and implementing partners can draft an initial list, which is then reworked with primary stakeholders. Follow a similar process of prioritising the indicators to monitor.

  5. Define units of analysis (e.g., credit groups, household, community organisations) and the sampling procedure.
  6. Decide on data collection methods (see Section 6). This might require a revision of the indicators, if the methods prove inadequate.
  7. Design data processing formats and decide on the analysis process (see Sections 6 and 8).
  8. Pre-test the indicators, methods and data analysis. Make sure they are adequate and manageable and will give you the information you need to answer the performance questions. Don't skip this step! It can save much wasted effort and resources.

Involving more stakeholder groups in identifying indicators means negotiating what "success" means for each group, and therefore takes more time. The negotiation process is critical, as different views and priorities must be reduced to a limited number of indicators. Make sure primary stakeholder participation is meaningful and not token.

Negotiations can reinforce a shared vision of development, particularly when working with groups that differ strongly. This can be an important benefit of participatory development of the M&E system.

Remember that you will need to keep updating indicators as peoples development visions or policies change and information needs shift.

A good example of the link between ownership of indicators and empowerment comes from a large forestry programme in Nepal. The implementing partners worked with forestry user groups (FUGs), using parallel sets of indicators: programme staff identified one set and the groups themselves identified the other. In one area, a third set of indicators was identified by local women, who had additional, specific concerns that had not emerged in the FUGs' initial indicator set.

 

Box 5-15. Participatory indicator identification in Mexico 8

In a farmer-to-farmer extension programme in Mexico, the project team followed these steps to develop indicators:

  • Define broad indicator areas (based on higher-level objectives).
  • Select currently available indicators for these areas, according to existing programme use and literature.
  • Define stakeholder groups.
  • Select stakeholder groups to be consulted.
  • Develop indicators with different stakeholder groups.
  • Test these across different stakeholder groups to assess their significance to others and effectiveness at indicating change.
  • Agree on a priority list among indicator options.
  • Carry out fieldwork to gather data for the indicators.
  • Create lists of indicators for full evaluation use and of indicators with specific importance for different actors (with a limit, e.g., three key indicators per stakeholder group).

The programme team identified the range of different institutional and individual actors who affect and are affected by the project. They then prioritised three stakeholder groups to be consulted for indicator development in this trial phase: farmers (participating and non-participating), farmer-extension agents (and their wives) and funding agencies.

The research team initially proposed seven indicator areas. These were eventually narrowed down to four, based on the groups objectives: (1) changes to local, regional, political and sectoral practice and policy (e.g., level of dependence on external resources, involvement of local people, growth of local institutions and changes in policy and practice); (2) dissemination impacts: extension to other localities/regions (e.g., horizontal and vertical linkages with other projects, agencies and NGOs beyond the region); (3) changes to the roles of individuals in the project (primarily the coordinator, outside advisors, immediate project participants and family of NGO staff); and (4) changes in the institutional structure (within and beyond the actual project).

 

When you choose participatory indicator identification and your project follows an overall participatory approach, you will need to be extra flexible. Such projects commonly start tentatively with small interventions, based on participatory appraisals, or with capacity-building activities. Only after discussions have led to consensus about which activities will be implemented can you start precise indicator identification. During the course of such projects, new partners often join, new insights are generated and new development goals emerge. Each change brings the need to review existing indicators (see Box 5-16), as the following projects found:

  • In Laos, farmers shifted from wanting to monitor negative criteria, which reflected their apprehension about the new technology being introduced, to positive ones once the technologys beneficial effects emerged.
  • In an NGO-managed project in northeast Brazil, only 17 of the 22 initially selected indicators were monitored, as some indicators and methods proved too difficult in practice. For example, the indicator "production from banana stands where weevil control was being practised as compared to control plots with no weevil control" proved impossible: comparing production from different plots with many uncontrollable variables would have made the data unreliable.
  • In Nepal, shared understanding of key areas of work, such as "institutional strengthening" and "timber yield regulation", was weak, and the indicators were of low quality. As understanding grew, the indicators became more precise.
 

Box 5-16. Trading off participation in M&E for stable indicators?

For those interested in seeing trends for fixed indicators, primary stakeholder participation may pose a problem. Any change to an indicator means reducing the possibility of producing a time series of data. Yet if a monitoring process is going to be participatory, this means including new partners as the project evolves. A participatory M&E system has to adapt to changing information needs, to the changing skills of those involved and to changing levels of participation as new partners join and others leave.

 


5.6 Making Comparisons and the Role of Baselines

5.6.1 Having a Basis for Comparison

Monitoring involves repeated assessments of a situation over time. An initial basis for comparison helps you assess what has changed over a period and whether the change is a result of the project's presence. So you must have information about the starting situation before any intervention has taken place. This is what is commonly known as the "baseline": the line of base conditions against which later comparisons are made.

A baseline study can also help in redefining the project at start-up. The PROCHALATE project in El Salvador undertook a baseline study early on, which allowed the team to identify significant differences between the diagnosis information of the appraisal report and the actual situation. This information was used to adjust the projects goals.

Most projects have great difficulty with baselines, and few have one that is useful for judging change. Common problems are that baseline studies are made late or not at all; are excessively detailed, or too general and irrelevant; have a sample that is too large for the analytical capacity of the project or its implementing partners; do not include a control group; or contain data on farmers who are not within the primary target group. Often baselines cannot fulfil their prime purpose of facilitating evaluations, and so are rarely used during impact assessments (see Box 5-17).

Even if you do not use a baseline, you will need to find some form of comparison to know what the project has achieved.

 

Box 5-17. Overwhelming baselines in Bangladesh

The ADIP project in Bangladesh gathered an impressive amount of baseline data: household information for over 1,900 households, as well as district and municipal information profiles. Implementing NGOs created socio-economic profiles of groups, with data on each beneficiary group at the moment of group formation, to confirm the eligibility of the selected persons as marginal and landless or as small farmer group members. These data were kept by the NGOs. However, the resources spent on collecting the information were not justified, as it was hardly used. The baseline data were only partly useful because:

  • the data were actually collected before the selection of groups;
  • the samples did not systematically include farmers actually participating in the project;
  • the data did not refer to specific project-participant groups and households.

These factors also made it impossible to use the data to compose a control group retrospectively. If these surveys, and future ones, are to be useful for monitoring impacts, then a sampling procedure that includes farmers "with project participation" and farmers "without project participation" is necessary. The project can also build on recent participatory impact monitoring by establishing a small sample of marginal, landless or small farmers, male and female, and including a control group, for continued impact assessment with annual surveys.

 

In participatory projects, baseline studies need extra attention. Such projects may start tentatively and with smaller and more diverse interventions. Given the uncertainty about the final orientation of such projects at the outset, it is difficult to determine early on precisely what information to collect for the baseline. The idea of a "rolling baseline" might be useful (see below). Other organisations undertake open-ended participatory appraisals as the beginning of a baseline, following up with focused surveys once it is clear what additional data are needed.

5.6.2 Options for Making a Comparative Analysis Possible

Proving a project impact requires comparing changes that result from the project. You have three options for this:

  1. Compare the difference between "before" the project started and "after" it started.
  2. Track changes "with" and "without" a project presence. This means comparing changes inside the project area with those in similar locations outside the project's sphere of influence.
  3. Compare the difference between similar groups one that has been working with the project and a so-called "control group" that is not influenced by the project.

Each option has advantages and disadvantages (see Table 5-9). All three options can be undertaken with or without the use of pre-determined indicators, and in more qualitative or quantitative ways.

In the TNWP project in India, project management used a control group (see option 3 in Table 5-9). Baseline surveys were carried out among the potential target group and a control group. Initial identification of the beneficiaries for the baseline survey was made by the implementing partners, local NGOs, followed by a survey of these beneficiaries for verification. The baseline survey among target and control groups was supplemented by economic data collected on a sample basis in project villages covering all three districts during the first three years of implementation.

Table 5-9. Comparing the different options for comparison

Before/After project
Basis of comparison: changes over time in the project area.
Advantage:
- offers clear moments for data collection.
Disadvantages:
- requires understanding which other factors influenced the outcome;
- it may be difficult to explain the changes observed, due to other influencing factors.

With/Without project
Basis of comparison: changes between one geographic area where the project has been active and another where it has not.
Advantage:
- can make it easier to explain causal factors of the change.
Disadvantage:
- it might be difficult to find comparable areas.

Control/Target group
Basis of comparison: changes among groups of people who have been targeted by the project and similar groups of people who have not.
Advantages:
- focuses well on the impact on the project's target group;
- can help explain causal factors of the change;
- is in the same area, so avoids location-related variation.
Disadvantages:
- poses the ethical problem of knowingly excluding certain groups from development opportunities while using them to measure change;
- ensuring the two groups are comparable is difficult;
- changing the project midway will distort findings.
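To make the arithmetic behind these comparisons concrete, here is a minimal sketch in Python with invented household income figures. It contrasts a simple before/after difference (option 1) with a comparison against a control group (option 3), which nets out changes that would have happened anyway:

```python
# Invented average household incomes (currency units per year) from a
# baseline survey and a follow-up survey (illustrative only).
target_before, target_after = 420.0, 560.0    # project participants
control_before, control_after = 415.0, 450.0  # comparable non-participants

# Option 1: before/after change in the project area alone.
before_after_change = target_after - target_before

# Option 3: subtract the control group's change to strip out influences
# that affected everyone, such as weather or market prices.
net_change = before_after_change - (control_after - control_before)

print(f"before/after change: {before_after_change:+.0f}")
print(f"net change versus control group: {net_change:+.0f}")
```

With these invented figures, the before/after change (+140) overstates what the project achieved if incomes rose everywhere; the net change against the control group (+105) is a fairer, though still imperfect, estimate.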

5.6.3 Developing and Using your Baseline

Given that it is possible to collect all kinds of information about a situation, and that projects are not always clear about their detailed activities from the outset, how much time and effort should you invest in establishing a baseline? The M&E matrix (see Section 5.3) includes a specific step that asks you to decide whether a given information need requires a baseline. Not all information does.

The most streamlined baseline studies are objective-driven: they measure only the status of focal aspects of the project. This means they are best designed after the project logframe matrix has been revised. But, with a clear appraisal report, a project can start early on a baseline (see Box 5-18). Besides information related to your objective hierarchy, you will always need additional information about the context in order to explain the changes you observe. If you have identified qualitative and quantitative information to answer the performance questions, then your baseline survey will include both types of information as well.

 

Box 5-18. Including qualitative information to balance numbers in Uganda

The baseline study for the DDSP programme in Uganda was completed before the start-up workshop and based on information needs identified in the appraisal report. The study was a quantitative survey complemented with a qualitative study in some of the same villages. The qualitative part of the baseline aimed to provide more detailed explanations for the results coming out of the quantitative survey to avoid misinterpretation of numbers due to inadequate understanding of village contexts. So the baseline survey provided the basis for good-quality impact assessment.

The baseline study was presented to key district stakeholders before and during the start-up workshop to seek additional insights and to decide how to incorporate the baseline into ongoing M&E work. There was a recommendation that some of the sites (where both qualitative and quantitative work had been undertaken) could continue as "sentinel sites" for the programme. In the qualitative survey sites, all the information documented by the villages and parishes was left with the local authorities as a basis for their own M&E baseline.

 

Keep in mind the following when developing your baseline:

  1. Only collect what you are going to use. So you need to know what you will use. As a rule of thumb, only collect baseline information that relates directly to the performance questions and indicators that you have identified. Do not spend time collecting other information.
  2. Plan baselines like you would any survey. As with any data collection and analysis process, you will need to plan for the following once you are clear what information you need to collect:
    • Find out what existing information you can use and check its quality.
    • Identify where you will find the information.
    • Decide on methods (see Annex D).
    • Decide what resources are needed.
    • Agree on responsibilities for data collection, analysis and use and the timing of each of these moments.
    • Agree on when and how the baseline will be revised during the project life.
  3. Keep it feasible. A baseline will never be perfect; it will always be a case of "good enough". Better a small baseline that is used than an extensive one that collects dust on a shelf. The SDP MA project in Tanzania had not budgeted adequately for the follow-up to the baseline study. The M&E officer tried to follow up with a questionnaire but lacked the funds for a field-based survey. He sent the questionnaires out but, in the end, could not collect or analyse them for lack of money.
  4. Be creative with methods. The methods for collecting monitoring data are the same as for baseline studies. In fact, they should be the same to make the data comparable. A standard method is a quantitative survey or PRA (participatory rural appraisal), but videos and photographs can also be used (see Box 5-19). In Venezuela, the PRODECOP project developed a participatory video baseline. Every time work started in a new community, the project team worked with local residents to create a video of their local livelihoods and living standards. Three years later, videos will be made of the same communities to show what has improved as a result of the project intervention. In China, the World Food Programme is using "before" and "after" photographs of housing to assess the impact of their food-for-work programmes among participants. See Section 6 and Annex D for more ideas on methods.
  5. Don't forget poverty and gender issues in the baseline study. The PADEMER project in Colombia undertook 302 surveys via implementing partners. The baseline study included a solid gender focus. It was not limited to sex-disaggregated basic information but also analysed differences between men and women in terms of, for example, the working day, time dedicated to rural microenterprises and differences in income and employment.
 

Box 5-19. Visual appraisals for comparison 9

The Aga Khan Rural Support Programme (AKRSP) is a Pakistani foundation that supports local village groups in using their natural resources in a sustainable and equitable manner. AKRSP helps these groups carry out their own appraisals and plan their development priorities. As part of the pre-project appraisal, local people prepare detailed maps of their village that incorporate their analysis of available resources, how these are used, their ownership, and the problems and constraints involved. These detailed maps represent an inventory of resource-related issues and are used as the basis for planning village projects. All the proposed activities are depicted on the maps, including soil and water conservation, minor irrigation, forest planting and protection, etc. The maps are kept in the villages and displayed in a convenient location accessible to all group members. During meetings and project reviews, the maps are used to monitor project activities and resolve problems.

 

The most important aspect of a baseline is using it. Otherwise it is a waste of time. To use baselines actively:

  • know when you need to conduct the next round of data collection and who is responsible for it;
  • budget adequately for all subsequent rounds of data collection you require to make regular comparisons;
  • when a second dataset is available, plan a moment with those for whom the data are relevant to compare the information, analyse the findings and agree on corrective actions, if necessary.

5.6.4 Alternatives to Standard Baseline Studies

Many projects find baselines difficult to undertake well and on time. Not surprisingly, the use of baselines is being increasingly questioned. A few alternatives to the standard survey approach to baselines are emerging.

  1. First measurements as a starting point. One alternative is to indicate whether there is an improvement or a decline compared with the first measurement, or compared with a desired condition (your target). In Brazil, an NGO-managed project is using its first year of monitoring data as its "baseline" because it simply cannot afford more detailed surveys.
  2. Rolling baseline of profiles. This involves collecting baseline information to develop profiles, not all at once but on a rolling basis as village organisations are formed, credit groups start or communities are taken up in the intervention strategy. The "rolling baseline" represents a middle ground between undertaking a comprehensive baseline and a totally retrospective impact-assessment approach. Note that information from this type of baseline may need to be complemented by general context information.
  3. Optimal use of existing documentation. Others solve the baseline problem by working up a description of the original situation from existing documentation, without field data collection (see Box 5-20).
 

Box 5-20. Unconventional approach to establishing a baseline for pastoral poverty reduction in Kenya10

In north-east Kenya, the Wajir Pastoral Development Project began with a series of intensive participatory rural appraisal (PRA) exercises with communities to determine the project goal and strategy. The project originally intended to collect baseline data against which all aspects of the project could then be assessed. But on reflection, management had several concerns: a conventional baseline would use pastoralists as information sources rather than as stakeholders in the project, be biased towards quantitative data, fail to capture qualitative aspects and potentially undermine the participatory nature of the project. So instead, the project did the following:

  • Integrated the initial PRA findings, and those from subsequent PRA exercises, into a "background document" that included secondary data to place these perspectives in a broader context;
  • With communities, developed several participatory systems to monitor different aspects of the project continuously;
  • Conducted a participatory impact-assessment of key indicators identified by pastoralists themselves;
  • Regularly monitored a sample of randomly selected households over a long period to understand changes in household situations and what could be attributed to project activities.

Although the project is not using a baseline study in the conventional sense, its M&E system includes enough different ways of understanding development changes and the extent to which they can be attributed to the project. Furthermore, the process reinforced a sense of joint responsibility between the implementing organisation (OXFAM) and the pastoral associations for achieving the project objectives.

 


5.7 Updating Your Information Needs and Indicators

As with all aspects of the M&E system, you will need to update your information needs and indicators, simply because a project evolves. The automated monitoring system of the Cuchumatanes project in Guatemala has been updated several times by the M&E unit to reflect new information needs and new activities, captured by new indicators. In Bangladesh, when reviewing and updating their M&E system, management of the ADIP project identified the need for qualitative indicators to measure change in credit groups. The original indicators, such as "number of groups formed", did not capture the maturity of the groups, which was indispensable information for identifying how the project could support them. Qualitative indicators needed to be identified with due consideration for the local context, and these indicators were developed with the stakeholders.

Reassess indicators by simply asking, "Who is using (or going to use) the information?" If no one is using it, drop or change the indicator. If you notice important gaps, fill them by identifying what information you now need.

Updating is also necessary in the more participatory forms of M&E, since everyone is just beginning to learn about M&E as they implement it. At the beginning, few will know what makes a good indicator, what methods exist and are best, how often data should be collected and what kind of information is actually going to be useful.

In participatory projects, indicators will also change due to local differences and as groups evolve. An irrigation project in Zimbabwe works with a core set of indicators for all 36 irrigation schemes. This is supplemented by additional and more specific indicators for individual schemes, according to the judgement of the farmers and to the pace of development of the scheme.

By reviewing and adjusting your list of information needs and indicators, you will develop an increasingly relevant and viable M&E system.


Further Reading

Germann, D., E. Gohl and B. Schwarz. 1996. Participative Impact Monitoring. Stuttgart: FAKT. Set of 4 booklets available in English (limited number of free copies for the South). Booklets 1 and 2 also available in French and Portuguese. Order via: fakt_ger@csi.com or FAKT GmbH, Gänsheidestr. 43, D-70184 Stuttgart, Germany.

Margoluis, R. and Salafsky, N. 1998. Measures of Success: Designing, Managing, and Monitoring Conservation and Development Projects. Washington, DC: Island Press.

MacGillivray, A., C. Weston and C. Unsworth. 1998. Communities Count! A Step-by-Step Guide to Community Sustainability Indicators. London: NEF. (Available online; search for the term "communities count" to find the download link.)

Oakley, P., B. Pratt and A. Clayton. 1998. Outcomes and Impact: Evaluating Change in Social Development. Oxford: INTRAC. Order via: publications@intrac.org or INTRAC, P.O. Box 563, Oxford, OX2 6RZ, United Kingdom.

Website on indicators: this detailed website focuses on indicators applicable in the North and on their use for assessing sustainability.

1/ Berdegué, J. 2001. Cooperating to Compete. Associative Peasant Business Firms in Chile. Published PhD thesis. Wageningen: Wageningen University and Research Centre.

2/ Blauert, J. and Quintanar, E. 2000. "Seeking Local Indicators: Participatory Stakeholder Evaluation of Farmer-to-Farmer Projects, Mexico". In: M. Estrella (ed.). Learning from Change: Issues and Experiences in Participatory Monitoring and Evaluation. London: Intermediate Technology Publications. pp 32-49.

3/ See www.iaf.org

4/ IFAD. 2000. IFAD's Revised Operating Model for Impact Management Process. Rome: IFAD.

5/ Community Partnership Center. 1998. Findings and Recommendations of the Community Partnership Center EZ/EC Learning Initiative. Knoxville: University of Tennessee.

6/ Shrimpton, R. 1995. Community Participation in Food and Nutrition Programs: An Analysis of Recent Government Experiences. In: P. Pinstrup-Andersen (ed.). Child Growth. Ithaca, N.Y.: Cornell University Press.

7/ Adapted from: IUCN. 2001. A Resource Kit on Sustainability Assessment. Gland: IUCN.

8/ Blauert and Quintanar, see footnote 2.

9/ Shah, P., G. Bharadwaj and R. Ambastha. 1991. Participatory Impact Monitoring of a Soil and Water Conservation Programme by Farmers, Extension Volunteers and AKRSP in Gujarat. RRA Notes 13: 86-88.

10/ Action Aid. 2000. ALPS: Accountability, Learning and Planning System. London: Action Aid.

