We are developing evaluation toolkits to help those working in the field of cohesion and integration to evaluate their work better. Meanwhile, here are some tips on evaluation and links to existing resources.
Evaluation challenges
Those of us working in the field of cohesion and integration often face challenges when it comes to measuring the impact of our work. Organisations are often small and face financial constraints and other pressures, with limited resources to spend on evaluation, and it can be difficult to choose to invest in this area when there are so many other pressing needs. Cohesion and integration projects often seek to bring about subtle changes that are difficult to quantify – such as shifts in people’s attitudes and behaviour, or helping people to feel more confident in their surroundings – and this change can happen over long periods in ways that are challenging to measure. Sometimes the impact of a project is only felt when a particular event happens: in a crisis or emergency, for example, a trusted network built to support cohesion can be mobilised to calm local tensions and quickly connect community activists and leaders.
Pressure from funders and others to prove the impact of our work can feel like a burden. Certainly, more needs to be done to raise awareness of the challenges and complexity that those of us working in this field face when it comes to measuring the outcomes of our work.
Why evaluation?
Despite the challenges, evaluation can be an extremely fruitful and rewarding process. If done well, it can both fulfil the requirements of funders and provide opportunities for us to reflect on our work and improve the quality, creativity and impact of what we do. All of us working in this field need to do more to measure our impact, so that others can learn from our work, and, crucially, so that we can make the case for investment in this long-term ‘social glue’ work. We hope that, through the ongoing work of Belong, we will be able to support you with this process. In the long term we are seeking to build a strong evidence base so that we can strengthen the support that organisations working in this important area receive. In the meantime we have provided some basic guidance below.
Some common questions
The cornerstone of any evaluation is collecting information so that we can understand how well we have achieved our aims and objectives and find out what can be improved. This kind of information is referred to as “data”. There are two types of data:
- quantitative data is numerical information, such as how many people took part in our activity and how many activities took place
- qualitative data is detailed information about how people taking part experienced and benefited from the activity.
Whether you use quantitative methods, qualitative methods, or both in your evaluation, you need to have a system in place to gather this information. Without a data collection system, you can’t do an evaluation.
Keep it simple!
The results you get will only be as good as the processes you put in place to capture the data. It’s easy to get creative about the types of questions you ask when seeking to measure impact, but on the whole it’s better to keep things simple. The simpler and more straightforward your approach, the more likely you are to get good information that you can use to report on evidence of impact and to improve what you do.
Think about how much data you need to collect. At some point you will have to sit down and make sense of all the information you have collected. For example, asking 12 people whether they want to say anything else in an evaluation form may well give you some manageable data. However, asking 144 people the same question may give you so much information, on such a wide spectrum, that it becomes unmanageable, unless you have access to a team of researchers.
You will need to decide at what point to capture your information. Some quantitative information can be captured on a routine basis as you go about your activity: for example, how many sessions or activities took place, how many people attended, and details about the people attending (age, gender, etc.). Qualitative information – where people tell you what they thought of your activity or session and how it impacted them – is usually captured at the end of an activity or series of activities, and it encourages people to look back and reflect. This information can be obtained through an evaluation form that participants fill in themselves. This form usually comprises tick-box responses to questions and a small number of open-ended questions.
Another effective way to measure impact is to administer a ‘baseline questionnaire’ at the beginning and end of your project. Through a baseline questionnaire, participants are asked a set of questions that capture information – such as how they feel about a certain issue, or their level of confidence or skill in a certain area – before they take part in the project. They are then asked the same set of questions at the end of the project to evaluate distance travelled. We have included an example of a good baseline questionnaire in our resource centre.
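If you record answers on a numeric scale, the ‘distance travelled’ comparison can even be done with a very simple script. This is an illustrative sketch only – the participants, the question and the scores are all invented for the example:

```python
# Hypothetical example: each participant rates "I feel confident taking part
# in community activities" on a 1-5 scale at the start and end of a project.
# All names and scores below are invented for illustration.

baseline = {"participant_1": 2, "participant_2": 3, "participant_3": 1}
endline = {"participant_1": 4, "participant_2": 3, "participant_3": 3}

def distance_travelled(before, after):
    """Return each participant's change in score, plus the average change."""
    changes = {name: after[name] - before[name] for name in before}
    average = sum(changes.values()) / len(changes)
    return changes, average

changes, average = distance_travelled(baseline, endline)
print(changes)   # per-participant change in score
print(average)   # average change across the group
```

The same comparison can of course be done by hand or in a spreadsheet; the point is simply that asking identical questions before and after lets you subtract one score from the other.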
It’s vital that you pay close attention to designing any evaluation form you use and, if you can, test it out first. Do the questions make sense? The questions you ask should relate both to improving the quality of your service and to gathering evidence of impact. This means that in designing your questions you should probably start with the strategic outcomes you are aiming to achieve and the aims and objectives of the activity you’re carrying out – the two should be closely aligned.
When you finish an evaluation, you will probably want to write a report, which may include the following headings and areas:
- Aims and objectives
- Number of activities
- Types of activities
- Number of people taking part
- Age, gender and backgrounds of those taking part
- Impact data – what did people think of the activity? (This will be qualitative data)
- Conclusions – this may include lessons learned, a summary of impact, and next steps.
This is a very crude template, but it gives you a rough idea about the type of information you need to collect when you begin to plan the process.
What is a theory of change?
A theory of change is essentially a diagram that sets out the outcomes of your project, or of your organisation as a whole, and describes the steps you need to take to achieve those outcomes. It can also highlight the assumptions you are making about how that change will happen.
Creating a theory of change does not need to be complicated. You probably instinctively know what your theory of change is already. The best theories of change are simple, can be understood quickly by anybody, and help you to articulate the key objectives of your work. Sometimes getting somebody from outside of your organisation to spend a day with you to develop your theory of change can be helpful, as it brings fresh insight and perspectives. Links to some helpful tools for developing your theory of change can be found in our resource centre. See links at the bottom of this page.
What is an indicator?
An indicator is a marker used to measure progress. For example, if the desired outcome of your project is to reduce the isolation of a group of women in your community, one of your indicators might be: “women access support and services in their community at least once a month”. You would then measure how many women achieve this, during or at the end of your project, to see whether the project is having an impact. You can draw up a set of indicators to measure all of the outcomes of your project.
There have been efforts over the years to develop standardised indicators for measuring cohesion and integration. The Government is developing resources on ways to measure integration outcomes, to be published shortly, which will provide practical guidance to projects interested in measuring integration outcomes for new migrants (including asylum seekers and refugees) and established communities across England. These guides will be made available on the website for people to use. Standardised indicators will help organisations to understand what they are working towards in trying to promote integration, and will help us to think more strategically about what we are trying to achieve when we talk about increasing cohesion and integration in our local areas.
At the same time, it is worth bearing in mind that approaches to integration and cohesion are unique to every context and local area. We will never be able to come up with indicators that fit every kind of intervention, approach and local context. We hope that, as part of our work at Belong, we will be able to gather information from around the country that will feed into this process of developing indicators and help organisations to develop their own measures for their work.
Links to further helpful evaluation resources
The following resources are helpful guides for monitoring and evaluating your work. Click on the link and you will be taken to our resource centre, which includes a summary page telling you more about each resource.
Helpful ‘hands-on’ and practical guidance
Paul Hamlyn Foundation Evaluation Resource Pack
NCVO Guidance on impact and Evaluation
Evaluating community projects: A practical guide (published by Joseph Rowntree Foundation)
Practical Monitoring and Evaluation: A Guide for Voluntary Organisations
Social Integration Measures (as identified by the Greater London Authority’s (GLA) Intelligence Unit)
Community cohesion: Seven Steps (published by the Home Office)
Qualitative Research (produced by Search for Common Ground)
Reflective Peacebuilding: A Planning, Monitoring and Learning Toolkit
In-depth explorations of measuring cohesion and integration
Indicators of Integration final report (published by the Home Office)
‘What Works’ in Community Cohesion
Leicester Community Cohesion Evaluation and Assessment Framework