Evaluation

Authors: Mary Lind and Barbara Tint
Pages: 132-168
The DDP team included research and evaluation activities as an integral part of our
process; from start to finish, evaluation and planning went hand in hand. When
designing project activities, our team also planned how and when to assess whether
those activities were achieving the intended impact. We were interested in deep-
ening our understanding of these populations and processes and contributing to
emerging knowledge about reconciliation between diaspora populations from
regions of conflict.
Everything that we learned through evaluation was then used to inform the
planning of future dialogue sessions, training activities, and other community
processes. Thus, our evaluation activities allowed us to learn as we moved forward,
contributing to positive impacts based on feedback from participants. At the end of
this chapter we have outlined the key evaluation tasks and tools that have emerged as
our best practice recommendations for other groups setting out to do similar work.
Key Issues inEvaluation
Why Evaluate?
There are two primary reasons for evaluating a community reconciliation project:
(1) to determine whether the project is achieving its purpose, and (2) to capture
strengths and weaknesses in the project so that improvements can be made along
the way and can inform future endeavors.
Evaluating Project Activities andProcess
Known as: Formative Evaluation or Process Evaluation
Key questions: How can we improve the project? What is going well? What
is not working as planned? What changes need to be made?
When it takes place: roughout the project, providing practitioners with
feedback and opportunities to make changes along the way.
Evaluating Project Outcomes
Known as: Summative Evaluation or Outcome Evaluation
Key questions: Has our project achieved what we set out to achieve? Are we
having the impact we intended to have?
When it takes place: At the end of the project and at the end of key intervals
designed to accomplish specic outcomes. Sometimes it takes place at the
beginning of the project as well, in order to establish a baseline, if comparing
data from before and aer the intervention is a goal.
Evaluating Around Key Questions: What Do We Want to Learn?
In the early stages of project design, it is vital to develop guiding questions in order
to identify intended outcomes. From there, outcomes can be evaluated along the
way and at the end of the project. Key questions include:
What do you want to learn, achieve, test, and explore?
What impact do you want to create?
What are you hoping will happen as a result of the dialogue project?
Our Ethical Duty toEvaluate: Do No Harm
When we introduce new activities designed to create opportunities for change
and/or transitions in people's lives and communities, we have an ethical
duty to acknowledge, record, and reflect on the actual experiences of our partici-
pants. It is not enough to set out with good intentions and create an informed
project design; we must also recognize that people might be impacted in ways that
were not anticipated. Evaluation provides an opportunity to expose any adverse
impacts, in order to address them responsibly and make any necessary changes to
avoid them in the future. In this way, we honor the trust and participation of those
who have taken the risk to engage in our project. In this way, we practice a commit-
ment to Do No Harm, as a minimum standard, while aspiring to higher goals such
as personal transformation and community reconciliation. For more detail on the
concept of Do No Harm and its connection to peacebuilding and reconciliation
processes, please see:
The Do No Harm Handbook: http://www.globalprotectioncluster.org/_assets/files/
aors/protection_mainstreaming/CLP_Do_No_Harm_Handbook_2004_EN.pdf
and
The Do No Harm Project Workbook: http://cdacollaborative.org/sdm_downloads/
do-no-harm-project-trainers-manual-workbook-and-exercises/
Reflective Practice
Beyond the minimum standard of doing no harm, we believe that, as dialogue
practitioners, it is our responsibility to observe and reflect on the experiences of
participants and ourselves in order to revise, refine, or enhance the project. As
reflective practitioners, we make our assumptions and intentions behind decisions
explicit so that they can be tested or questioned for accuracy. Essentially, we want to
be conscious and transparent about our goals and the reasons for our intervention
choices and design decisions, so that our experience can contribute to improve-
ments in our project and in the larger field of practice. The reflective practitioner
takes actions while seeking opportunities to learn from the actions as they play out.
Evaluation activities provide the opportunity to document and thus capture lessons
along the way, so that they can inform planning at the next opportunity.
Quality Assurance
The credibility of evaluation is tied to the accuracy of data gathered, which is asso-
ciated with the objectivity of the person(s) and methods used to gather data. Thus,
the use of professional evaluators who are not part of the project team is often rec-
ommended. For many community‐level projects, contracting an outside evaluator is
not possible due to capacity, time, and costs, leaving the task to be completed by
project staff. If a team is conducting evaluation activities without the use of an
outside evaluator, it is critical that internal evaluators adopt an independent
stance– that they are not seeking to prove assumptions but are interested only in
capturing the true experiences of participants. For more on quality assurance, see
the work of the American Joint Committee on Standards for Educational Evaluation,
available online at http://www.eval.org/p/cm/ld/fid=51.
Evaluation Methodologies: How Can We Learn What We Want to Know?
There are two general areas of methodology that will inform project design and
evaluation: quantitative and qualitative methodologies.
Quantitative Methods: These methods produce numerical data – numbers.
Typically, quantitative methods are used to gather data from sample groups
large enough to make generalizations or hypotheses about findings. Examples
of quantitative techniques include: surveys, questionnaires, and pre‐ and
post‐tests.
Qualitative Methods: These methods typically generate narrative data – words.
While findings cannot be generalized to a larger population, qualitative evalua-
tion methods can produce more descriptive findings related to individuals or
groups. Examples of qualitative techniques include: observations, interviews,
and focus groups.
The choice of methodologies will be informed by project goals, guiding questions,
capacity, and practitioner philosophy around research and evaluation. Many projects
will use a mixed methodology as an evaluation strategy.
