In April we produced a webinar focused on using Developmental Evaluation for continuous improvement. Over the next three blog posts, we will build on the themes presented in that companion webinar, Developmental Evaluation: What is it? How do you do it? In this post, we explore in greater depth: What is Developmental Evaluation?
What is Developmental Evaluation?
To answer this question, let’s first consider a broader question: what is program evaluation? Program evaluation is the application of social science methods to systematically investigate the effectiveness of social intervention programs. As a profession, program evaluation grew primarily out of the education sector, applying academic research principles to measure the impact of real-world initiatives. Over time, two frameworks have come to dominate program evaluation: formative evaluation and summative evaluation. As the use and practice of evaluation have expanded to other disciplines and contexts, additional approaches have also emerged, giving rise to Developmental Evaluation (DE).
As a relatively new method in the evaluation toolkit, DE can be conceptualized as a specialized form of traditional program evaluation that uses social science methods to “…support development to guide adaptation to emergent and dynamic realities in complex environments” (Patton, 2010; see Table 1).
The key practical difference between traditional program evaluation and DE is not in the methods; DE is not inherently less rigorous than traditional program evaluation. Rather, the defining difference is the degree to which the evaluator provides consultation. Because of this role, many evaluation professionals argue that the DE evaluator is less objective than the traditional program evaluator. However, this need not be the case. Admittedly, implementing a DE may require an evaluator to “live closer on the project” than in a traditional formative or summative evaluation effort, because the evaluator must understand the twists and turns of the initiative to identify emergent evaluation needs. As such, there is a perception of the DE evaluator as a “team member” rather than an objective third party. Yet attending more project team meetings and being at the table to hear programmatic developments, so that consultation can fully leverage the evaluation, does not require a DE evaluator to give up independence or objectivity. Therefore, while a DE evaluator may be a team member, this role is not the defining characteristic of a DE evaluator.
Another differentiating factor between traditional program evaluation and DE is the recognition of the impact of human systems on the initiative. Because traditional program evaluation borrows many of its disciplinary norms from academic research, it tends to underestimate the force human systems exert on the implementation of initiatives. Evaluators’ training has tended to emphasize methodology rather than strategies for translating theoretical knowledge of human systems into real-world, practical ways to optimize the implementation and use of evaluation. As such, the pragmatic interplay of human systems, the evaluation, and the initiative is not necessarily at the forefront of our thinking. This may be one reason why many professionals perceive wading into these waters as a loss of “objectivity.” But within DE, advocating for the evaluation by fully leveraging the context does not intrinsically jeopardize independence.
With a deeper understanding of what developmental evaluation is, in the next blog, we’ll explore When should (or could) a developmental evaluation be done?
Newcomer, K.E., Hatry, H.P., & Wholey, J.S. (2015). Handbook of practical program evaluation (4th ed.). San Francisco, CA: Jossey-Bass.
Patton, M.Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: The Guilford Press.