Developmental evaluation applies evaluation principles to nascent innovations and complex systems. This approach is useful for evaluating new initiatives, as evidenced by the Department of Labor’s solicitation for Strengthening Community Colleges Training Grants, which prescribes that a developmental or adaptive evaluation be incorporated into the grant-funded work.
In this webinar, we will introduce the key concepts of developmental evaluation and explain how to use them by walking through a Department of Labor-funded project that used developmental evaluation. Participants will leave the webinar with a better understanding of what developmental evaluation is and how it is applied.
Welcome, everyone! Thank you for joining us today for our Coffee Break webinar, “Developmental Evaluation: What Is It and How Do You Do It?” Hopefully, you have a cup of coffee with you, and if the weather where you are is anything like it is here in Dayton, Ohio, then you have a cold cup of coffee. Before we get started, let me go over a couple of housekeeping items:
It’s important that you have an opportunity to ask questions, so please use the question function on your computer to do that. With us today is Alyce Hopes, our outreach coordinator, who will help moderate the Q&A and address any questions or computer issues you may have. Let me introduce myself as well: I’m Lana Rucks, the principal consultant of The Rucks Group. The Rucks Group is a research and evaluation firm that gathers, analyzes, and interprets data to enable our clients to measure the impact of their work. We were formed in 2008, and over the past several years we’ve had the privilege of working with a variety of clients, primarily within higher education, on grants funded by private foundations and federal agencies such as the National Science Foundation, the Department of Education, and the Department of Labor.
To create context for today, I want you to think about a time when you’ve renovated a space or remodeled a room. If you’re like me, you had expectations for something like this: very pristine, a nice clean space. Also, if you’re like me, the reality of family intrudes, and the way the room actually looks is probably more like this: not so pristine and a little messy. Well, that can happen when we are talking about grants, too.
When we’re planning out a grant, we have expectations that things will go a certain way, but then reality hits, and emerging factors impact an otherwise well-crafted plan. This brings us to the importance of developmental evaluation. Developmental evaluation is really intended as a structure for handling the messy and unexpected aspects of implementing a new initiative. Developmental evaluation used to be a concept that was discussed primarily within evaluation circles, but I’m starting to see that conversation move beyond the evaluation space. For instance, a recent funding opportunity announcement by the Department of Labor (DOL) encouraged the use of developmental evaluation.
With this context, there are a few things I want to do during our time together: I want to define what developmental evaluation is; then I want to simplify that definition and describe what it actually looks like in practice; I also want to provide some examples of how to incorporate developmental evaluation into an initiative; and I want to answer your questions. Again, please make sure to use the question function on your computer.
So, let’s start first by talking about, what is developmental evaluation?
Developmental Evaluation was originally introduced into the literature by Michael Quinn Patton who is a prolific writer on evaluation. He defines developmental evaluation as the following: “[developmental evaluation] supports innovation development to guide adaptation to emergent and dynamic realities in complex environments”. Let’s try to unpack this just a little bit.
It “supports innovation development”. Developmental evaluation is really relevant to new ideas and new initiatives, as well as to projects that are not well established or initiatives for which there is not much associated research, even with regard to how to implement them appropriately.
The other piece of this is that it’s intended to “guide adaptation”. It helps provide insight into how to be flexible and how to adapt to situations and occurrences that are not expected. This is going to happen because initiatives are being implemented within “emergent and dynamic realities”. I know for many of us right now, in the current context of the COVID-19 pandemic, there’s a lot that we’ve had to adjust to in terms of emergent and dynamic realities.
But even in normal situations, very often, implementing new initiatives can occur within these emergent and dynamic contexts and thus the environment in which they’re being implemented is complex. When you take all these factors together, there’s just a lot of variables that are impacting how the project is being implemented.
So, that gives you a sense of what developmental evaluation is in terms of its denotation. Let me try to put some meat on the bones by describing the characteristics, or connotations, associated with developmental evaluation, at least as I understand them from unpacking what exists in the literature.
The first characteristic of developmental evaluation is that it reflects a high level of interest in learning by the project team. Let me give you an example of that:
We work with a project that was funded by the Department of Labor and the purpose of this project was really to create a national model for flexible apprenticeships to increase the pipeline of workers in a high-demand area. What’s important to know is that in this context the project team partnered with an evaluation entity, even though the funder didn’t require it. That’s in large part because they wanted to make sure that that learning and that evaluative information was intentionally being gathered.
Not only did this project team place an emphasis on evaluation, but they also make sure to actually use the evaluation findings. For example, when measuring learning before and after a workshop, they look for abnormalities in the findings. When participants report that they knew more about a particular topic before the workshop than afterward, the team digs back through the workshop training to understand whether something was conveyed that may have been confusing. Even at a small level, with something like response rates, they are very responsive.
In one situation, they disseminated a survey and received a nine percent response rate. They were uncomfortable with that in terms of the quality of the data and questioned its ability to guide decisions (because the higher the response rate, the more representative the results are of the target audience). So they went back, strategized about how to increase the response rate, and were able to raise it to 40 percent. Together, this really reflects that high level of interest in and emphasis on learning.
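The arithmetic behind this kind of monitoring is simple but worth making explicit. Here is a minimal sketch in Python; the invitation and response counts are hypothetical, chosen only so the rates match the nine percent and 40 percent figures mentioned above.

```python
def response_rate(responses: int, invited: int) -> float:
    """Share of invited participants who completed the survey."""
    if invited <= 0:
        raise ValueError("invited must be a positive count")
    return responses / invited

# Hypothetical counts mirroring the rates discussed above.
initial = response_rate(18, 200)    # 9% response rate
follow_up = response_rate(80, 200)  # 40% after follow-up outreach
print(f"initial: {initial:.0%}, after follow-up: {follow_up:.0%}")
```

A team tracking this over multiple survey waves would simply recompute the rate as responses come in and compare it against a comfort threshold before using the data to guide decisions.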
Another piece to keep in mind about developmental evaluation is that it doesn’t necessarily involve new methods or techniques; it’s about how you approach traditional evaluative methods, how you combine them, and the perspective you take during implementation. In a traditional evaluation, you have a formative evaluation followed by a summative evaluation. From a developmental standpoint, you can instead have a developmental emphasis followed by the summative evaluation. The way to think about that developmental piece is that it maps onto the “plan-do-study-act” model: you cycle through, tweaking and modifying the initiative, until you feel comfortable beginning the summative evaluation.
I want to talk about another DOL grant. In this particular situation, the project was focused on providing displaced and incumbent workers an opportunity to quickly advance their credentials in a field with job openings. Participants enrolled in a credentialing program on a rolling basis, and what was really important was retention in and completion of that program. Over a 21-month period we tracked retention rates, and this is what the retention rate looked like over that time frame. I should highlight that the first four cohorts were really the focus of the developmental evaluation, during which there was a lot of consideration of, and many changes to, how the project was being implemented. Once that phase was completed, the actual outcomes, or summative, evaluation was conducted. That’s important because in that first learning phase the retention rate was close to 48%, but once the new tools and resources were implemented, the retention rate increased to 63%. So, this is a way in which developmental evaluation can be folded into an evaluation even when you’re still primarily interested in the summative evaluation components.
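To make the comparison above concrete, here is a minimal sketch of how phase-level retention rates can be computed from cohort data. The per-cohort enrollment and retention counts are hypothetical; only the approximate phase-level rates (48% in the developmental phase, 63% in the summative phase) come from the project described above.

```python
# Hypothetical cohort-level data; each cohort records how many
# participants enrolled and how many were retained to completion.
cohorts = [
    {"phase": "developmental", "enrolled": 27, "retained": 13},
    {"phase": "developmental", "enrolled": 23, "retained": 11},
    {"phase": "summative", "enrolled": 32, "retained": 20},
    {"phase": "summative", "enrolled": 28, "retained": 18},
]

def phase_retention(data: list[dict], phase: str) -> float:
    """Pooled retention rate across all cohorts in a given phase."""
    enrolled = sum(c["enrolled"] for c in data if c["phase"] == phase)
    retained = sum(c["retained"] for c in data if c["phase"] == phase)
    return retained / enrolled

for phase in ("developmental", "summative"):
    print(f"{phase}: {phase_retention(cohorts, phase):.0%}")
```

Pooling counts before dividing (rather than averaging per-cohort rates) weights each cohort by its size, which is the usual choice when cohorts vary in enrollment.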