Continuous Improvement in the Real World: How to Use Developmental Evaluation to Realize Your Vision

Overview

No one wants to be evaluated. That is usually because people equate evaluation with judgments that focus on flaws. Indeed, a common definition of evaluation is “judging the value of” something. The challenge with being judged, particularly on new and innovative projects, is that judgment alone does not necessarily provide high-value information to project teams. Project team members already know their initiatives – particularly if they are groundbreaking – will not work at first and that it will take time to get things the way they envision them.

A way to resolve this conflict is through “developmental evaluation.” Developmental evaluation is an adaptable application of program evaluation principles to innovative initiatives. Its key advantage is that it optimizes project leaders’ ability to address emerging issues effectively. A developmental evaluation is structured to provide more value by not simply “judging” but by answering the question of why an outcome is being observed, directing attention toward learning and continuous improvement as the new initiative is implemented.

In response to strong demand from our webinar viewers to learn more about how to use developmental evaluation effectively, we are providing a complimentary Coffee Break Webinar, Continuous Improvement in the Real World: How to Use Developmental Evaluation to Realize Your Vision. For an optimal learning experience, we encourage you to review our previous Coffee Break Webinars, Developmental Evaluation: What is it? How Do You Do It? and 5 Best Practices for Hiring the Right Evaluation Partner.

Transcript

Part 1

Lana Rucks

Welcome everyone! So excited to have you join us today for the next installment of our Coffee Break Webinar series, entitled Continuous Improvement in the Real World: How to Use Developmental Evaluation to Realize Your Vision. Before jumping into the conversation for today, I did want to provide some context for why we’re having this conversation. Over the past two years, we’ve hosted a little over a dozen webinars, and the topic that has generated the most conversation, as well as views on our YouTube page, has been developmental evaluation. We’ve received a lot of follow-up questions, requests for clarification, and just overall interest in this topic. So, we wanted to follow up on some of the ideas that we presented earlier. If you haven’t seen the previous webinar on developmental evaluation, I encourage you to go back and review it.

Another motivation for today’s topic is that solicitations are placing greater emphasis on continuous improvement, and developmental evaluation is a great approach for achieving that. That could also be partly why it’s a highly viewed topic: you’re seeing this more within solicitations. We’ve always seen an emphasis on formative evaluation, but I think what you’re seeing now is greater emphasis on, and more dialogue around, continuous improvement. You may have noticed this if you are in the National Science Foundation (NSF) space, where the most recent solicitation had more information about continuous improvement. Also, I think what’s changed over at least the last 15 years that I’ve been working in this practice with The Rucks Group is that there’s more emphasis not just on completing a formative evaluation but on how that information is going to be used by the project team. In the same vein, I would point to the Department of Labor (DOL) Strengthening Community Colleges (SCC) program as well: its most recent solicitation puts real emphasis on explaining what developmental evaluation and adaptive evaluation are and the role they can play in evaluating initiatives. So, you’re just seeing more of this within solicitations.

So, with that as background, what I would really like to achieve today is to demonstrate how the principles of developmental evaluation can be used for continuous improvement. Toward that end, there are a couple of things that I want to do. First, I want to define developmental evaluation and provide some context, relative to summative and formative evaluation, for what developmental evaluation is. Then I want to talk about some of the key characteristics the evaluator and the project team need in order to effectively implement a developmental evaluation. One thing you’ll see as we have this conversation is that what distinguishes developmental evaluation from other forms is not so much the methods; it’s really the mindset and the approach that you take to the learning process. So, I want to talk about what the evaluator needs to bring to that space and then what the project team needs to bring to that space. Then I want to walk through a developmental evaluation in practice and give some highlights of what that looks like. And then, of course, I want to answer your questions.

So, if this is the first time you’ve joined us, I am Lana Rucks, Principal Consultant of The Rucks Group. The Rucks Group is a research and evaluation firm that gathers, analyzes, and interprets data to enable our clients to measure the impact of their work. We were formed in 2008, and over the past 14, almost 15, years, we’ve worked with a number of higher ed institutions on federally funded grants, such as those funded by the National Science Foundation, the Department of Labor, and the Department of Education. We also have with us Alyce Hopes, our Outreach Coordinator. So, I’ll turn things over to her for a moment so she can share what her role will be on the webinar today.

Alyce Hopes

Good afternoon, everyone. As Lana mentioned, my name is Alyce, and I’m the Outreach Coordinator for The Rucks Group. For today’s webinar, I’ll be helping to facilitate the Q&A portions that occur throughout the webinar. So, as questions emerge for you, feel free to ask them; you can do that by looking for the question mark symbol that appears on the right side of your screen. I do want to say, too, that there is a little bit of a delay from the time that you send in your question to when we receive it on our end. So if we don’t get to it right away, we’ll do our best to get to it in the next segment. And anything that doesn’t get answered in our 30 minutes together, we’ll be sure to follow up on with you all through email afterward. There’s also a chat function, which we may use to share other resources with you or just for general communications. With this function, I do want to say that the symbol will only appear after there’s a new message for you to review, so be on the lookout for it on the right side as well. With that, I’m going to pass it back over to you, Lana, so we can go ahead and get started.

Lana Rucks

Great. Thanks so much, Alyce, for that background information. So let’s go ahead and get started by first defining what developmental evaluation is. Again, I mentioned that we had an earlier webinar, and if you’ve seen that, then you’ll be familiar with this definition. Or if you’re familiar with Michael Quinn Patton’s work, then you may also be familiar with this definition of developmental evaluation, which reads as “supporting innovation development to guide adaptation to emergent and dynamic realities in complex environments.” There’s a lot within that, but I really want to highlight the ideas of “innovation,” “emergent,” and “complex.” Those are some of the ideas I’ll come back to in a moment. Another definition that I want to pull in, which I think is really useful, is that developmental evaluation “supports the development of interventions that are innovative in engaging highly complex systems” – a very similar type of definition – or “that are changing in response to conditions around them.” So there are a lot of things that you’re having to change and adapt to because of changing conditions. One way to understand developmental evaluation is to think of it as conceptually distinct from summative evaluation and formative evaluation. Let me first talk about summative evaluation, because that’s probably the type of evaluation that seems most conceptually distinct from developmental.

If you’re familiar, summative evaluation is really about measuring program outcomes and impacts. We think about this as occurring at the end of an evaluation effort or at the end of the project. So it could be at the end of the project, or just after a year, that you’re engaging in summative evaluation. What could cause more confusion is formative evaluation, because formative evaluation is about improvement, particularly in the way that the program is being delivered. So, if you think about formative evaluation and then think about developmental evaluation, there could be some confusion about what makes those two concepts distinct. One way to think about formative and developmental evaluations is as different ends of a continuum. We’re really talking about differences in degree, or at least that’s how I conceptualize it.

And we’re talking about differences in degree along, I think, three dimensions. Those dimensions are the nature of the intervention, the complexity of the situation, and the overall uncertainty of the implementation. In thinking about the intervention in general, developmental evaluation is usually appropriate when the initiative itself is less developed, and formative is more appropriate when it’s more developed. When I talk about an initiative being “developed,” it’s not so much in terms of what’s written on paper but in terms of how it’s actually being developed and implemented and what has already been learned. With a developmental evaluation, you may have very clear ideas and thoughts about how the initiative is going to be implemented, but it hasn’t gone through the purification process of implementation and that learning process, whereas with formative evaluation, that has already occurred.

The other dimension is complexity. With developmental evaluation, you may be implementing an initiative that impacts multiple systems, including new systems in which relationships need to be built, and you’re having to figure out how to adapt what you were thinking, and what your ideals for the initiative were, in response to that complexity. With formative evaluation, the initiative may impact fewer systems overall, and those systems may be slightly more established. For instance, an initiative that involves a partnership with another department that you already work with within your institution may be more formative in nature, whereas if you’re working with different departments across different institutions, that may be more developmental in nature. The final dimension is whether there is agreement among the key partners on how the initiative is going to be implemented. If there’s less agreement, then there’s probably more room for a developmental evaluation; if there’s more agreement, then there’s probably more of a role for a formative evaluation. And I should say that all of this is from a pure-definition standpoint, and that the approach and characteristics you want to bring to an evaluation, whether it’s developmental or formative, can still be applicable. Hopefully that helps in defining what those differences are. So let me pause there for a moment, Alyce, and see if there are any questions.

Alyce Hopes

I can almost see how sometimes developmental evaluation could encompass both summative and formative evaluation. Would you agree with this?

Lana Rucks

Yes, I actually would. One of the ways that I often think about developmental evaluation is that it actually comes out of improvement science. If you think about the plan-do-study-act model, there’s a rapid feedback loop. You can almost think about that feedback loop in terms of how I was defining formative evaluation, which is as something that’s more established. I think if you look in the literature and at some of the published works, that’s how it’s being defined as well. But I do think that, in practice, that feedback loop and a real focus on learning are part of what makes developmental evaluation very distinct.

Part 2

Lana Rucks

So, let me go on and talk about some of the evaluator and project team characteristics. Again, part of what makes developmental evaluation distinct is the perspectives the evaluator and project team bring. If you think about an evaluation, it’s partly a partnership between the evaluator and the project team, particularly when it’s developmental, because there’s a lot of learning that has to occur. Both parties need to bring a certain mindset and a certain perspective to make that work. Let’s first talk about this from the evaluator’s perspective.

So, I think that what the evaluator needs to bring, and I know this is something that The Rucks Group really focuses on bringing to our clients, is a strong consultative role. Part of that consultative role involves giving feedback, catalyzing conversations, helping with the use of evaluation findings, and providing insights. And I should say that within that role, it really should be a two-way dialogue between the evaluator and the project team. The project team still has the subject matter expertise, but the evaluator is more removed from the project, so there may be different things that we see, and we want to make sure that we’re able to catalyze that type of conversation. Another characteristic is that the evaluator should be willing to follow the “emergent” in the evaluation. If you’ve ever talked with me about projects, one of the phrases I often use is that we like to “live close” to projects. The reason we like to live close to a project is that so much can emerge over time, and we want to make sure that we are gathering data at the right moment to really help the project team make decisions about what the next step should be for the initiative. The final characteristic that evaluators should bring to a developmental evaluation is an appreciation that there is some level of vulnerability on the part of the project team and the initiative itself. When I say that, it doesn’t mean that we’re not sharing results, and it doesn’t mean that we’re not telling the truth. It simply means we appreciate the fact that the project team has an investment, whether that investment is because they are very passionate about what they’re doing or because of the weight of having a grant and wanting to live up to the expectations outlined in the proposal. It’s really important that the evaluator has that appreciation. We often talk about the relationship between evaluation and research: evaluation does follow a lot of the guidelines and approaches of research, but one of the unique parts of evaluation is that it really does live within human systems. Not to say that research doesn’t, but evaluation really does live within human systems, and there just needs to be an appreciation for that in the interaction with the project team. Again, I want to emphasize that this doesn’t mean that we’re not providing the truth or that we’re hiding findings; it just means that in conversation, we are appreciating the perspective(s) of the project team.

Now let’s take a couple of minutes and look at it from the project team’s perspective and what they need to bring to the evaluation. One thing is a willingness to share challenges and obstacles, not just successes. Sometimes that willingness to share is reflected through an evolution of the relationship. If you think about evaluation in terms of its traditional definition, “a judgment of the quality of,” and if you are perceiving evaluation in that way, or if the approach to the evaluation is all about the quality of what you’re doing, then there will be more reluctance to share information. But if the emphasis is on learning and reflecting, then you should be willing to share what’s working and what’s not working so that the evaluator can play that consultative role. In some ways, that information will eventually come out in the evaluation anyway, so the earlier you share it, the earlier that, as a team, there can be some dialogue on what can be done and how those obstacles can be addressed. I already started to share the next characteristic the project team should bring: seeking learnings and using those learnings to inform the project decision-making process. Sometimes I think there’s this idea that project implementation lives in one space and evaluation lives in a different space, but they should really be living in the same space. Again, going back to the idea of developmental evaluation coming out of improvement science and the plan-do-study-act model, project implementation and evaluation should complement each other. So, the project team really wants to take learnings from the evaluation and use them to help make decisions about the direction of the project. Finally, the project team should make sure that there is enough capacity on the team to respond to the emergent. When I say emergent, I’m talking about unexpected opportunities as well as arising challenges. So let me go ahead and pause there for a moment to see if there are any additional questions.

Alyce Hopes

Can you share more about how you can build positive energy toward evaluation among the project team?

Lana Rucks

That’s a really good question, on multiple levels. I think it goes back to what I was saying before about making sure that you’re sharing the right information and that you’re telling the truth about the project. Creating the right energy is not so much about hiding the facts; it’s about the perspective that you bring to the evaluation as well as how you share information and engage in the dialogue. But it also depends on when the evaluator is brought in. When the evaluator is brought in at the end of a grant’s life, there’s not as much time to have those types of conversations or to engage in course correction. Another key piece of building this positive energy is realizing the humanness of the other person. When you realize that you’re dealing with another person who may be very passionate about what they’re trying to achieve, the language that you use and how you communicate the truth will change. This is what really helps build a working relationship between the evaluator and the project team. That’s what an evaluator can do, but I think the project team can also help by encouraging and asking questions of the evaluator about what the findings mean and by ensuring that information is being obtained at the points when decisions are being made.

Alyce Hopes

Do you think reviewers know enough about developmental evaluation that it’s not risky to include it in NSF proposals?

Lana Rucks

That is an excellent question! I don’t generally use the language of developmental evaluation if the solicitation doesn’t use it. What I do use, however, are all the terms around it. So, I talk about continuous improvement, and I talk about how the project team is going to use the information. I talk about developmental evaluation without using the exact language, because what I’m seeing in solicitations is that even without using the term “developmental evaluation,” what they’re asking for at the beginning of an initiative is some kind of developmental evaluation. I want to emphasize that even within a particular project, the type of evaluation that you’re completing may evolve. Just because you’re using a developmental evaluative approach does not mean that you’re not also folding in formative evaluation and summative evaluation. You still want to make sure that you’re capturing what’s being done, why it’s being done, and what’s being learned. This is what’s so key, particularly within NSF, about disseminating information so that other people will understand what that learning process was and how you got to the end.

Part 3

Lana Rucks

Let me briefly describe what this looks like in practice. Here’s an example: a project team received funding to increase the retention of undergraduate students in STEM majors. The project team proposed to address retention issues by providing peer mentoring, research and internship opportunities, and tutoring support. Let’s look at things from the evaluator’s perspective first. I’ve shared that the evaluator should play a strong consultative role. One way the evaluator can do this is to provide insights from other projects on what has worked in providing tutoring and in ensuring that students take advantage of research and internship opportunities. Something else that we do is try to connect project teams with other individuals who are doing similar types of work. Next, the evaluator should allow the evaluation to follow emergent factors. In this way, if you’re conducting an interview or gathering data, you should also be quickly sharing that information so that the project team can use it to make decisions. The next point is about being sensitive to the vulnerability of the project team. One way the evaluator can help with that type of conversation is by providing “negative” feedback from a learning perspective to help catalyze conversations on what can be done differently. So, instead of presenting a finding as catastrophic, present it as what we know isn’t working and what can be changed.

Using the same example, let’s look at what the project team would need to bring to the table. The project team should be willing to share successes and challenges. This could mean that the project team shares that they’re having a hard time getting support from faculty to sponsor research, for example, and by having dialogue around that, the evaluator may be able to connect the project team with other project teams who have successfully fostered buy-in. Additionally, the project team should seek learnings from the evaluation to inform project decision-making. The team can do this by meeting regularly with the evaluator to discuss evaluation findings. Recall that evaluation findings can also be information related to not being able to get buy-in, running into an obstacle, or just having some sort of problem to solve. Finally, the project team should have sufficient capacity to take timely action on lessons learned. During the proposal phase, the project team should have considered how they would address challenges to implementation; as a consequence, the capacity to address those issues has been written into the initiative. And I know we are very close to time, but let’s hear one more question from the audience.

Alyce Hopes

I’ve heard the term evaluation capacity building. How does that play into developmental evaluation?

Lana Rucks

I think it plays a very strong role within a developmental evaluation. Evaluation capacity building is really about understanding what it takes to implement an evaluation, what it takes to use evaluation information, and what you should expect from the evaluation. Part of what happens over time, and part of evaluation capacity building, or ECB as it is sometimes referred to, is the changing relationship with the evaluator and knowing that you’re going to be able to have these conversations with the evaluator. There have definitely been times when we’ve worked with clients for the first time and we were asked to leave the room as they tackled an issue. Over time, however, that project team became much more comfortable having us in the room. A lot of that is an evolution, because these are human systems.

If there are other questions, I will make sure to follow up with individuals afterward, but let me leave you with this final thought from Ryan Hawk. I’m not sure if you’re familiar with him, but Ryan Hawk is a leadership guru who lives here in Dayton, Ohio, and has written a couple of books. His most recent book, The Pursuit of Excellence: The Uncommon Behaviors of the World’s Most Productive Achievers, has this quotation, which I thought was very appropriate. He says, “Experience is not the best teacher. Evaluated learning from experience is the best teacher.” Developmental evaluation really provides that frame of evaluated learning being a great teacher, both for project teams and for other project teams who can leverage those learnings as well.

Before I say goodbye to everyone, I want to encourage you to mark your calendars for Thursday, July 21st, for our next webinar, Optimizing Success: Strategies for Leveraging Program Evaluation for Project Implementation. Thank you so much for joining us today. This went very fast, but it was really great to have such great questions from everyone and to share on this topic that is very meaningful for me and hopefully for you as well. So, thank you. Have a great rest of the day.