Using Counterfactual Surveys to Improve the Evidence-Gathering Process


March 4, 2019

Contextual information plays an important role in interpreting findings. Many of us have experienced this when a child comes home and says they got 43 points on a test. Was it 43 out of 45, out of 100, or on some other scale? Depending on the answer, there is either praise or a very serious conversation.

In evaluation and research, the same need for context exists. But how you arrive at that context can vary widely.

One common approach to creating context is a pre-/post-test design. The purpose of a pre-/post-test is to compare what was occurring before an intervention with what is occurring after it, focusing on particular outcome measures.

One challenge with the pre-/post-test is that respondents' standards of judgment may shift because of the intervention itself, a phenomenon often called response-shift bias. With additional information, your perception of what "good" means, and of how good you are, can change. As a result, pre-intervention and post-intervention responses can look similar even when real change has occurred.

One solution to this problem, which our team has successfully incorporated into much of our work, is the counterfactual survey, also called a retrospective survey. In these surveys, respondents are asked at the same time about their current attitudes or perceptions and about their attitudes or perceptions prior to the intervention. In this way, respondents can make their own adjustments for how their perceptions have changed.

To see how a counterfactual survey works in practice, let's examine one of the first projects in which we incorporated this approach.

In this project, STEM academic administrators participated in a year-long professional development opportunity to enhance their leadership skills. Before any activities began, we disseminated a survey asking participants to rate their self-perceptions as leaders on a scale of 1 (least like me) to 7 (most like me). Consistent with a traditional pre-/post-test, participants completed the same survey at the end of the professional development opportunity, as shown in the figure below.

[Image 1: the traditional pre-/post-test design]

To incorporate the counterfactual survey, after participants answered items about how they currently perceived themselves as leaders, they were presented with the same items and asked how they would have answered before participating in the professional development opportunity. The counterfactual design therefore looks like this:

[Image 2: the counterfactual (retrospective) survey design]

It should be noted that a counterfactual design does not require a pre-test questionnaire; in this case we simply happened to include one. Interestingly, the pre-/post-test responses on several items were very similar, which is not an uncommon occurrence (selected items presented below).

[Image 3: selected items with similar pre- and post-test responses]

However, when you add in the counterfactual responses, an interesting pattern emerges: looking back, respondents rated their pre-intervention selves lower than they actually had at the time.

[Image 4: selected items with counterfactual responses added]
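To make the contrast concrete, the comparison can be sketched in a few lines of Python. The ratings below are hypothetical, invented for illustration only (they are not the project's actual data), but they mirror the pattern described above: pre- and post-test means nearly identical, with the retrospective ("retro") ratings much lower.

```python
# Hypothetical 1-7 leadership self-ratings for a single survey item.
# "pre"   = traditional pre-test, collected before the intervention
# "post"  = post-test, collected after the intervention
# "retro" = counterfactual rating, collected after the intervention:
#           "How would you have rated yourself before participating?"
from statistics import mean

pre   = [5, 6, 5, 6, 5, 6, 5, 6]
post  = [5, 6, 6, 6, 5, 6, 5, 6]
retro = [3, 4, 3, 4, 3, 4, 3, 4]

# Traditional pre/post comparison: change looks negligible.
naive_change = mean(post) - mean(pre)

# Counterfactual comparison: change emerges once the shifted
# standard of judgment is accounted for.
adjusted_change = mean(post) - mean(retro)

print(f"pre/post change:       {naive_change:+.2f}")
print(f"counterfactual change: {adjusted_change:+.2f}")
```

With these made-up numbers, the pre/post difference is near zero while the counterfactual difference is large, which is exactly the situation where a traditional design would conclude the intervention had no effect.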

In follow-up interviews with participants, it was apparent that the standard participants used had indeed shifted. In other words, they didn't know what they didn't know, and so rated themselves higher on items before the intervention than they did after learning more about leadership.

A counterfactual survey holds a lot of promise, particularly in conjunction with other data sources. It is appropriate only for attitudinal or perception data, not for objective measures of skill or knowledge. But utilizing a counterfactual survey may illuminate changes that would otherwise go undetected.