For the past (almost) year, I've been working with colleagues at the Annenberg Public Policy Center to examine how Pope Francis's encyclical, Laudato si': On Care for Our Common Home, may have influenced the U.S. public's climate change beliefs. We used nationally representative survey data (with an oversample of Catholic respondents) collected over several waves. However, when we began looking at the data, we were frustrated: it didn't seem like anyone was changing their beliefs at all. To operationalize climate change beliefs, we asked three basic questions, very similar to those asked by other organizations that measure climate change attitudes, like Pew: a position item (whether climate change is human-caused, due to natural fluctuations, something we don't know enough about, or not happening), a seriousness item (how serious an issue climate change is for the nation), and a consensus item (whether scientists agree that global warming is happening and is human-caused).
We standardized responses to these items, created an averaged index, and used this index as our measure of climate change beliefs. However, the scale is not normally distributed. In fact, there is a huge ceiling effect: many individuals choose the strongest rating on all three climate change items. For instance, let's look at a table showing the frequency of participants (from our pre/post encyclical release panel, n = 602) at each level of position by each level of consensus. I've bolded the number in the bottom right corner. Note that it is NOT a total of the columns and rows; it is the number of participants who answered with the highest value on both the scientific consensus item and the position item. It is important to note that these are the values for the items BEFORE the encyclical was released.

If we anticipate that the encyclical would have a positive effect, how are we going to measure that when a huge portion of the sample has hit the ceiling before the encyclical is released? To be fair, most people who are curious about whether the Pope influenced beliefs about climate change are not interested in those who are already climate concerned; instead, we may want to see, for example, what climate change skeptics think after they hear what the Pope has to say. However, the Pope's messages are also likely to influence those who already believe in climate change. So how useful is our measure of climate change beliefs, and did we really capture people's climate change concern at the higher end of the scale? Moreover, is this why it appears that no one is changing their beliefs (when we look at the overall numbers, not broken down by prior beliefs or ideology)?

To answer these questions, I used item response theory (IRT) to examine our scale, because it shows how informative a scale is at each level of the latent variable (also called "theta"). Some scales, like this one, have high inter-item reliability (Cronbach's alpha) yet are informative for only a portion of the scores along theta. For instance, item response theory shows that the Cognitive Reflection Test is only informative for people who score above the mean on the scale. Eventually, I would like to learn how to run IRT in R, but for now, I'm using XCalibre. In XCalibre, I ran a polytomous Generalized Rating Scale Model (GRSM; Samejima, 1969, 1996), which is the most appropriate model when item responses can be characterized as ordered categorical responses, such as those on a Likert scale. XCalibre provides output for both classical statistics and IRT parameters. As predicted, our inter-item reliability is very high, Cronbach's alpha = .96. But this does not mean that our scale is informative across all levels of our latent variable (i.e., ccib, which can be conceptualized as climate change issue beliefs, climate change attitudes, or climate change concern).

Test Information Function

XCalibre provides the test information score at every level of the latent variable theta. The test information function is a graphical representation of how much information the test provides at each level of theta. You can see from the figure below that the information from our scale is skewed toward the lower scores; we don't get nearly as much information from people who score at the top of our scale.
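For anyone who wants to try the same analysis in R (as I'd eventually like to), here is a minimal sketch using the mirt and psych packages, not our actual analysis code. The data frame ccib_items and its columns position, serious, and consensus are hypothetical names. mirt's "grsm" item type is a rating-scale variant of the graded response model, so it may not match XCalibre's GRSM exactly; note also that "grsm" assumes all items share the same number of response categories, while itemtype = "graded" relaxes that assumption.

    # A minimal sketch, not our actual analysis code.
    # Assumes a data frame `ccib_items` with numeric columns
    # position, serious, and consensus (hypothetical names).
    library(mirt)
    library(psych)

    # Classical scale construction: standardize each item, then average
    ccib_index <- rowMeans(scale(ccib_items))

    # Inter-item reliability (Cronbach's alpha)
    psych::alpha(ccib_items)

    # Unidimensional rating-scale graded response model
    mod <- mirt(ccib_items, model = 1, itemtype = "grsm")

    # Item parameters in conventional IRT form
    coef(mod, IRTpars = TRUE, simplify = TRUE)

    # Test information function across theta
    plot(mod, type = "info")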
Category Response Functions

We can also look at the category response functions for each of the items in our scale. These can be interpreted much like multinomial logistic regression graphs: they show the likelihood of each response at each level of climate change concern.

Here is the category response function for the position item. As you can see, once you get above 0, the most likely response is 4 (climate change is human-caused). Between 0 and -1 on climate change concern, the most likely response is 3 (climate change is due to natural fluctuations); between -1 and -2 it is 2 (we don't know enough about whether global warming is occurring); and from -2 down, the most likely response is 1 (climate change is not happening). In other words, this item is great at differentiating people's climate change positions below the mean on climate change concern, but not above it.

We see similar results with the seriousness item. As with the position item, once people are above about 0.5 on climate change concern, they become most likely to say that climate change is a very serious issue confronting the nation. So we have no discrimination among participants who score above 1 on the scale.

The same holds for the consensus item. Recent studies have examined whether telling people that there is scientific consensus on climate change increases their climate change beliefs, but in our data it appears that many people already believe there is consensus: essentially everyone who scores above -0.5 on the latent variable is most likely to agree that scientists agree that global warming is happening and is human-caused.
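The corresponding plots in mirt (continuing the hypothetical sketch above, with the same fitted model object) are the item trace lines:

    # Category response functions (trace lines), one item at a time
    itemplot(mod, item = 1, type = "trace")   # position
    itemplot(mod, item = 2, type = "trace")   # seriousness
    itemplot(mod, item = 3, type = "trace")   # consensus

    # Or all three items in one panel
    plot(mod, type = "trace")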
So, while our scale has strong inter-item reliability, it really could have used some items that help distinguish among the upper ends of the climate change scale. Otherwise, people who start off climate change concerned have no room to increase their concern after hearing about the encyclical and/or Pope Francis's views. Again, this may not be too problematic given that we are most interested in the influence of the Pope's messages on climate change skeptics; however, it is likely the cause of some of the results in our studies under review (e.g., people who are climate change concerned are more likely to maintain their climate change beliefs than to increase or decrease them after hearing about the Pope's messages).
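One way to quantify this ceiling is to convert test information into a conditional standard error of measurement, SE(theta) = 1 / sqrt(I(theta)): wherever information collapses, the standard error balloons. Continuing the hypothetical mirt sketch:

    # Conditional standard error of measurement across theta
    theta <- matrix(seq(-3, 3, by = 0.1))
    info  <- testinfo(mod, theta)    # test information at each theta
    sem   <- 1 / sqrt(info)          # SE(theta) = 1 / sqrt(I(theta))

    plot(theta, sem, type = "l",
         xlab = "Climate change concern (theta)",
         ylab = "Standard error of measurement")
    # A curve that rises sharply above theta = 1 would confirm that the
    # scale says little about differences among the most concerned.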