A few years ago I collaborated with two biologists, a social scientist, and an economist on an article about the treatment of errors in climate science publications.  The article started life in a very different form, as a seminar paper I wrote my first year in grad school.  If you want the technical version, with more rigorous definitions of error categories (like type 1 and type 2 errors), you can read the paper here.  For the purposes of this website, though, the short version is that the paper looked at the reaction to and characterization of estimates of climate change effects projected in the 4th assessment report from the IPCC.  Specifically, it compared overestimates and underestimates of climate effects, using the Himalayan glacier melt predictions as an example of an overestimate and the sea level rise predictions as an example of an underestimate.  Both were errors.

The Himalayan glacier incident is the better known.  The 4th assessment report famously stated that many Himalayan glaciers could be completely melted by 2035.  While most research has shown that Himalayan glaciers are melting at a rate on par with other glaciers around the world, the 2035 melt date is an error and an overestimate.  The overestimate was discovered in 2010, and apparently arose from a breakdown in quality control and the inclusion of non-peer-reviewed literature.

The predictions of sea level rise in the 4th assessment report were also an error, but a known underestimate.  About half of the increase in the rate of sea level rise up to the publication of the report was attributed to melting of the Greenland and Antarctic ice sheets.  That melting, however, had not yet been modeled accurately; in fact, it was accelerating quickly and in unpredictable ways.  Because the melting from these sheets was too unpredictable to model, it was left out of the projections of future sea level rise.  As a result, the reported upper and lower bounds for sea level rise in the 4th assessment report were necessarily too low.

But were these two errors, one an overestimate and one an underestimate, treated differently?  My gut hunch from reading the news at the time was that the glacier overestimate caused much more uproar than the sea level underestimate, which may not even have been characterized as an error at all, but merely as careful science.  This despite the fact that both were incorrect, just in opposite directions.

There is more at stake here than errors simply being treated unequally.  If an overestimate is treated as a cardinal sin, that signals that this is the type of error to be most assiduously avoided.  But societies might wish to avoid climate underestimates more than overestimates, since underestimates could be costlier not only economically but in terms of human lives and well-being.  We might decide that being under-warned about climate impacts is worse than being over-warned.  In that case, how different types of climate errors are represented matters greatly.

To gauge how the overestimate and the underestimate were actually described, we looked at the treatment of these errors in seven top newspapers.  I initially did this by hand, skimming our two 800-page Word documents and making judicious use of Ctrl+F.  Recently, I’ve been teaching myself R, using Matthew Jockers’s Text Analysis with R for Students of Literature, so I decided to revisit this project using R instead of my lightning-fast reading skills.  You can take a look at my R code here.
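The gist of the corpus prep looks something like the following sketch.  The file name here is hypothetical, and the tokenizing follows the basic recipe Jockers teaches: lowercase everything and split on non-word characters.

```r
# A minimal sketch of the corpus prep. "himalaya_articles.txt" is a
# hypothetical file holding the collected newspaper text.
himalaya.raw <- scan("himalaya_articles.txt", what = "character", sep = "\n")

# Lowercase, split on non-word characters, and drop empty tokens.
himalaya.text <- tolower(paste(himalaya.raw, collapse = " "))
himalaya.words <- unlist(strsplit(himalaya.text, "\\W+"))
himalaya.words <- himalaya.words[himalaya.words != ""]
```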

Here, I have plotted the frequency of different error words occurring within the 30 words before and after “Himalaya/n” and “sea level/s.”
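For the curious, the windowed count behind a plot like this can be sketched roughly as follows.  It builds on a token vector like himalaya.words above, and the error-word list is illustrative rather than the exact list we used.

```r
# Tally each error word occurring within 30 words of a keyword hit.
# The error-word list is illustrative, not the exact list we used.
error.words <- c("error", "mistake", "wrong", "flaw", "incorrect")
window <- 30

# grep() on the tokens catches both "himalaya" and "himalayan".
hits <- grep("himalaya", himalaya.words)

# Collect every token position inside a window, counting overlaps once.
idx <- unique(unlist(lapply(hits, function(h) {
  seq(max(1, h - window), min(length(himalaya.words), h + window))
})))

counts <- sapply(error.words, function(w) sum(himalaya.words[idx] == w))
barplot(counts, las = 2, ylab = "occurrences near \"Himalaya/n\"")
```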

It remains striking how seldom “sea level” is surrounded by error terms in the newspaper articles that came out after the release of the fourth assessment report.  Journalistic coverage of the Himalayan glacier error in the same newspapers has many more instances of error words occurring around “Himalayan.”  The treatment of the two errors does indeed look remarkably different.

One of the strengths of doing this kind of work with tools like R instead of by hand is that it forced me to formalize my process for deciding whether a prediction was described as an error.  Instead of relying on a gut impression of whether a particular article treated sea level rise or Himalayan glacier melt as an error, I had to decide what elements of speech constituted, for me, labeling something an error.  I also had to decide how wide a window around the keywords “sea level” and “Himalayan” to search for those error words.  Of course, my particular search terms and parameters did not catch every instance of a prediction being treated like an error.  My code didn’t catch sarcasm, for example: some articles put “evidence” in scare quotes when talking about the Himalayan glaciers, and I, of course, stripped all punctuation away when tokenizing my corpus.  But a more interesting and important effect of doing this analysis with DH tools is that by formalizing my thought process and writing it in code, I opened my methods up to critique.  What was before an occluded process is now open to myself and others, so we can question my assumptions, techniques, and biases.  Now, that’s a pretty exciting side effect of digital humanities.
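To make that concrete, the judgment calls in the earlier sketch can be pulled out as explicit arguments.  This is a hypothetical refactor (the function name and defaults are mine), but it shows how each assumption becomes something a reader can see, question, and change.

```r
# Each choice is now a visible, arguable parameter: which words count
# as error words, how wide the window is, and what pattern counts as
# a keyword hit.
count.error.words <- function(tokens, keyword.pattern, error.words,
                              window = 30) {
  hits <- grep(keyword.pattern, tokens)
  idx <- unique(unlist(lapply(hits, function(h) {
    seq(max(1, h - window), min(length(tokens), h + window))
  })))
  sum(tokens[idx] %in% error.words)
}

# Disagree with my 30-word window? Change one argument and rerun.
count.error.words(himalaya.words, "himalaya", error.words, window = 15)
```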