- Title: Reconciled climate response estimates from climate models and the energy budget of Earth
- Date: June 2016
- Authors: Richardson et al.
- Published by: Nature Climate Change
Naturally, there are always gaps between the real world and the results of simulation models. In the case of climate sensitivity, which refers to the temperature increase from a doubling of CO2, historically recorded real-world data gives us around 1.3 °C, while models give a somewhat higher value. Climate skeptics often use this fact to support their argument that global warming is not as serious as scientists predict. However, a study published in Nature Climate Change in June 2016, "Reconciled climate response estimates from climate models and the energy budget of Earth" (Richardson et al., 2016), addresses why climate sensitivity is estimated differently from climate models and from observed data.
First of all, the authors use the concept of transient climate response, or TCR, to represent climate sensitivity. This is defined as the amount of near-surface warming resulting from a doubling of carbon dioxide. Secondly, as for the terms 'observed' and 'modeled' data: the former refers to the HadCRUT4 dataset from 1861 to 2009, which combines the sea, land, and air surface temperature records compiled by the Met Office Hadley Centre and other British institutions. Modeled data, on the other hand, refers to the multi-model mean of the CMIP5 simulations, the ensemble used in IPCC assessments. Comparing the two, the modeled estimate of climate sensitivity is about 24% higher than the observed one.
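In model experiments, TCR is conventionally diagnosed from a run in which CO2 rises 1% per year, doubling near year 70; the warming is averaged over years 61-80. A minimal sketch of that calculation, using a made-up anomaly series rather than any real model output:

```python
import numpy as np

# Sketch of the standard TCR diagnosis: global-mean warming averaged
# over years 61-80 of a run in which CO2 rises 1% per year
# (CO2 doubles near year 70, since 1.01**70 ≈ 2).
def transient_climate_response(annual_anomaly):
    """annual_anomaly: global-mean temperature anomalies (°C), one per
    year of a 1%-per-year CO2-increase simulation."""
    return float(np.mean(annual_anomaly[60:80]))  # years 61-80, 0-indexed

# Illustrative synthetic series: steady warming of 0.02 °C per year.
fake_anomaly = 0.02 * np.arange(140)
print(round(transient_climate_response(fake_anomaly), 2))  # 1.39
```

The function and input here are illustrative stand-ins; the point is only that TCR is read off a standardized idealized experiment, not from the historical record directly.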
So, why do the simulation results differ from real-world data? The answer is simple: biases. There are two main sources of bias in the historical data. The first is that the observed data covers limited geographical regions. In particular, the coverage largely misses the polar regions, so temperatures there have to be estimated. The second bias stems from the fact that HadCRUT4 uses 'mixed' temperatures from both ocean and air. Unlike the models, the real-world record partly depends on ocean temperatures collected from ship-based measurements, and because the sea surface warms at a slightly slower rate than the air above it, a record that mixes the two shows less warming than an air-only record would. The historical dataset reflects these biases, while the raw model output does not. So, to derive consistent results, the model output needs to be processed to reflect the same biases, and the results then compared with the observed data. There are two techniques for this: one is to 'mask' the model output to the geographical regions covered by observations; the other is to 'blend' the temperatures from air and ocean.
The authors processed the model output in three different ways. The first serves as the baseline: use global near-surface air temperatures only. They call this mode 'tas-only', after tas, the standard variable name for near-surface air temperature in climate model output. Secondly, the output was recomputed using blended temperatures of air and ocean; the authors call this mode 'blended'. Finally, the geographical coverage of the historical data was imposed on top of the blended temperatures, which the authors call 'blended-masked'. This blended-masked processing should yield results closest to the observed data.
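The three modes can be sketched roughly as follows. This is my paraphrase of the processing, not the paper's code; the field names (tas, tos, land_frac, covered) and the toy 2x2 grid are illustrative, and the mean is unweighted for simplicity:

```python
import numpy as np

# Hedged sketch of the three processing modes applied to one model
# temperature-anomaly snapshot. "tos" is the model variable name for
# sea-surface temperature; "covered" marks observed grid cells.
def apply_mode(mode, tas, tos, land_frac, covered):
    if mode == "tas-only":
        field = tas                                       # air temperature everywhere
    elif mode in ("blended", "blended-masked"):
        field = land_frac * tas + (1 - land_frac) * tos   # HadCRUT4-style mix
        if mode == "blended-masked":
            field = np.where(covered, field, np.nan)      # drop unobserved cells
    else:
        raise ValueError(mode)
    return float(np.nanmean(field))  # unweighted mean, for simplicity

tas = np.array([[1.2, 1.0], [1.4, 1.1]])    # air warming (°C)
tos = np.array([[0.9, 0.8], [1.0, 0.9]])    # ocean-surface warming (°C)
land = np.array([[0.4, 0.1], [0.6, 0.2]])   # land fraction per cell
cov = np.array([[True, True], [False, True]])  # one cell unobserved

for m in ("tas-only", "blended", "blended-masked"):
    print(m, round(apply_mode(m, tas, tos, land, cov), 3))
```

With these toy numbers the three means come out in decreasing order (tas-only warmest, blended-masked coolest), which is exactly the direction of the bias the paper describes.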
The following graph shows one of the major findings of this study. The x-axis is time, and the y-axis is the temperature increase in degrees Celsius. The thick grey line represents the HadCRUT4 data, i.e. what was actually observed. The red, purple, and blue lines are results from the three processing modes explained earlier. Focus on the years 2000 to 2009, where the simulated and observed data diverge most clearly: the 'blended-masked' results track the observations much more closely than 'tas-only' or 'blended' do.
Now look at the graph below. There are a number of bars because estimates were made using several different calculation methods, but essentially the two red bars on top represent the 'tas-only' mode, while the bottom three blue bars represent 'blended-masked'. The black vertical line intersecting each bar marks the best estimate: the lines on the blue bars sit close to 1.3, while those on the red bars fall between 1.5 and 2.0.
So, the conclusion is that if model output is processed in the same way as the observation-based temperature records, climate models do not overestimate climate sensitivity. The authors conclude that the 24% difference can be entirely explained by applying masking and blending: because slower-warming regions are preferentially sampled and water warms less than air, the observed data underestimates climate sensitivity.
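As a quick arithmetic sanity check on the numbers quoted above: taking the observation-style (blended-masked) best estimate of roughly 1.3 °C and inflating it by the reported ~24% gap should land inside the 1.5-2.0 °C tas-only range, and it does:

```python
# Consistency check of the figures quoted in this summary (approximate
# values read off the bar chart, not exact numbers from the paper).
blended_masked_tcr = 1.3                       # °C, observation-style estimate
model_equivalent = blended_masked_tcr * 1.24   # undo the ~24% gap
assert 1.5 <= model_equivalent <= 2.0          # falls in the tas-only range
print(round(model_equivalent, 2))  # 1.61
```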