To improve long-term predictions of global climate change, we need more information about the current and changing environment. Unfortunately, in an era of tight government budgets, expensive satellite climate studies are being cut, so it is important to identify the measurements we need most, choosing among quantities like air temperature, pressure, humidity, radiance at various wavelengths, and radiative transfer to and from the surface.
One possible way of prioritizing is to figure out which of those measurements would help us the most when it comes to projecting future climate change, and focus research funds there. A paper that recently appeared in the Proceedings of the National Academy of Sciences presents a statistical method for doing this and shows that surface temperature measurements may not be the most useful data for improving surface temperature predictions.
Incorporating more data into climate models can improve both the accuracy and the precision of long-term predictions. In principle, almost any additional data could improve the predictions, but with limited resources, emphasis should be placed on obtaining the most useful data.
For the new paper, researchers started with a set of models used in the Intergovernmental Panel on Climate Change's (IPCC's) Fourth Assessment Report, and used a statistical method known as Bayesian inference to determine the improvements in accuracy and precision that resulted from including additional data when modeling one particular emissions scenario, the IPCC's A1B. This scenario gives a best estimate for temperature rise of 2.8°C, with a likely range of 1.7-4.4°C; additional data that narrow this range would clearly be useful.
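To make the idea of a Bayesian update concrete, here is a minimal Python sketch, not taken from the paper, of how folding a single new observation into a Gaussian prior narrows a projection. Everything beyond the A1B best estimate and likely range is invented for illustration, including the hypothetical observation and all uncertainties.

import numpy as np

# Prior over projected temperature rise, approximated as a Gaussian.
prior_mean = 2.8   # degC, the A1B best estimate
prior_sd = 0.7     # degC, chosen so +/- 2 sd roughly spans 1.7-4.4

# Hypothetical new measurement constraining the projection, with its
# own uncertainty; both values are made up for illustration.
obs_mean = 3.1     # degC implied by the new data
obs_sd = 0.5       # degC measurement uncertainty

# Conjugate Gaussian update: precisions (1/variance) add.
prior_prec = 1.0 / prior_sd**2
obs_prec = 1.0 / obs_sd**2
post_prec = prior_prec + obs_prec
post_mean = (prior_mean * prior_prec + obs_mean * obs_prec) / post_prec
post_sd = np.sqrt(1.0 / post_prec)

print(f"posterior: {post_mean:.2f} +/- {post_sd:.2f} degC")
print(f"spread reduced by {100 * (1 - post_sd / prior_sd):.0f}%")

The fractional reduction in spread printed at the end is the flavor of "improvement in precision" the study computes separately for each candidate measurement type.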
The authors chose one model (NCAR PCM version 1) to represent the real data (the "perfect model"), relative to which the accuracy of the remaining models was judged. This choice is arbitrary, so rather than use the improvement in accuracy, the authors used the improvement in precision of predictions to rank the measurements.
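As a rough illustration of this perfect-model setup, here is a hedged sketch using random stand-in numbers rather than real model output: one ensemble member plays the role of "truth," the remaining models are weighted by how well they reproduce a given measurement, and each measurement type is scored by how much the weighted spread of the warming projection shrinks.

import numpy as np

rng = np.random.default_rng(0)
n_models, n_measurements = 12, 5

# Each model's projected warming (degC) and its simulated value for
# each measurement type; model 0 acts as the "perfect model."
warming = rng.normal(2.8, 0.7, n_models)
measurements = rng.normal(0.0, 1.0, (n_models, n_measurements))

truth_meas = measurements[0]
others_warming = warming[1:]
others_meas = measurements[1:]

unweighted_sd = others_warming.std()

for m in range(n_measurements):
    # Gaussian likelihood of each model given "observed" measurement m.
    err = others_meas[:, m] - truth_meas[m]
    w = np.exp(-0.5 * err**2)
    w /= w.sum()
    # Weighted spread of the projection after conditioning on m.
    mean = np.sum(w * others_warming)
    sd = np.sqrt(np.sum(w * (others_warming - mean) ** 2))
    print(f"measurement {m}: spread {unweighted_sd:.2f} -> {sd:.2f} degC")

Measurement types that shrink the spread the most would rank highest; the paper's actual calculation is far more elaborate, but this captures the logic of ranking by precision rather than by accuracy against an arbitrary "truth."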
After performing calculations with a total of 32 data types, the authors found that three satellite measurements came out best: total outgoing longwave (low-energy) radiation, radiance at a wavenumber of 995 cm⁻¹ (which falls within an infrared window through which the planet's surface is visible), and dry pressure at 5.5 km. These all ranked above the best-performing land measurement, which wasn't temperature either: it was geopotential height at 500 hPa, the elevation at which that particular atmospheric pressure occurs.
This suggests that remote satellite measurements may improve climate models more than measurements made at the planet's surface. For instance, additional data from the recently cancelled CLARREO climate satellite could improve precision by 53 percent and accuracy by 81 percent, based on the models studied by the team.
One interesting result of the study, beyond the method itself, is that additional surface temperature measurements may not be the best way to improve surface temperature predictions. This seems counterintuitive, but predictive climate models take many other factors into account that help control local surface temperature.
The authors emphasize the limitations of the study, including the relatively small set of models and measurement types used, and suggest that further research be conducted before any decisions are made. Still, the methodology gives climate researchers a way to determine the most useful data to include in new predictive models, which are essential to long-term planning for things like agriculture and development.
PNAS, 2011. DOI: 10.1073/pnas.1107403108
http://arstechnica.com/science/news/2011/06/climatologists-figuring-out-which-data-makes-their-models-better.ars