Canadian modellers ignore 100-year calibration dataset

Environment Canada is apparently leaving 100 years of hard weather measurements out of its climate models, on the grounds that the number of weather stations was supposedly too small. The years 1850-1949 thus remain without calibration, which is convenient for the modellers: they are spared having to reproduce the inconvenient past. What they apparently forget is that poorly calibrated climate models deserve little trust, and their forecasts little confidence. Could it be that robust predictions are not really the goal at all, and that the preferred outcome is a preconceived scenario? Toronto Sun, September 17, 2019:

Feds scrapped 100 years of data on climate change

Canadians already suspicious of Prime Minister Justin Trudeau’s carbon tax are likely to be even more suspicious given a report by Ottawa-based Blacklock’s Reporter that Environment Canada omitted a century’s worth of observed weather data in developing its computer models on the impacts of climate change.

The scrapping of all observed weather data from 1850 to 1949 was necessary, a spokesman for Environment Canada told Blacklock’s Reporter, after researchers concluded that historically, there weren’t enough weather stations to create a reliable data set for that 100-year period.

Read more in the Toronto Sun
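The stated rationale amounts to a coverage threshold: years with too few stations are dropped before calibration. A purely illustrative sketch of such screening (not Environment Canada's actual pipeline; the station counts and threshold are invented):

```python
# Hypothetical sketch: screening a calibration record by station coverage.
# Years, station counts, and the threshold are made-up illustrative values.
years = list(range(1850, 2020))
# Assume station coverage grew roughly linearly over time (purely illustrative).
stations = {y: max(1, (y - 1840) // 2) for y in years}

MIN_STATIONS = 55  # hypothetical reliability threshold

usable = [y for y in years if stations[y] >= MIN_STATIONS]
dropped = [y for y in years if stations[y] < MIN_STATIONS]

print(f"calibration window: {usable[0]}-{usable[-1]}")   # -> 1950-2019
print(f"years discarded:    {len(dropped)} ({dropped[0]}-{dropped[-1]})")  # -> 100 (1850-1949)
```

With these invented numbers, exactly the century 1850-1949 falls below the threshold and is excluded, mirroring the period mentioned in the report.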

——————–

A study by the Pacific Northwest National Laboratory found that scientists sometimes rate the quality of their climate models quite subjectively. Excerpt from a June 2018 press release:

Researchers work toward systematic assessment of climate models

[…] The researchers found, from a survey of 96 participants representing the climate modelling community, that experts took specific scientific objectives into consideration when rating variable importance. They found a high degree of consensus that certain variables are important in certain studies, such as rainfall and evaporation in the assessment of the Amazonian water cycle. That agreement falters on other variables, such as how important it is to accurately simulate surface winds when studying the water cycle in Asia.

Understanding these discrepancies and developing more systematic approaches to model assessment is important, according to Burrows, since each new version of a climate model must undergo significant evaluation and calibration by multiple developers and users. The labor-intensive process can take more than a year.

The tuning, while designed to maintain a rigorous standard, requires experts to make trade-offs between competing priorities. A model may be calibrated at the expense of one scientific objective in order to achieve another. […]
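The trade-off described above can be made concrete with a toy calibration problem: one tuning parameter must serve two competing objectives, and the weighting chosen by the modellers decides which objective wins. All numbers here are invented for illustration:

```python
import numpy as np

# Toy illustration of a calibration trade-off (all numbers invented):
# a single tuning parameter p must serve two competing objectives.
def error_temperature(p):    # misfit to observed temperatures, minimal at p = 0.3
    return (p - 0.3) ** 2

def error_precipitation(p):  # misfit to observed rainfall, minimal at p = 0.7
    return (p - 0.7) ** 2

def calibrate(weight_temp):
    """Pick p minimising a weighted sum of the two misfits."""
    grid = np.linspace(0.0, 1.0, 1001)
    cost = (weight_temp * error_temperature(grid)
            + (1 - weight_temp) * error_precipitation(grid))
    return grid[np.argmin(cost)]

for w in (1.0, 0.5, 0.0):
    print(f"weight on temperature {w:.1f} -> p = {calibrate(w):.2f}")
```

No single value of p is optimal for both objectives: the compromise parameter sits between the two single-objective optima, so calibrating for one goal necessarily degrades the other, exactly the situation the press release describes.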

——————–

How could climate models and climate forecasts be improved? For one thing, the underlying relationships would have to be better understood and the processes quantified without political baggage. For another, the existing measurement data contain valuable patterns that are waiting to be teased out. Columbia University wants to use machine learning for the latter. Press release from June 2018:

Machine Learning May Be a Game-Changer for Climate Prediction

A major challenge in current climate prediction models is how to accurately represent clouds and their atmospheric heating and moistening. This challenge is behind the wide spread in climate prediction. Yet accurate predictions of global warming in response to increased greenhouse gas concentrations are essential for policy-makers (e.g. the Paris climate agreement).

In a paper recently published online by the American Geophysical Union (May 23), researchers led by Pierre Gentine, associate professor of earth and environmental engineering at Columbia Engineering, demonstrate that machine learning techniques can be used to tackle this issue and better represent clouds in coarse-resolution (~100 km) climate models, with the potential to narrow the range of prediction.


“This could be a real game-changer for climate prediction,” says Gentine, lead author of the paper, and a member of the Earth Institute and the Data Science Institute. “We have large uncertainties in our prediction of the response of the Earth’s climate to rising greenhouse gas concentrations. The primary reason is the representation of clouds and how they respond to a change in those gases. Our study shows that machine-learning techniques help us better represent clouds and thus better predict global and regional climate’s response to rising greenhouse gas concentrations.”

The researchers used an idealized setup (an aquaplanet, or a planet without continents) as a proof of concept for their novel approach to convective parameterization based on machine learning. They trained a deep neural network to learn from a simulation that explicitly represents clouds. The machine-learning representation of clouds, which they named the Cloud Brain (CBRAIN), could skillfully predict many of the cloud heating, moistening, and radiative features that are essential to climate simulation.
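In schematic form, the idea of learning a parameterization from explicit-simulation output might look like this. This is a toy sketch, not the actual CBRAIN code: the data, the target function, and the tiny network architecture are all invented for illustration.

```python
import numpy as np

# Minimal sketch: train a small neural network to emulate a relation between
# a coarse atmospheric state and a convective heating tendency.
rng = np.random.default_rng(0)

# Synthetic training set: X = coarse-grid state features, y = heating tendency
# that a (here: fake) cloud-resolving simulation would produce.
X = rng.uniform(-1, 1, size=(2000, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]   # stand-in "truth"

# One-hidden-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(3000):
    H = np.tanh(X @ W1 + b1)            # hidden layer
    pred = (H @ W2 + b2).ravel()        # emulated heating tendency
    err = pred - y
    # Backpropagation of the mean-squared-error gradient
    gW2 = H.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean(err ** 2)
print(f"training MSE: {mse:.4f}")
```

The trained network replaces an expensive explicit simulation with a cheap function call, which is the essential point of a learned parameterization: once fitted, it can be evaluated inside a coarse-resolution model at negligible cost.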

Gentine notes, “Our approach may open up a new possibility for a future of model representation in climate models, which are data driven and are built ‘top-down,’ that is, by learning the salient features of the processes we are trying to represent.”

The researchers also note that, because global temperature sensitivity to CO2 is strongly linked to cloud representation, CBRAIN may also improve estimates of future temperature. They have tested this in fully coupled climate models and have demonstrated very promising results, showing that this could be used to predict greenhouse gas response.

—————————

In May 2018, McGill University conceded that regional climate modelling is still subject to large errors:

New approach to global-warming projections could make regional estimates more precise

Computer models found to overestimate warming rate in some regions, underestimate it in others

A new method for projecting how the temperature will respond to human impacts supports the outlook for substantial global warming throughout this century – but also indicates that, in many regions, warming patterns are likely to vary significantly from those estimated by widely used computer models.

The new method, outlined by McGill University researchers in Geophysical Research Letters, is based on historical temperature increase in response to rising greenhouse gas concentrations and other climate influences. This approach could be used to complement the complex global climate models, filling a need for more reliable climate projections at the regional scale, the researchers say.

“By establishing a historical relationship, the new method effectively models the collective atmospheric response to the huge numbers of interacting forces and structures, ranging from clouds to weather systems to ocean currents,” says Shaun Lovejoy, a McGill physics professor and senior author of the study.
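The general idea of such a historical approach can be sketched in a few lines: regress the observed temperature response on the forcing record, then extrapolate. This is a hedged toy example, not Hébert and Lovejoy's actual method; all observation values below are invented, and only the logarithmic forcing expression is a standard simplified formula.

```python
import numpy as np

# Invented illustrative record (~decadal values): CO2 concentration (ppm)
# and global temperature anomaly (deg C).
co2 = np.array([290, 300, 311, 317, 326, 339, 354, 369, 390, 410])
temp_anom = np.array([-0.2, -0.25, -0.1, -0.05, 0.0,
                      0.15, 0.3, 0.45, 0.65, 0.85])

# Radiative forcing scales with the log of CO2 relative to a pre-industrial
# baseline (standard simplified expression, in W/m^2).
forcing = 5.35 * np.log(co2 / 280.0)

# Linear "historical sensitivity": temperature change per unit forcing.
sens, intercept = np.polyfit(forcing, temp_anom, 1)
print(f"fitted sensitivity: {sens:.2f} deg C per W/m^2")

# Extrapolate to a hypothetical doubling of CO2 (560 ppm).
f2x = 5.35 * np.log(2.0)
print(f"projected anomaly at 2xCO2: {sens * f2x + intercept:.2f} deg C")
```

The appeal of the approach is visible even in this caricature: the fitted relationship implicitly contains the collective response of every interacting process that shaped the observed record, without any of them being modelled explicitly.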

“Our approach vindicates the conclusion of the Intergovernmental Panel on Climate Change (IPCC) that drastic reductions in greenhouse gas emissions are needed in order to avoid catastrophic warming,” he adds. “But it also brings some important nuances, and underscores a need to develop historical methods for regional climate projections in order to evaluate climate-change impacts and inform policy.”

In particular, the new approach suggests that for over 39% of the globe, the computer models significantly overestimate or underestimate the pace of warming, according to Lovejoy and his co-author, PhD student Raphaël Hébert (now at the Alfred-Wegener-Institut in Potsdam).

These areas of significant difference are indicated by the x’s in the map above. For example, the map shows that the IPCC projections are expected to be too warm (red x’s) over vast parts of the Pacific Ocean, the North Atlantic Ocean, and the Indian Ocean, while the opposite is true for the South Atlantic Ocean and the part of the Indian Ocean south of Australia.  In contrast, the projected warming is expected to be underestimated (blue) over northwestern Canada and central Asia — and a seasonal analysis reveals that this is primarily due to an underestimate of the warming in winter months.  (Dark red indicates 3-degree Celsius model overestimate, dark blue 3-degree underestimate if CO2 is doubled).

“Global climate models are important research tools, but their regional projections are not yet reliable enough to be taken at face value,” Hébert and Lovejoy assert. “Historical methods for regional climate projections should be developed in parallel to traditional global climate models. An exciting possibility for further improvements will be the development of hybrid methods that combine the strengths of both the historical and traditional approaches.”

Paper: “Regional Climate Sensitivity and Historical Based Projections to 2100,” by R. Hébert and S. Lovejoy, is published in Geophysical Research Letters.
