Mon. Dec 23rd, 2024
Machine Learning Can Enable Climate Model Simulations to Be Run Faster and at Finer Resolution

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

Fact-checked

Trusted source

Proofread


New downscaling techniques used in climate models leverage machine learning to improve resolution at finer scales. By making these simulations locally relevant, policymakers can better access information to inform climate action. Credit: Massachusetts Institute of Technology

Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can predict what will happen next, such as rising sea levels, flooding, and increased temperatures, and determine how to respond appropriately. However, current climate models struggle to provide this information quickly and cheaply enough to be useful at smaller scales, such as the size of a city.

Now, in a new paper published in the Journal of Advances in Modeling Earth Systems, researchers have found a way to leverage machine learning to harness the benefits of current climate models while reducing the computational cost of running them.

“This is a game changer,” says Sai Ravela, principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), who co-authored the paper with EAPS postdoc Anamitra Saha.

Traditional wisdom

In climate modeling, downscaling is the process of using a lower-resolution global climate model to generate detail for a smaller area. Think of a digital image: the global model is a large image of the world with fewer pixels. To downscale, you enlarge only the part of the photo you want to see (Boston, for example). But because the original image had a lower resolution, the new version is blurry and doesn’t have enough detail to be particularly useful.
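The photo analogy above can be made concrete with a toy example. The sketch below (illustrative only; the grid values and sizes are made up) treats a coarse "global model" field as a small array and enlarges it by nearest-neighbor replication: the zoomed result has more pixels but no new information, which is exactly the blurriness the analogy describes.

```python
import numpy as np

# Hypothetical illustration: a "global model" as a coarse 4x4 grid of
# temperature values, zoomed to 16x16 by nearest-neighbor replication.
rng = np.random.default_rng(0)
coarse = rng.normal(15.0, 2.0, size=(4, 4))  # coarse global field (made-up units)

# np.kron repeats each coarse cell into a 4x4 block of identical values,
# so the "zoomed" image is blocky: more pixels, zero added detail.
zoomed = np.kron(coarse, np.ones((4, 4)))

print(coarse.shape, zoomed.shape)  # (4, 4) (16, 16)
# Every fine pixel is an exact copy of its coarse parent cell.
print(bool(np.allclose(zoomed[0:4, 0:4], coarse[0, 0])))  # True
```

Downscaling is the harder inverse task: filling those repeated blocks with plausible fine-scale detail rather than copies.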

“When you go from a coarse to a fine resolution, you need to add information somehow,” Saha explains. Downscaling seeks to add that information back in by filling in the missing pixels. “Adding information can be done in two ways: either from theory or from the data.”

Traditional downscaling often uses models based on physics (capturing processes such as the rising, cooling, and condensation of air, along with the local topography) and supplements them with statistics drawn from past observations. But this approach is computationally intensive: it requires a lot of time and computing power to run, which makes it expensive.

A little bit of both

In the new paper, Saha and Ravela came up with a different way to add the information. They employ a machine learning technique called adversarial learning, which pits two models against each other: one generates the data to fill in the image; the other judges those samples by comparing them with real data. If the second model decides a generated image is fake, the first has to try again, over and over, until its output can pass as real. The end goal of this process is to produce super-resolution data.
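The generator-versus-judge loop can be sketched in a few lines. The toy below is not the authors' model: it is a minimal one-dimensional adversarial setup (all names, losses, and hyperparameters are illustrative) in which a generator learns to mimic "real" data drawn from a Gaussian with mean 3, while a logistic discriminator learns to tell real samples from generated ones.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(42)

a, b = 1.0, 0.0   # generator: fake = a * z + b, with noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator (judge): D(x) = sigmoid(w * x + c)
lr, batch = 0.05, 128

for step in range(4000):
    z = rng.normal(size=batch)
    real = rng.normal(3.0, 1.0, size=batch)   # "real" data: N(3, 1)
    fake = a * z + b

    # Judge update (gradient ascent): push D(real) toward 1, D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator update (gradient descent on -log D(fake)): nudge its
    # samples toward whatever the judge currently accepts as real.
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

print(round(b, 2))  # the generator's mean drifts toward the real mean of 3
```

In the paper's setting the "images" are high-resolution climate fields rather than scalars, but the adversarial structure, a generator repeatedly retrying until it fools a judge trained on real data, is the same idea.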

The use of machine learning techniques like adversarial learning is not a new idea in climate modeling. The difficulty is that such models, on their own, cannot easily respect a lot of the underlying physics, such as conservation laws. The researchers found that simplifying the physics and supplementing it with statistics from historical data was enough to produce the results they needed.

“When you augment machine learning with information from both statistics and simplified physics, suddenly something magical happens,” Ravela says.

He and Saha started by estimating extreme rainfall rates by stripping away the more complicated physics equations and focusing instead on water vapor and land topography. They then generated general rainfall patterns for both mountainous Denver and flat Chicago, applying historical records to refine their output.
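The general shape of that recipe, a crude physical prior refined by historical statistics, can be illustrated with a hypothetical sketch. None of the formulas below come from the paper: the "physics" here is just the heuristic that moist air forced upward by terrain tends to rain out, and the "statistics" is a simple rescaling so the prior's mean and spread match a synthetic historical record.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Simplified-physics ingredients (illustrative units throughout):
vapor = rng.uniform(5.0, 25.0, n)          # column water vapor along a transect
elevation = np.linspace(0.0, 2000.0, n)    # terrain rising toward a ridge, meters
slope = np.gradient(elevation)             # orographic uplift proxy

# Crude physical prior: rainfall proportional to vapor times uplift.
prior = vapor * slope

# Synthetic "historical observations" with a different scale and offset.
obs = 2.0 * prior + rng.normal(0.0, 5.0, n) + 10.0

# Statistical step: variance-scaling bias correction so the prior's
# mean and standard deviation match the historical record.
corrected = (prior - prior.mean()) / prior.std() * obs.std() + obs.mean()

# The corrected field now shares the observations' statistics
# (up to floating-point precision).
print(abs(corrected.mean() - obs.mean()) < 1e-8)
print(abs(corrected.std() - obs.std()) < 1e-8)
```

The paper's actual machinery is far richer (adversarial training produces whole high-resolution fields, not a rescaled transect), but the division of labor is the same: cheap physics supplies the structure, and historical data pins down the magnitudes.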

“It offers extremes like physics does, at a much lower cost. And it offers speed like statistics does, but at much higher resolution,” Ravela continues.

Another unexpected benefit of the approach is that it required very little training data. “The fact that just a little bit of physics and just a little bit of statistics was enough to improve the performance of the ML [machine learning] model was actually not obvious from the beginning,” Saha says. The model takes just a few hours to train and can deliver results in minutes, an improvement over other models that can take months to run.

Rapidly quantify risk

Being able to run models quickly and frequently is a key requirement for stakeholders like insurers and local policymakers. Ravela cites the example of Bangladesh: knowing how extreme weather will affect the country allows decisions about which crops to grow, or where populations should relocate, to be made as quickly as possible, taking into account a very wide range of conditions and uncertainties.

“We can’t wait months or years to quantify this risk,” he says. “We need to look far into the future, and across a lot of uncertainty, to determine what the good decisions are.”

While the current model only looks at extreme precipitation, the next step in the project is to train it on other important events, such as tropical storms, wind, and temperature. With a more robust model, Ravela hopes to apply it to other locations, such as Boston and Puerto Rico, as part of MIT’s Climate Grand Challenges project.

“We’re very excited about both the methodology we’ve put together and the applications it could potentially have,” he says.

For more information:
Anamitra Saha et al., “Statistical-Physical Adversarial Learning From Data and Models for Downscaling Rainfall Extremes,” Journal of Advances in Modeling Earth Systems (2024). Posted October 29, 2023.