How models got a complex storm like Sandy mostly right
- By William Jackson
- Oct 30, 2012
Weather forecasting, including the prediction of a hurricane’s track, remains an inexact science, but the use of increasingly sophisticated computer modeling and powerful computers helps the National Oceanic and Atmospheric Administration predict storms such as Hurricane/Superstorm Sandy.
As early as the morning of Oct. 26, predictions of a number of computer models had coalesced on a track that showed the huge storm making landfall on the Mid-Atlantic coast either late on Oct. 29 or early on the 30th. The prediction was off by a few hours (Sandy made landfall on the New Jersey shore about 6 p.m. on Monday, the 29th), which was not bad considering that the storm was unusual, if not unique, in its size, strength and the complexity of the patterns that formed it.
No two storms are alike, and the complexity of any weather system makes long-term predictions difficult, so NOAA’s National Hurricane Center in Miami uses a variety of computer models that combine historical information, the physics of the atmosphere and the best information on current and future conditions to predict what the storms will do.
The accuracy of the predictions depends on the detail and accuracy of both the historical data and current conditions and also on the validity of the underlying assumptions of each particular model. That is why a variety of computer models typically are used to produce a consensus forecast.
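The consensus idea can be sketched in a few lines of code. The example below is illustrative only: the model names and coordinates are hypothetical, and a real consensus such as NHC's involves model selection and weighting, not a plain average.

```python
# A minimal sketch of a consensus track forecast: average the positions
# predicted by several models for the same forecast hour.
# Model names and coordinates are hypothetical, not real NHC output.

def consensus_position(predictions):
    """Average a collection of (lat, lon) predictions from multiple models."""
    lats = [lat for lat, lon in predictions]
    lons = [lon for lat, lon in predictions]
    n = len(lats)
    return (sum(lats) / n, sum(lons) / n)

# Hypothetical 72-hour position forecasts from three models
tracks_72h = {
    "model_a": (38.9, -74.1),
    "model_b": (39.4, -74.6),
    "model_c": (39.1, -74.3),
}

lat, lon = consensus_position(tracks_72h.values())
print(f"consensus 72h position: {lat:.2f}N, {abs(lon):.2f}W")
```

Even this naive average tends to cancel out the individual models' errors, which is why consensus forecasts usually beat any single model.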
Ultimately, storm prediction is also a matter of human judgment. Some scientists prefer different models under different conditions, but because none is exact, someone has to choose. Predictions are also updated constantly as new observations arrive.
The Hurricane Center relies on workhorse weather models such as the Global Forecast System to issue official hurricane track forecasts, but new tools are constantly being added to the kit. NOAA researchers used Hurricane Irene, which struck the Atlantic Coast in August 2011 and caused widespread damage, as a real-world workout for a next-generation forecast model called the Flow-following Finite-volume Icosahedral Model, also known as the FIM Global Model, which successfully predicted Irene's track three days out.
The National Hurricane Center uses no fewer than 37 different models and permutations of models to predict the track and intensity of storms, some of which are averaged together to produce consensus results. Some produce near-term forecasts and others longer-range predictions; some predict only intensity or only track, and some do both.
The models vary significantly in structure and complexity, ranging from those simple enough to run in a few seconds on an ordinary computer to those requiring hours of time on a supercomputer, according to NOAA. Dynamical or numerical models are complex and use high-speed computers to solve physical equations. Statistical models are simpler, using historical data on storm behavior rather than atmospheric physics. The most complex are statistical-dynamical models that combine both techniques.
Complex dynamical models often take too long to run to be used for some forecasts (it can take six hours to produce a five-hour forecast, for example) but there are techniques for including information from such “late” models to produce forecasts. This is one reason results from several models are used to produce forecasts.
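One common way to use a "late" model is to take its previous run, shift that forecast forward in time by the model's lag, and offset the track so it starts from the storm's current observed position. The sketch below illustrates that idea under simplifying assumptions; the track values and six-hour lag are hypothetical, and operational interpolation schemes are more elaborate.

```python
# Sketch of including a "late" model run in a current forecast:
# shift the previous run's track forward by the lag, then offset it
# so hour 0 matches the storm's currently observed position.
# All numbers here are illustrative, not real model output.

def interpolate_track(old_track, lag_hours, current_pos):
    """old_track: {forecast_hour: (lat, lon)} from the previous model run."""
    # Re-label each forecast hour relative to the new forecast cycle
    shifted = {h - lag_hours: pos for h, pos in old_track.items()
               if h - lag_hours >= 0}
    # Offset the whole track so hour 0 matches the observed position
    base_lat, base_lon = shifted[0]
    obs_lat, obs_lon = current_pos
    dlat, dlon = obs_lat - base_lat, obs_lon - base_lon
    return {h: (lat + dlat, lon + dlon) for h, (lat, lon) in shifted.items()}

# Previous run (6 hours old) with a hypothetical track
old_run = {0: (35.0, -75.0), 6: (36.0, -74.5),
           12: (37.0, -74.0), 18: (38.0, -73.5)}
adjusted = interpolate_track(old_run, lag_hours=6, current_pos=(36.2, -74.4))
print(adjusted)
```

The adjusted track is valid for the current forecast cycle even though the model run it came from finished too late to be rerun, which is how slow dynamical models can still contribute to a timely consensus.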
The computer models are available for anyone to use, but NOAA expertise adds value to the products. The agency says that forecasts from the National Hurricane Center usually have smaller errors than the results of any individual model. There is uncertainty in any forecast, but “users should consult the official forecast products issued by NHC and local National Weather Service Forecast Offices rather than simply looking at output from the forecast models themselves,” NOAA advises.
William Jackson is a freelance writer and the author of the CyberEye blog.