Estimating Tropical Cyclone Intensity in Passive Microwave Images Using Deep Learning Models
In Summer 2018, I interned at NASA Marshall Space Flight Center, in the Earth Science Office. While there, I worked with computer scientists who wanted to improve machine learning algorithms aimed at classifying hurricane intensities.
It can be very challenging to estimate the wind speed of a hurricane in real time, despite vast improvements over the last forty years in the forecasting tools we use. The most common techniques suffer from human error, time constraints, and complexity. The goal of incorporating machine learning is to lessen those challenges with automated processes that are timely and can be applied to hurricanes in any basin across the globe.
In this study, we used deep learning models: we input an image of a hurricane, which then passes through a number of layers in the model. With each layer, the model learns more and more about the image. Imagine a 5×8 photo of you and your pet. Suppose you want the model to classify what type of pet is in the photo. You might teach the model about tail length, pointed ears, fur or no fur, average heights of various pets, and so on. Then, instead of looking at the whole photo at once, the model zooms in and focuses on particular features until it recognizes something it was trained for. One by one, those features are detected as the model scans across small areas of the photo. By combining what it finds, the model can output the percentage chance that your pet is a rabbit, dog, cat, or something unknown.
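To make that "zooming in" concrete, here is a minimal sketch of the convolution operation at the heart of these models: a small kernel slides across the image, and each output value summarizes one small patch. The image and kernel values below are made-up toy data, not anything from the study.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel across the image, producing a feature map.

    Each output value summarizes one small patch of the image, which is
    how a convolutional layer "zooms in" on local features.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A hypothetical vertical-edge detector kernel
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

# Toy 6x6 "image": bright left half, dark right half
image = np.zeros((6, 6))
image[:, :3] = 1.0

feature_map = convolve2d(image, kernel)
print(feature_map.shape)  # (4, 4) -- the kernel responds strongly at the edge
```

In a real deep learning model, many such kernels are learned from the training data rather than hand-written, and their feature maps are stacked and passed through further layers.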
As you can imagine, this process is not so simple with hurricanes, which are made up of filamented clouds. Making it harder, the images available from satellites are fairly low resolution, and data exist only for the few times a given satellite passed over a cyclone. Specifically, we used 85 GHz passive microwave brightness temperatures made available via the Naval Research Laboratory dataset (1997–2016).
We chose passive microwave images, rather than infrared images, expecting that the hurricane structures would be more visible. This is because microwaves scatter off of water particles, giving us insight into the finer details of a hurricane. Remember that machine learning models really benefit from the features in an image! Infrared is heat sensitive, so it reveals cloud-top temperatures rather than the actual particles making up those clouds. The figure below comes from the University of Wisconsin-Madison and shows a side-by-side comparison of microwave (left) and infrared (right) images of Hurricane Florence (2018). While the eye of the hurricane in this example is shown partially open in the microwave image, that detail is missed in the infrared image.
This research was limited to Category 1 hurricanes and below, as previous research showed that models faltered when categorizing lower intensity storms. All data were in the Northern Atlantic and Pacific oceans.
We were able to reduce the root mean squared error (the value used to measure how accurate the model's classification was) to below 15 knots (8 meters per second) when using neural networks. This value, though, was 4 knots (2 meters per second) higher than the root mean squared error found when using infrared images. In particular, we found that model accuracy was highest for images from the higher resolution satellite sensors (AMSR, GMI, and TMI) compared with the lower resolution sensors (METOP and SSMI).
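For readers unfamiliar with root mean squared error: it is simply the square root of the average squared difference between predicted and observed values. A quick sketch, using hypothetical wind speeds rather than the study's actual data:

```python
import numpy as np

def rmse(predicted, observed):
    """Root mean squared error: sqrt of the mean squared difference."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((predicted - observed) ** 2))

# Hypothetical wind speeds in knots (illustrative only)
observed = [35, 50, 64, 45]
predicted = [40, 45, 60, 50]
print(round(rmse(predicted, observed), 2))  # 4.77
```

Because the errors are squared before averaging, a few large misses inflate RMSE more than many small ones, which is why it is a common yardstick for intensity estimates.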
We additionally tested linear regression models and were able to decrease root mean squared errors below 10 knots (5 meters per second). The greatest difference between the neural network and the linear regression models is that neural networks require “classes”: every possible wind speed was treated as a separate output category. Once image data went through the model, the output was a percentage probability that the image represented each candidate wind speed value. In contrast, linear regression models assign the image a wind speed based on “best fit” rather than weighting the possibilities with probabilities.
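The two output styles can be sketched side by side. All of the numbers below are hypothetical, chosen only to illustrate the difference between a probability over discrete classes and a single best-fit value:

```python
import numpy as np

# Classification: the network produces a score per wind-speed class,
# and softmax turns those scores into a probability for each class.
classes = np.array([30, 35, 40])        # hypothetical wind-speed classes (knots)
scores = np.array([1.0, 2.0, 0.5])      # hypothetical raw network outputs
probs = np.exp(scores) / np.sum(np.exp(scores))
best_class = classes[np.argmax(probs)]  # the single most probable wind speed

# Regression: fit a line to (feature, wind speed) pairs, then read off one
# "best fit" wind speed for a new feature value -- no probabilities involved.
features = np.array([0.1, 0.4, 0.7, 1.0])    # hypothetical image feature
speeds = np.array([30.0, 40.0, 50.0, 60.0])  # hypothetical wind speeds
slope, intercept = np.polyfit(features, speeds, 1)
predicted = slope * 0.55 + intercept         # single wind-speed estimate
```

The classifier's probabilities must be spread across a fixed list of classes, while the regression line can output any value in between, which hints at why the two approaches behave differently on fine-grained intensity estimates.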
What this suggested for our work was that while neural network classification may be a very useful tool for distinctive details such as furry rabbit tails or floppy dog ears, we may be farther away from using machine learning to distinguish whether a hurricane’s intensity is 63 or 64 knots. For a look at a group finding success with neural networks applied to 89 GHz passive microwave images of hurricanes (from high resolution sensors), check out Wimmers et al. (2019).
This research was presented at the AMS Annual Meeting in Phoenix, AZ in January 2019.
Fox, K. Ryder, Iksha Gurung, J.J. Miller, Manil Maskey, and Andrew Molthan, 2018: Applying Deep Convolutional Neural Networks to the Estimation of Tropical Cyclone Intensity within Passive Microwave Images. Summer Poster Expo, NASA Marshall Space Flight Center, Huntsville, AL.