
Do Androids Dream of Magnetic Fields? Using Neural Networks to Interpret the Turbulent Interstellar Medium


arXiv Link:

Many papers on deep learning attempt to show that a new method or architecture is generically more effective than an existing one, or they apply an existing architecture to a new problem to get better results than had been possible before. In this work, we try something different: we use neural networks to prove the existence, and find the location, of information in certain images.

Orion Nebula, from Astronomy Picture of the Day, by M. Robberto. Nebulae are the results of stars that have died, as well as the birthplaces of new stars.

The interstellar medium (ISM) is where all material goes after a star dies, and it is the reservoir from which all stars are born. Understanding it helps us understand the origin of the stars, the planets, and ourselves.

It’s also a violently turbulent, magnetized mess, dramatically unlike most fluids we can study here on Earth. To understand it, we need diagnostic tools that capture the underpinning physics when applied to images of the ISM. An image is a complicated thing, and we need a new tool to pull the relevant information out of it to start to understand how magnetized, turbulent, and hot our ISM really is.

One of the simulations of magnetized turbulence we use. The network is able to pick up on the subtle structure of ridges in the data to distinguish between strongly magnetized and weakly magnetized turbulence.

To do this information extraction, we have often looked at the Fourier power spectrum (or Fourier spectrum, or simply power spectrum). Through a mathematical process known as the Fourier transform, we can convert the image to a set of “frequencies”: high frequencies for small structures, low frequencies for big structures. At each frequency, there is an intensity and a phase; the intensity represents how strong the signal is, and the phase represents where it is shifted across the image. To make the power spectrum, we use only that intensity information. This can boil down some very important aspects of the data to something that is much easier to measure than a complicated image.
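This boiling-down step is easy to sketch in code. Below is a minimal NumPy version of an azimuthally averaged power spectrum: take the 2D Fourier transform, keep only the squared amplitudes (throwing the phases away), and average them in rings of constant spatial frequency. The function name and binning choices are mine for illustration, not the paper's actual pipeline.

```python
import numpy as np

def power_spectrum_1d(image, n_bins=64):
    """Azimuthally averaged Fourier power spectrum of a 2D image."""
    fft = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(fft) ** 2                      # intensity only; phase is discarded

    # Radial spatial-frequency coordinate of every pixel
    ny, nx = image.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny))
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    kr = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2).ravel()

    # Average the power in radial (azimuthal) bins
    edges = np.linspace(0.0, kr.max(), n_bins + 1)
    which = np.digitize(kr, edges)                # bin index 1..n_bins per pixel
    counts = np.bincount(which, minlength=n_bins + 2)[1:n_bins + 1]
    sums = np.bincount(which, weights=power.ravel(),
                       minlength=n_bins + 2)[1:n_bins + 1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, sums / np.maximum(counts, 1)
```

A 128 x 128 image collapses to a single curve of 64 numbers, which is exactly why the power spectrum is so convenient to compare against theory.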

With this boiling-down process, we can also use relatively elegant pencil-and-paper theory to predict how much Fourier power we expect to exist across the range of sizes of turbulent eddies. At the largest scales, for example, turbulence can be injected by the wave action of spiral arms in the Galaxy, and at small scales, turbulence may be dissipated as heat.

The problem is that this beautiful cascade is actually very poor in information: different combinations of magnetic field strength, gas temperature, and turbulence intensity can yield the same Fourier power spectrum. We set out to investigate whether the Fourier phase, the other half of the Fourier transform, might contain much more information about the shapes in the images. We found that neural networks can indeed help track down this information.

Our network architecture, a very common convolutional neural network setup.

We took two simulations of turbulent magnetized plasmas with different levels of magnetization and extracted 128 x 128 images from the data cubes. We used a fairly standard convolutional neural network with seven convolutional layers (interspersed with three max-pooling and dropout layers) to distinguish images from the two simulations with 98% accuracy. We then did the same thing with doctored images, in which the Fourier power (intensity) had been artificially fixed to 1 to remove that information. While the network took longer to train and was more erratic during training, it still reached 98% accuracy, even with no power-spectral information at all. This proves that a huge amount of information exists in these images beyond the power spectrum, i.e., in the phase information.
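The doctoring step is simple to express: transform the image, replace every Fourier amplitude with 1 while keeping the phases, and transform back. Here is a minimal NumPy sketch of that idea (the function name is mine; the paper's exact preprocessing may differ in details such as normalization):

```python
import numpy as np

def phase_only(image):
    """Return an image whose Fourier amplitudes are all fixed to 1,
    so only the phase information survives."""
    fft = np.fft.fft2(image)
    unit = fft / np.maximum(np.abs(fft), 1e-12)   # unit amplitude, original phase
    return np.real(np.fft.ifft2(unit))
```

Any network that still classifies these flattened images correctly must be reading the phases, because by construction their power spectrum carries no information at all.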

Most interestingly, we ran “saliency maps” on these networks, which are designed to find which parts of the image are most important. In the above picture, the red contours highlight the most “salient” parts of the images, the pixels the network is most interested in. By visual inspection, we found that the network is picking up on the presence or absence of thin ridges of plasma, consistent with the idea that in the “sub-Alfvénic” case, shocks can more easily bunch up the material.
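The basic gradient-saliency recipe is short: push an image through the network, then ask how much the winning class score changes when each input pixel changes. A minimal PyTorch sketch follows; the tiny stand-in classifier is purely hypothetical (the paper's seven-layer CNN is much larger), and this is vanilla gradient saliency rather than the exact saliency method used in the paper.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier (hypothetical; just enough to demonstrate the idea)
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)

def saliency_map(model, image):
    """|d(top-class score)/d(pixel)|: how strongly each pixel sways the decision."""
    model.eval()
    x = image.clone().requires_grad_(True)
    scores = model(x)                        # shape (1, 2): one score per class
    scores[0, scores.argmax()].backward()    # gradient of the winning class score
    return x.grad.abs()[0, 0]                # one saliency value per input pixel

sal = saliency_map(model, torch.randn(1, 1, 128, 128))
```

Contouring the largest values of such a map is what produces the red highlights described above: the pixels whose perturbation would most change the network's verdict.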

What makes this especially interesting is that we think it provides a simple way forward for understanding where the information lies in many astronomical images. For instance, take a series of galaxy images with some additional physical information, for example the metallicity measured from spectra. Sure, we could build a network to predict the metallicity from the images; it has been done before. But perhaps more interestingly, we could figure out which parts of the images contain the information about what drives metallicity evolution, and thus gain much deeper insight into the physical processes that underpin it.
