Artificial intelligence (AI) algorithms trained on real astronomical observations now outperform astronomers at sifting through massive amounts of data to find new exploding stars, identify new types of galaxies and detect mergers of massive stars, accelerating the pace of discovery in the world’s oldest science.
But AI, also known as machine learning, can reveal something deeper, University of California, Berkeley, astronomers have found: unsuspected connections hidden in the complex mathematics arising from general relativity – in particular, in how that theory is applied to the search for new planets around other stars.
In an article published this week in the journal Nature Astronomy, researchers describe how an AI algorithm developed to more quickly detect exoplanets – when such planetary systems pass in front of a background star and briefly brighten it, a process called gravitational microlensing – revealed that decades-old theories now used to explain these observations are woefully incomplete.
In 1936, Albert Einstein himself used his new theory of general relativity to show how light from a distant star can be bent by the gravity of a foreground star, not only brightening it as seen from Earth, but often splitting it into several points of light or distorting it into a ring, now called an Einstein ring. This is similar to the way a magnifying glass can focus and intensify sunlight.
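For reference, the angular size of that ring follows from standard lensing theory (this formula is not given in the article): for a lens of mass $M$ at distance $D_L$ from Earth magnifying a source at distance $D_S$, the Einstein radius is

```latex
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_S - D_L}{D_L\,D_S}}
```

The closer the alignment of source, lens and observer, the nearer the images sit to this ring and the stronger the magnification.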
But when the foreground object is a star with a planet, the brightening over time – the light curve – is more complicated. Moreover, there are often several planetary orbits which can equally well explain a given light curve – so-called degeneracies. This is where humans have simplified the math and missed the big picture.
The AI algorithm, however, pointed to a mathematical way to unify the two main kinds of degeneracy in interpreting what telescopes detect during microlensing, showing that the two “theories” are really special cases of a broader theory that, the researchers admit, is likely still incomplete.
“A machine learning inference algorithm we previously developed led us to discover something new and fundamental about the equations that govern the general relativistic effect of light bending by two massive bodies,” Joshua Bloom wrote in a blog post last year when he uploaded the article to the preprint server arXiv. Bloom is a professor of astronomy at UC Berkeley and chair of the department.
He compared the discovery by UC Berkeley graduate student Keming Zhang to connections that Google’s artificial intelligence team, DeepMind, recently made between two different areas of mathematics. Taken together, these examples show that AI systems can reveal fundamental associations that humans miss.
“I argue that they constitute one of the first times, if not the first, that AI has been used to directly yield new theoretical insights in mathematics and astronomy,” Bloom said. “Just as Steve Jobs suggested that computers could be the bicycles of the mind, we sought an AI framework to serve as an intellectual rocket for scientists.”
“This is sort of a milestone in AI and machine learning,” said co-author Scott Gaudi, professor of astronomy at Ohio State University and one of the pioneers of using gravitational microlensing to discover exoplanets. “Keming’s machine learning algorithm revealed this degeneracy that had been missed by experts in the field who had been working with data for decades. This suggests how research will unfold in the future when aided by machine learning, which is really exciting.”
Discovering exoplanets with microlensing
More than 5,000 exoplanets, or extrasolar planets, have been discovered around stars in the Milky Way, though few have been seen directly through a telescope – they are too dim. Most were detected because they produce a Doppler wobble in the motion of their host star or because they slightly dim the host star’s light when crossing in front of it – the transits that were the focus of NASA’s Kepler mission. Just over 100 have been discovered by a third technique, microlensing.
One of the primary goals of NASA’s Nancy Grace Roman Space Telescope, slated for launch by 2027, is to discover thousands more exoplanets via microlensing. The technique has an advantage over the Doppler and transit techniques in that it can detect lower-mass planets, including Earth-sized ones, that orbit far from their stars, at distances comparable to those of Jupiter or Saturn in our solar system.
Bloom, Zhang and their colleagues set out two years ago to develop an AI algorithm to more quickly analyze microlensing data and determine the stellar and planetary masses of these planetary systems and the distances between the planets and their stars. Such an algorithm would speed up the analysis of the hundreds of thousands of candidate events the Roman telescope will detect, in order to find the 1% or less caused by exoplanetary systems.
One problem astronomers encounter, however, is that the observed signal can be ambiguous. When a lone foreground star passes in front of a background star, the background star’s brightness smoothly rises to a peak and then falls symmetrically back to its original level. It’s easy to understand mathematically and observationally.
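That symmetric single-lens light curve can be sketched in a few lines of Python using the standard textbook formula (the Paczyński curve); this is illustrative, not code from the paper, and the parameter values are made up:

```python
import numpy as np

def paczynski_magnification(t, t0=0.0, tE=20.0, u0=0.1):
    """Magnification of a point source lensed by a single star.

    t  : time (days)
    t0 : time of closest lens-source approach
    tE : Einstein-radius crossing time (days)
    u0 : minimum impact parameter, in units of the Einstein radius
    """
    # Projected lens-source separation in Einstein radii at time t.
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    # Standard point-source, point-lens magnification formula.
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Sample the light curve: it rises smoothly, peaks at t0, and
# falls back symmetrically to the unlensed brightness (A -> 1).
t = np.linspace(-50.0, 50.0, 101)
A = paczynski_magnification(t)
```

The magnification depends only on u, the projected lens–source separation in units of the Einstein radius, which is why a single lens always yields the smooth, symmetric bump described above.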
But if the foreground star has a planet, the planet creates a separate blip of brightness within the peak caused by the star. When trying to reconstruct the orbital configuration of the exoplanet that produced the signal, general relativity often allows two or more so-called degenerate solutions, all of which can explain the observations.
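The degeneracies arise because the two-body lens equation is nonlinear. In the standard complex-coordinate formulation (textbook lensing theory, not reproduced from the article), a source at position $\zeta$ maps to image positions $z$ satisfying

```latex
\zeta = z - \frac{m_1}{\bar{z} - \bar{z}_1} - \frac{m_2}{\bar{z} - \bar{z}_2},
```

where $m_1$ and $m_2$ are the fractional masses of the star and planet at positions $z_1$ and $z_2$. Several distinct parameter sets can produce nearly identical light curves.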
To date, astronomers have generally dealt with these degeneracies in simplistic and artificially distinct ways, Gaudi said. If the distant starlight passes close to the foreground star, the observations can be interpreted as implying either a wide or a close orbit for the planet – an ambiguity astronomers can often resolve with other data. A second type of degeneracy occurs when the background starlight passes close to the planet. In that case, however, the two different solutions for the planetary orbit are usually only slightly different.
According to Gaudi, these two simplifications of two-body gravitational microlensing are usually sufficient to determine the true masses and orbital distances. In fact, in a paper published last year, Zhang, Bloom, Gaudi and two other UC Berkeley co-authors, astronomy professor Jessica Lu and graduate student Casey Lam, described a new AI algorithm that does not rely at all on knowledge of these interpretations. The algorithm dramatically speeds up the analysis of microlensing observations, delivering results in milliseconds rather than days and drastically reducing the computational grind.
Zhang then tested the new AI algorithm on microlensing light curves from hundreds of possible orbital configurations of stars and exoplanets and noticed something unusual: there were additional ambiguities that the two standard interpretations did not account for. He concluded that the commonly used interpretations of microlensing were, in fact, just special cases of a broader theory that explains the full variety of ambiguities in microlensing events.
“The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet,” Zhang said. “The AI algorithm showed us hundreds of examples not only of these two cases, but also of situations where the background star doesn’t pass close to either the star or the planet and cannot be explained by either previous theory. That was key to our proposing the new unifying theory.”
Gaudi was skeptical at first but came around after Zhang produced numerous examples in which the previous two theories did not fit the observations and the new theory did. Zhang actually looked at the data from two dozen previous papers reporting the discovery of exoplanets through microlensing and found that, in every case, the new theory fit the data better than the previous theories.
“People were seeing these microlensing events, which were actually exhibiting this new degeneracy but just didn’t realize it,” Gaudi said. “It was really just machine learning looking at thousands of events where it became impossible to miss.”
Zhang and Gaudi have submitted a new paper that rigorously outlines new mathematics based on general relativity and explores the theory in microlensing situations where more than one exoplanet orbits a star.
The new theory technically makes the interpretation of microlensing observations more ambiguous, as there are more degenerate solutions to describe the observations. But the theory also clearly demonstrates that viewing the same microlensing event from two angles — from Earth and from the orbit of the Roman Space Telescope, for example — will make it easier to determine the correct orbits and masses. That’s what astronomers are currently planning to do, Gaudi said.
“The AI suggested a way to look at the lens equation in a new light and find out something really deep about the math of it,” Bloom said. “AI is emerging not just as this kind of blunt tool that’s in our toolbox, but as something that’s actually quite intelligent. Alongside an expert like Keming, the two were able to do something quite fundamental.”