42? How computers could answer three important questions about the universe



Observatories are increasingly using supercomputing, cloud computing, and deep learning to analyze the growing flood of space science data. Here are a few examples of how these technologies are changing the way astronomers explore space.

As a postdoctoral fellow in the United States, astrophysicist Eliu Huerta began to think about how technology could contribute to new breakthroughs in his field. In 2015, a new opportunity presented itself when researchers first detected gravitational waves with the Laser Interferometer Gravitational-Wave Observatory (LIGO). Since then, scientists have recorded these elusive waves and tried to learn as much as possible about them. They have discovered dozens of additional gravitational wave signals, and advances in computing are helping them keep pace with the growing mountains of data.


Huerta himself first searched for gravitational waves by comparing the data collected by the detectors against a catalog of potential waveforms. He wanted a better way than this arduous method. Earlier this year, Huerta, who now works as a computational scientist at Argonne National Laboratory near Chicago, presented an artificial intelligence system that processes an entire month of LIGO data in just seven minutes.
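The brute-force approach Huerta started from is, at its core, template matching: sliding a bank of candidate waveforms across the detector output and flagging segments where the correlation is high. The following is a minimal Python sketch of that idea only; the toy waveforms, segment sizes, and scoring are invented for illustration and are not LIGO's actual pipeline, which works in the frequency domain with whitened data.

```python
import numpy as np

def matched_filter_score(strain, template):
    """Cross-correlate a strain segment with one candidate waveform.

    Returns the peak normalized correlation; higher means a better match.
    Illustrative only -- real searches whiten the data against the
    detector noise spectrum before correlating.
    """
    template = (template - template.mean()) / (template.std() + 1e-12)
    strain = (strain - strain.mean()) / (strain.std() + 1e-12)
    corr = np.correlate(strain, template, mode="valid") / len(template)
    return corr.max()

# Hypothetical template bank: damped sinusoids standing in for merger waveforms.
t = np.linspace(0, 1, 4096)
bank = [np.sin(2 * np.pi * f * t) * np.exp(-4 * t) for f in (30, 60, 120)]

segment = np.random.randn(8192)              # stand-in for one strain segment
segment[2000:2000 + 4096] += 0.5 * bank[1]   # inject a weak toy signal

scores = [matched_filter_score(segment, tmpl) for tmpl in bank]
best = int(np.argmax(scores))
print(f"best-matching template: {best}, score: {scores[best]:.3f}")
```

Scanning thousands of such templates over months of data is what makes the classic approach so slow; a trained neural network replaces that scan with a single fast forward pass.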

His algorithms, which run on specialized processors called GPUs, combine advances in artificial intelligence and distributed computing. Using separate computers or networks that act as a single system, Huerta can identify dense, massive objects such as black holes, which produce gravitational waves when they merge. Huerta’s collection of AI models is open source, so anyone can use it. “Not everyone has access to a supercomputer,” says the researcher. “This will lower the barriers for researchers to use AI.”
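The distributed part of such a setup is conceptually simple: the long strain series is chopped into windows, and each worker (or GPU) scores its share independently. Here is a rough sketch of that pattern, assuming a hypothetical `classify_segment` stand-in rather than the actual open-source models.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def classify_segment(segment: np.ndarray) -> float:
    """Hypothetical stand-in for a trained detection model.

    A real deployment would load the published network onto a GPU and
    return its confidence that the segment contains a merger signal.
    """
    return float(np.abs(segment).mean())  # placeholder score

def scan(strain: np.ndarray, window: int = 4096, workers: int = 4) -> list:
    """Split a long strain series into windows and score them in parallel."""
    segments = [strain[i:i + window]
                for i in range(0, len(strain) - window + 1, window)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(classify_segment, segments))

if __name__ == "__main__":
    month_of_data = np.random.randn(4096 * 100)  # toy stand-in for a month of strain
    scores = scan(month_of_data)
    print(f"scored {len(scores)} segments; max score {max(scores):.3f}")
```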

Despite all the advances in astronomy, cloud computing has made only slow progress in this area of research. The Vera C. Rubin Observatory, currently under construction in Chile, will be the first astronomical facility of its size with a cloud-based data facility. When the observatory begins operations in 2024, the data captured by its telescope will feed into the Legacy Survey of Space and Time (LSST) project.

As part of the LSST, a catalog is being created that will be a thousand times larger than all previous surveys of the night sky. Until now, survey data was almost always downloaded and stored locally, making it difficult for astronomers to access their colleagues’ work. “We’re creating a map of the entire sky,” says Hsin-Fang Chiang of Rubin’s data management team. In doing so, scientists are building “a huge data set that will be useful for many types of science in astronomy.” Chiang takes pride in the fact that this work will improve collaboration between researchers.

Over the course of the ten-year project, 500 petabytes of data and images will be moved into the cloud to help astronomers answer questions about the structure and evolution of the universe. “For each position in the sky, we will have over 800 images,” Chiang explains. “You can even see what happened in the past. It’s very interesting, especially with supernovas or things that change a lot.”

The Rubin Observatory will process and store 20 terabytes of data every night as it maps the Milky Way and other regions. Astronomers involved in the project can retrieve and analyze the data through a web browser. Ultimately, the images the telescope takes each night will be merged into an online database of stars, galaxies, and other celestial objects.
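In practice, “retrieve and analyze data through a web browser” usually means issuing catalog queries against a web service. As a hedged sketch of what such a query could look like, the snippet below uses the generic Virtual Observatory tooling in `pyvo`; the service URL, table name, and column names are placeholders, not Rubin’s published endpoints.

```python
from pyvo.dal import TAPService

# Placeholder endpoint -- the real Rubin/LSST service URL is published by the project.
service = TAPService("https://example.org/tap")

# ADQL query: objects brighter than magnitude 22 within 0.1 degrees of a sky position.
query = """
SELECT ra, dec, mag_g
FROM hypothetical_object_catalog
WHERE CONTAINS(POINT('ICRS', ra, dec),
               CIRCLE('ICRS', 150.0, 2.0, 0.1)) = 1
  AND mag_g < 22
"""

results = service.search(query)
print(results.to_table()[:5])  # first five matching catalog rows
```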

Advances in computer technology could also help astronomers turn the cosmic clock back, so to speak. Earlier this year, Japanese astronomers used ATERUI II, a supercomputer specialized in astronomical simulations, to reconstruct what the universe might have looked like shortly after the Big Bang.

ATERUI II helps researchers study cosmic inflation, the theory that the early universe expanded exponentially within a tiny fraction of a second. Astronomers agree that this expansion would have left fluctuations in the density of matter that affected both the distribution of galaxies and their evolution.

By comparing 4,000 simulations of the early universe, each with varying degrees of density, with reality, the scientists were able to go back in time, so to speak, and ask why some places in the universe are full of cosmic activity while others are practically sterile.

Masato Shirasaki of the National Astronomical Observatory of Japan is convinced that this question would be difficult to answer without the simulations. The project required a huge data set of ten terabytes, roughly the equivalent of 22,000 episodes of the fantasy series Game of Thrones.

Shirasaki’s team built a model of how the universe presumably evolved and applied it to each simulation to see which result comes closest to the way the universe looks today. This method made it easier to study the physics of cosmic inflation.
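In spirit, that comparison is a goodness-of-fit search: evolve each candidate initial-density field forward with the model, score how closely its present-day state matches the observed one, and keep the best candidates. Here is a toy sketch of that selection step, with randomly generated stand-ins for the 4,000 simulations and a deliberately simplistic forward model; none of this is the team’s actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

def evolve(initial_density: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the forward model of structure growth."""
    return initial_density * 1.8 + 0.05 * rng.standard_normal(initial_density.shape)

# Toy "observed" density field and 4,000 simulated initial conditions.
observed = rng.standard_normal(256)
simulations = [rng.standard_normal(256) for _ in range(4000)]

# Chi-squared-style distance between each evolved simulation and the observation.
scores = [np.sum((evolve(sim) - observed) ** 2) for sim in simulations]
best = int(np.argmin(scores))
print(f"closest simulation: #{best}, distance {scores[best]:.1f}")
```

The best-matching initial conditions then serve as a plausible “rewound” picture of the density fluctuations left behind by inflation.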

In the coming years, Shirasaki’s methods could help shorten the observation time required for future projects such as SPHEREx. SPHEREx is a two-year mission, scheduled for 2024, in which a spacecraft orbiting Earth will observe nearly 300 million galaxies across the sky. With these advances in computing, our understanding of the universe is gradually expanding.


