Studying the Big Bang with artificial intelligence

A quark-gluon plasma after the collision of two heavy nuclei. Credit: TU Wien

It could hardly be more complicated: tiny particles whir around wildly with extremely high energy, countless interactions occur in the tangled mess of quantum particles, and this results in a state of matter known as “quark-gluon plasma”. Immediately after the Big Bang, the entire universe was in this state; today it is produced by high-energy collisions of atomic nuclei, for example at CERN.

Such processes can only be studied using high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Using artificial intelligence, or machine learning, for this purpose therefore seems like an obvious idea. Ordinary machine-learning algorithms, however, are not suited to this task: the mathematical properties of particle physics require a very special structure of neural network. Researchers at TU Wien (Vienna) have now shown how neural networks can be used successfully for these challenging tasks in particle physics.

Neural networks

“Simulating a quark-gluon plasma as realistically as possible requires an extremely large amount of computing time,” says Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien. “Even the largest supercomputers in the world are overwhelmed by this.” It would therefore be desirable not to calculate every detail precisely, but to recognize and predict certain properties of the plasma with the help of artificial intelligence.

Neural networks are therefore used, similar to those employed in image recognition: artificial “neurons” are linked together on the computer, much as neurons are in the brain—and this creates a network that can recognize, for example, whether or not a cat is visible in a certain picture.

When applying this technique to the quark-gluon plasma, however, there is a serious problem: the quantum fields used to mathematically describe the particles and the forces between them can be represented in different ways. “These are referred to as gauge symmetries,” says Ipp. “The basic principle behind this is something we are familiar with: if I calibrate a measuring device differently, for example if I use the Kelvin scale instead of the Celsius scale for my thermometer, I get completely different numbers, even though I am describing the same physical state. It’s similar with quantum theories—except that there the permitted changes are mathematically much more complicated.” Mathematical objects that look completely different at first glance may in fact describe the same physical state.
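
To see what this means in practice, here is a minimal numerical sketch (my own toy example, not code from the study; the lattice size, the choice of SU(2) link variables, and names such as random_su2 and plaquette_trace are assumptions made purely for illustration). A random gauge transformation changes every link matrix on a tiny periodic lattice, yet a gauge-invariant observable, the trace of a plaquette, keeps exactly the same value.

```python
# Toy sketch: SU(2) link variables on a 2x2 periodic lattice (illustration only).
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random SU(2) matrix built from a normalized quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[ a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1],  a[0] - 1j * a[3]]])

L = 2  # lattice extent in each direction (periodic boundaries)
# link[mu][x][y]: SU(2) matrix on the link leaving site (x, y) in direction mu
link = [[[random_su2() for _ in range(L)] for _ in range(L)] for _ in range(2)]

def plaquette_trace(x, y):
    """Trace of the elementary plaquette at (x, y): a gauge-invariant quantity."""
    xp, yp = (x + 1) % L, (y + 1) % L
    U = (link[0][x][y] @ link[1][xp][y]
         @ link[0][x][yp].conj().T @ link[1][x][y].conj().T)
    return np.trace(U)

before = plaquette_trace(0, 0)

# Random gauge transformation: U_mu(n) -> Omega(n) U_mu(n) Omega(n + mu)^dagger
omega = [[random_su2() for _ in range(L)] for _ in range(L)]
for x in range(L):
    for y in range(L):
        xp, yp = (x + 1) % L, (y + 1) % L
        link[0][x][y] = omega[x][y] @ link[0][x][y] @ omega[xp][y].conj().T
        link[1][x][y] = omega[x][y] @ link[1][x][y] @ omega[x][yp].conj().T

after = plaquette_trace(0, 0)
print(np.isclose(before, after))  # True: every link changed, the physics did not
```

Every individual link matrix is different after the transformation, yet the observable is identical; that is the sense in which completely different-looking mathematical objects describe the same physical state.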

Gauge symmetries built into the structure of the network

“If you don’t take these gauge symmetries into account, you can’t meaningfully interpret the results of the computer simulations,” says Dr. David I. Müller. “Teaching a neural network to figure out these gauge symmetries on its own would be extremely difficult. It is much better to start out by designing the structure of the neural network in such a way that the gauge symmetry is automatically taken into account—so that different representations of the same physical state also produce the same signals in the neural network,” says Müller. “That is exactly what we have now succeeded in doing: We have developed completely new network layers that automatically take gauge invariance into account.” In some test applications, it was shown that these networks can actually learn much better how to deal with the simulation data of the quark-gluon plasma.
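
The lattice gauge equivariant layers developed in the paper are considerably more general, but the design principle can be sketched in a few lines (again a toy example under my own assumptions; the function layer and the weights w are made-up stand-ins, not the published architecture). If a layer only ever combines gauge-covariant objects, which transform as W → Ω W Ω†, and takes traces at the end, then its output is automatically identical in every gauge, whatever weights it has learned.

```python
# Toy "gauge-invariant layer" (illustration only, not the L-CNN code).
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    """Random SU(2) matrix (same helper as in the sketch above)."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[ a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1],  a[0] - 1j * a[3]]])

# Two gauge-covariant objects at the same lattice site (think: two Wilson loops)
W1, W2 = random_su2(), random_su2()
w = rng.normal(size=3)  # toy "learned" weights of the layer

def layer(A, B):
    """Combine covariant inputs; take traces only at the very end."""
    return (w[0] * np.trace(A).real
            + w[1] * np.trace(B).real
            + w[2] * np.trace(A @ B).real)

out = layer(W1, W2)

# A gauge transformation at this site acts as W -> Omega W Omega^dagger
Omega = random_su2()
out_transformed = layer(Omega @ W1 @ Omega.conj().T,
                        Omega @ W2 @ Omega.conj().T)

print(np.isclose(out, out_transformed))  # True: symmetry is built into the layer
```

Because the invariance comes from the structure of the layer rather than from training, such a network never has to learn the symmetry from data, which is the point the researchers make about their new network layers.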

“With such neural networks, it becomes possible to make predictions about the system—for example, to estimate what the quark-gluon plasma will look like at a later point in time without really having to calculate every single intermediate step in time in detail,” says Andreas Ipp. “And at the same time, it is ensured that the system only produces results that do not contradict gauge symmetry—in other words, results which make sense at least in principle.”

It will be some time before collisions of atomic nuclei at CERN can be fully simulated with such methods, but the new type of neural network provides a completely new and promising tool for describing physical phenomena that other computational methods may never be powerful enough to capture.

The research was published in Physical Review Letters.


More information:
Matteo Favoni et al, Lattice Gauge Equivariant Convolutional Neural Networks, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.128.032003

Provided by
Vienna University of Technology

Citation:
Studying the big bang with artificial intelligence (2022, January 25)
retrieved 26 January 2022
from https://phys.org/news/2022-01-big-artificial-intelligence.html
