Experimental physics runs on data, so the powerful interplay between computing and physics comes as no surprise. Analyzing that data, however, poses a serious challenge, and that is precisely where AI lends scientists a hand. Particle colliders, neutrino telescopes, nuclear safety: how does this collaboration work, and which technologies make it possible? A brief summary of the article A Novel Type of Neural Network Comes to the Aid of Big Physics by Steve Nadis for Wired (with our extended research).
What makes data in experimental physics peculiar is its vast volume and the fact that the overwhelming majority of it can be neglected. Neutrino detectors and particle colliders, for example, capture enormous amounts of data, yet only a tiny fraction of it actually matters. Sifting through the whole picture to separate the important from the unimportant demands a lot of resources. But this is starting to change with the help of AI.
The sparse convolutional neural network (SCNN) is a machine-learning tool that separates meaningful data from blank noise and screens the latter out. Work on the project began in 2012 and initially focused on recognizing handwritten Chinese characters. The algorithm works roughly as follows:
Images of Chinese characters are mostly empty, which is natural whenever a significant piece of information sits inside a block of data large enough to contain it and keep it apart from other pieces. To cut down the amount of data to analyze, the algorithm's creator, Benjamin Graham, decided to place the convolution kernel only on 3-by-3 sections of the image that contain at least one pixel with a nonzero value (i.e., that are not entirely blank). The system proved successful, with an error rate of only 2.61%.
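The saving can be illustrated with a minimal sketch (our own illustration, not Graham's actual implementation): on a mostly-empty image, we enumerate only the 3-by-3 patches that contain at least one nonzero pixel, and skip the rest.

```python
import numpy as np

def active_patches(image, k=3):
    """Return top-left coordinates of the k-by-k patches that contain
    at least one nonzero pixel -- only these would need convolving."""
    h, w = image.shape
    coords = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            if np.any(image[i:i + k, j:j + k]):
                coords.append((i, j))
    return coords

# A mostly-empty 8x8 image with a single short stroke of "ink"
img = np.zeros((8, 8))
img[3, 2:5] = 1.0

patches = active_patches(img)
total = (8 - 3 + 1) ** 2          # 36 possible patch positions
print(len(patches), "of", total, "patches need computing")  # 15 of 36
```

On a realistic handwriting image, where the ink covers only a few percent of the canvas, the fraction of patches that survive this filter is far smaller still.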
His work continued at Facebook AI Research (now Meta AI Research) and led to the first SCNN, which centered the kernel only on pixels with a nonzero value (rather than on any 3-by-3 section containing at least one nonzero pixel). That was important for physicists at the Fermi National Accelerator Laboratory who probe the nature of neutrinos, the most abundant particles with mass in the universe, yet still notoriously hard to detect. In 2019 the first SCNN was applied to analyze the data expected from the Deep Underground Neutrino Experiment, or DUNE, which will be the world's largest neutrino physics experiment when it comes online in 2026. The SCNNs analyzed the simulated data faster than ordinary methods, and required significantly less computational power in doing so. The promising results mean that SCNNs will likely be used during the actual experimental run.
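The refinement described above can be sketched as follows. This is a toy, dense-array illustration of the idea (production SCNN libraries store only the active sites, which is where the real speedup comes from): the kernel is centered only on nonzero pixels, so the set of active sites does not spread out layer after layer.

```python
import numpy as np

def submanifold_conv(image, kernel):
    """Toy sketch: center the kernel only on active (nonzero) pixels,
    so the output has the same sparsity pattern as the input."""
    k = kernel.shape[0]
    r = k // 2
    out = np.zeros_like(image)
    active = np.argwhere(image != 0)        # visit active sites only
    padded = np.pad(image, r)
    for i, j in active:
        patch = padded[i:i + k, j:j + k]    # window centered on (i, j)
        out[i, j] = np.sum(patch * kernel)
    return out

img = np.zeros((8, 8))
img[3, 2:5] = 1.0                           # three active pixels
kernel = np.ones((3, 3)) / 9.0
out = submanifold_conv(img, kernel)
print(np.count_nonzero(out))                # still only three active sites
```

With the earlier patch-based rule, 15 output positions would be computed for this image; centering on active pixels cuts that to 3, and the gap widens as the data gets sparser.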
The goal of the IceCube Neutrino Observatory at the South Pole is to intercept the universe's most energetic neutrinos and trace them back to their sources, most of which lie outside our galaxy. The observatory has 5,160 optical sensors, and only a tiny fraction of them light up at any given, unpredictable moment. Standard CNNs cannot cope with this amount of data, but the sparse-data problem can potentially be solved with an SCNN, which in tests proved two to three times faster and more efficient. The scientists are ready to run the tests and, if they succeed, to implement the algorithm in the observatory's workflow. They hope the technology can prove useful for neutrino telescopes in general.
Another crucial AI technology is the predictive-analytics algorithm. It is widely and successfully used not only in the nuclear industry but in any field that involves large amounts of data and the risks associated with them. In fact, the atomic industry has a lot to do with AI. Unlike the SCNN, this tool focuses on repetitive processes and requires the existence of a trend.
The key concept behind anomaly-prediction technology is the time series: an ordered sequence of points, a feature measured at constant time intervals, characterizing the process from one angle or another. The time series is not unique to engineering; the concept is used for prediction in economics, retail, and cybersecurity, which only confirms its effectiveness and ease of scaling. The time series does, however, assume the existence of a single trend: the model cannot track and organize completely random dynamics. Based on data organized this way, it becomes possible to aggregate features that may indicate a potential anomaly, that is, the extremes of the resulting array.
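A minimal sketch of this idea, under our own simplifying assumptions (synthetic sensor data, a plain rolling z-score rather than any specific industrial product): a reading is flagged as a potential anomaly if it deviates from the recent trend by more than a few standard deviations.

```python
import numpy as np

def flag_anomalies(series, window=20, threshold=4.0):
    """Flag points that break the trend: a reading is anomalous if it
    lies more than `threshold` standard deviations away from the
    rolling mean of the previous `window` readings."""
    flags = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            flags.append(t)
    return flags

# Synthetic sensor: a slow steady trend with noise and one injected spike
rng = np.random.default_rng(0)
readings = 100 + 0.05 * np.arange(200) + rng.normal(0, 0.5, 200)
readings[150] += 10.0                      # the hidden defect
print(flag_anomalies(readings))
```

Note the built-in assumption the article mentions: the rolling mean only makes sense because the underlying process has a trend; on purely random dynamics the same test would fire arbitrarily.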
Recording indicators and building time series gives engineers and developers a reliable database, which greatly simplifies detecting problems and taking timely measures to eliminate them.
This approach helps keep the whole process safe and is used, for example, in digital diagnostic models for electrolyzers in production. Such algorithms are designed for early detection of deviations in the operation of electrolysis plants. The solution automatically searches for hidden defects that arise during operation, visualizes information about the anomaly, and notifies the operator so a decision can be made. In addition, the model can produce an analytical report on the problem and give the operator complete data for any period.
A scientist's creativity and ability to explain the world are irreplaceable. Without AI, however, ideas would take longer to emerge. Whether saving time or saving the world from nuclear danger, the full effectiveness of these AI tools is still being worked out, but it is evident even now. We, as observers, look forward to seeing the role AI will play in the most important breakthroughs of the 21st century.