High-performance computing is useful for processing the data and for certain heavy numerical calculations (like lattice QCD, which computes the interactions between quarks numerically on a discretised space-time grid).
However, it will be of little use for understanding why particle physics is the way it is. That is not a heavy calculation at all; it would just require a very bright idea.
We observe patterns among particles that we can’t explain. For example, particles come in three copies of increasing mass. The electron (and its little brother, the neutrino) has “cousins” which are identical particles, only heavier: the muon and the tau (along with their corresponding neutrinos).
The picture of particle physics that we have was established in the early 70’s and came to be known as the “Standard Model”. Since then, many attempts have been made to go further and explain those patterns. As far as I know, all those attempts were based on symmetries: we would postulate a new symmetry that explained some features, but that symmetry would imply interactions and particles that we do not observe, which, in turn, would require yet another explanation.
It’s as if an archeologist were trying to explain the shape of a pile of bricks that had been unearthed. A possible explanation is that this was part of a house but, then, where is the rest of it? In that case the answer is easy: we could say that the house collapsed and someone stole the bricks. In particle physics, though, it is not so simple, and the attempts have required increasingly convoluted theories.
The first attempt was based on the group SU(5), which extended the symmetry of the Standard Model. This extended symmetry implied the existence of new particles that would allow the proton to decay, but such decays were never observed. This is when things started to go wrong. The establishment of the Standard Model had been the culmination of decades of successes but, since then, none of the predictions of these extended theories has been verified. Of course, there have been many experimental successes since then, like the discovery of heavier quarks and leptons, as well as the W and Z gauge bosons (the particles mediating the weak interaction) and, more recently, the Higgs boson, but all of these were predictions of the Standard Model itself.
Since the 70’s, many theories have been proposed in the hope that their new particles would be observed at the LHC. The LHC came and found nothing unexpected: it confirmed the Standard Model by discovering the Higgs boson, but it excluded some of the theoretical attempts that had been made to go further. So now, without new experimental input, it will be difficult to make progress, unless someone has a very bright idea.
However, many puzzles are staring us in the face without the need for a larger particle accelerator. Some measurements do not give exactly the result predicted by the theory. Neutrinos can change flavour: we can describe it, but there is no explanation for their mixing. We also do not know why there is so little anti-matter, despite the fact that, in the lab, matter and anti-matter behave in almost exactly the same way.
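To give a sense of what “describe but not explain” means here: in the simplified two-flavour picture, the oscillation probability is completely captured by a mixing angle θ and a mass-squared difference Δm² (the full three-flavour description uses the PMNS matrix), but nothing in the Standard Model tells us why those parameters take the values they do.

```latex
% Standard two-flavour oscillation probability, as a function of the
% baseline L and neutrino energy E (textbook form, quoted for illustration):
P(\nu_\alpha \to \nu_\beta)
  = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\,L}{4E}\right)
  \approx \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)
```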
So there has been no overwhelming discovery to show us the way, but there are many puzzles whose solutions could bring us forward.
Comments
Susan commented on:
Not quite sure what you mean by “the majority of work in particle physics”. In order to test theories, you need to build actual experiments (not computational models, which can only give output based on what you put in, and therefore can’t test anything unless there are real data to compare with). Analysing experimental data typically requires a lot of computing power, but it’s high throughput computing rather than high performance computing (you need to do quite simple things, but you need to do them to an awful lot of individual events). As Philippe says, high-performance computing is useful for applying the existing theoretical framework to complex situations (for example, computing a very large number of individual Feynman diagrams to produce predictions for many-body interactions), but it is not going to produce anything really novel.
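To illustrate the “simple things, done to an awful lot of events” point, here is a minimal Python sketch of a high-throughput selection loop (the event contents and cut values are purely illustrative, not any experiment’s real analysis code):

```python
# Minimal sketch of a high-throughput analysis loop: a simple selection
# applied independently to a huge number of recorded collision events.
def select_and_histogram(events, n_bins=50, mass_range=(0.0, 200.0)):
    """Apply a simple per-event cut and histogram a reconstructed mass."""
    lo, hi = mass_range
    width = (hi - lo) / n_bins
    histogram = [0] * n_bins
    for event in events:                     # each event is independent
        if event["n_muons"] < 2:             # simple selection cut (illustrative)
            continue
        mass = event["dimuon_mass"]          # quantity reconstructed upstream
        if lo <= mass < hi:
            histogram[int((mass - lo) / width)] += 1
    return histogram

# The events below are made-up placeholders; in practice the loop runs over
# billions of collisions, split across many independent grid jobs.
events = [{"n_muons": 2, "dimuon_mass": 91.2},
          {"n_muons": 1, "dimuon_mass": 45.0}]
print(select_and_histogram(events))
```

Because each event is processed independently, the work parallelises trivially across thousands of jobs: lots of throughput, but no need for the tightly coupled machines that “high performance” computing implies.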
The problem is not so much “making sense of the patterns”: it is making sense of the patterns in a way which produces testable new insights. As Philippe says, the original “Grand Unified Theory” (a theory aiming to produce a genuinely unified picture of the strong and electroweak interactions, as opposed to the Standard Model, which just bolts them together), SU(5), did make such a testable prediction: it predicted that the proton would decay into a positron and a neutral pion on a timescale of 10^30 years. Since this is a probabilistic phenomenon and not deterministic – protons decay with an exponential probability such that on average the lifetime is 10^30 years; they don’t all live for exactly 10^30 years and then vanish in a puff of positrons – it is actually testable. Protons do not decay on this timescale (the current limit for that decay mode is several hundred thousand times longer than 10^30 years), so SU(5) bit the dust. Most subsequent attempts have either likewise fallen foul of inconvenient facts, or have failed to produce testable predictions (superstrings, anyone?). High-performance computing may in some cases be a useful tool for working out what the predictions of a given new theory are, but it isn’t going to help us come up with one.
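A rough back-of-the-envelope sketch of why a lifetime far longer than the age of the universe is still testable (the detector numbers below are illustrative, not taken from the comment above):

```latex
% Exponential decay: surviving fraction after time t, and the expected
% number of decays when the observation time is much shorter than \tau:
N(t) = N_0\, e^{-t/\tau}, \qquad
N_{\mathrm{decays}} \approx N_0\,\frac{t}{\tau} \quad (t \ll \tau)
```

A kiloton of water contains roughly 3 × 10^32 protons, so a lifetime of 10^30 years would give a few hundred decays per year in such a detector; seeing none is what pushes the limit far beyond 10^30 years.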
tulloche commented on:
I guess, to understand the job market in particle physics a bit more: is the funding for research into new theories a steady, dependable stream, or is it a battle for the money against research into something that may be more successful?
Andy commented on:
Hope you don’t mind me entering the conversation! There are many different types of job in particle physics, from various sorts of theory, to various sorts of data analysis, to detector and computing development and operations. We need people in all these areas, and all battle for funding: that’s just how it is!
I would not base any plans for the particle physics job market on where the money currently appears to be going, in the expectation of working on that forever: instead, many researchers repurpose themselves and multitask between areas as interests and the job market change. Personally, I’d guess that the search for new particles will always be the most prominent academic driver, but around 30-50% of experimental analysis is focused more on measurements of things we know exist but don’t fully understand. And many physics and technical roles are about developing techniques and technologies which are agnostic to the exact kind of physics being studied.