Exascale Day 2024: How Supercomputing at a Billion Billion Calculations Per Second Is Changing the World
The Exascale Era began in spring 2022 when the Top500 list certified that the HPE-AMD Frontier supercomputer at Oak Ridge National Laboratory had broken through the exascale computing barrier. And with the Intel-HPE Aurora system at Argonne National Laboratory confirmed by Top500 last spring, the U.S. now has two exascale-class systems – with a third, the HPE-AMD El Capitan system at Lawrence Livermore National Laboratory, on the way.
Generally speaking, people quickly get used to technology breakthroughs. Giving voice commands to your phone for driving directions was amazing – for a week or two. But exascale is different. More than two years in, exascale computing still feels fresh and astounding. Tell someone we have high-performance computers capable of a billion billion (a quintillion, or 10 to the 18th power) calculations per second, and they pause in wonder. Exascale Day, October 18 (10/18, a nod to 10^18), continues as a day to be recognized and celebrated.
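To make that number concrete, here is a quick back-of-the-envelope calculation (the world-population figure is an assumption, not from the article):

```python
# Back-of-the-envelope scale check for "a billion billion" (10^18)
# calculations per second.
exa_ops_per_second = 10**18
world_population = 8 * 10**9        # assumed ~8 billion people
seconds_per_year = 365 * 24 * 3600

# If every person on Earth did one calculation per second, how long
# would it take to match one second of an exascale machine?
years = exa_ops_per_second / world_population / seconds_per_year
print(f"{years:.1f} years")  # roughly 4 years
```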
How exascale systems have been stood up has been recounted in detail, as has the dramatic moment when Frontier achieved exascale status. Now the focus has shifted to the work research organizations are doing with exascale, how it’s actually changing the world with more science, more discovery and more innovation that improve how we live and work.
And with the arrival of generative AI nearly two years ago, the pairing of exascale systems with large language models will only build greater HPC-AI momentum.
A major achievement of exascale is its advancement of scientific simulation. Exascale systems ingest and analyze vast volumes of data and generate simulations that are more detailed, more realistic and more encompassing in scale than anything that preceded them. Exascale systems simulate the processes involved in precision medicine, regional climate modeling, additive manufacturing, the conversion of plants to biofuels, the relationship between energy and water use, the unseen physics in materials discovery, the fundamental forces of the universe, and other fields of science and industry.
As we salute Exascale Day, let’s look at research work being done by scientists using the most advanced supercomputing technology.
AI-Driven Nuclear Energy Compliance: As if nuclear energy weren't complex enough, complying with governing regulations adds another major challenge. But tech startup Atomic Canyon is simplifying regulatory processes and enhancing document analysis with the Frontier exascale system.
When Atomic Canyon co-founder Trey Lauderdale took up residence near California’s Diablo Canyon nuclear power plant, he saw the potential of nuclear energy. “I realized that nuclear power is the best path for clean energy,” he said. “A uranium pellet the size of the tip of my pinkie can generate the equivalent power of a ton of coal without releasing the carbon gases that cause global warming.”
But American nuclear power plants contend with a complex compliance landscape involving extensive documentation. Atomic Canyon helps navigate the regulatory environment, starting with its AI search platform Neutron, trained on the Nuclear Regulatory Commission’s public database comprising millions of documents.
Locating specific documents can take hours because of their complexity, specificity, validation requirements and sheer volume. By training advanced sentence embedding models on Frontier, Atomic Canyon can handle the intricacies and scale of this specialized language.
Atomic Canyon’s goal is to use AI and Frontier to optimize the entire nuclear supply chain, from uranium mining to next-generation reactor construction. The company is now developing a new version of Neutron by leveraging Frontier to train AI models and enable semantic search over its data.
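Neutron's internals aren't public, but the general idea of embedding-based semantic search can be sketched in a few lines. This toy example assumes the open-source sentence-transformers library with a placeholder model; the documents and query are invented:

```python
# A minimal sketch of semantic search with sentence embeddings.
# The model name is a placeholder; Atomic Canyon's actual models
# and pipeline are not public.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Reactor coolant system pressure boundary inspection report",
    "License amendment request for spent fuel storage expansion",
    "Emergency diesel generator surveillance test procedure",
]

# Encode once; normalized vectors make dot product = cosine similarity.
doc_vecs = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    """Rank documents by cosine similarity to the query embedding."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    best = np.argsort(-scores)[:top_k]
    return [(documents[i], float(scores[i])) for i in best]

print(search("rules for storing used nuclear fuel"))
```

Unlike keyword search, the query above can surface the spent-fuel document even though it shares almost no literal terms with it, which is the point of training domain-specific embeddings.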
Bottom line: Exascale and AI can bring more clean power online faster and at a lower cost.
Clean Wave Energy: Australia-based startup Carnegie Clean Energy is using advanced HPE Cray supercomputing to develop a new form of energy based on the rhythmic action of ocean waves.
About 20 percent of the world’s energy comes from renewable sources, but Carnegie believes wave energy can be a major contributor because ocean waves are more predictable and consistent than solar and wind resources.
The company has developed CETO, named after a Greek sea goddess. CETO is a fully submerged buoy that sits a few meters below the surface. It moves with the orbital motion of waves and drives a power take-off (PTO) system that converts this motion into grid-ready electricity.
Carnegie uses an HPE supercomputer at the Pawsey Supercomputing Centre in Australia along with AI to control the CETO device. The AI system has been trained to understand the complexities of waves, and it tunes CETO to maximize energy capture while avoiding the damaging forces of extreme waves by diving deeper below the surface.
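Carnegie's controller is proprietary, but the control idea described above can be caricatured in a few lines: tune the power take-off in normal seas, submerge deeper in extreme seas. Every threshold, depth and damping value below is an illustrative assumption, not CETO's actual logic:

```python
# A toy sketch of the CETO control concept, NOT Carnegie's controller.
# All values are invented for illustration.
from dataclasses import dataclass

@dataclass
class SeaState:
    significant_wave_height_m: float  # forecast or measured
    peak_period_s: float

def control_step(sea: SeaState) -> dict:
    STORM_HEIGHT_M = 6.0  # hypothetical survival threshold
    if sea.significant_wave_height_m > STORM_HEIGHT_M:
        # Survival mode: dive deeper to shed the most damaging loads.
        return {"target_depth_m": 30.0, "pto_damping": 0.1}
    # Operating mode: match PTO damping to the dominant wave period,
    # a crude stand-in for the trained AI tuner described above.
    damping = min(1.0, sea.peak_period_s / 12.0)
    return {"target_depth_m": 3.0, "pto_damping": round(damping, 2)}

print(control_step(SeaState(2.5, 9.0)))   # normal seas
print(control_step(SeaState(7.5, 14.0)))  # extreme seas: dive deep
```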
Carnegie’s R&D work builds on more than 10 years of CETO development, including tank testing, rapid small-scale prototyping, and large commercial-scale prototypes and arrays.
Antibiotic Discovery: Researchers at McMaster University in Ontario and Stanford University are using supercomputing and generative AI to revolutionize antibiotic discovery.
Infections from antibiotic-resistant bacteria pose a grave global threat, with estimates projecting up to 10 million annual deaths from antimicrobial resistance by 2050. The research team’s work utilizes an AI model called SyntheMol that learns to design new antibiotic molecules from defined molecular building blocks.
SyntheMol can generate synthesizable molecules with an efficiency of around 80 percent. With wet-lab success rates of more than 10 percent, roughly 10x higher than standard laboratory screening, the approach suggests a possible paradigm shift in drug discovery.
The key is the ability of certain machine learning methods to directly design molecules across a vast range of chemical structures, accelerating the validation of drug candidates. Using McMaster’s HPE Cray supercomputer, the team’s model designs antibiotics that can be synthesized in one step. SyntheMol’s in vitro hit rate marks a breakthrough in using generative AI for the rapid design of easily synthesized molecules, potentially cutting typical drug-development costs and timelines of roughly $2 billion and 10 years in half.
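SyntheMol's actual search and scoring are more sophisticated than anything that fits here, but the core building-block idea, assembling candidates from fragments in one step and keeping those a trained predictor scores highly, can be sketched. The fragments and the scoring function below are invented stand-ins:

```python
# A toy sketch of building-block molecular design, not SyntheMol itself.
# Fragments and the activity "predictor" are invented stand-ins.
import itertools

BUILDING_BLOCKS = ["frag_A", "frag_B", "frag_C", "frag_D"]  # hypothetical

def predicted_activity(molecule: tuple) -> float:
    """Stand-in for a trained antibacterial-activity predictor."""
    # Fake but reproducible score derived from the fragment names.
    return (sum(ord(ch) for ch in "".join(molecule)) % 97) / 97.0

def design(top_k: int = 3):
    # One-step pairing mirrors the article's point that candidates
    # can be synthesized in a single reaction step.
    candidates = list(itertools.combinations(BUILDING_BLOCKS, 2))
    ranked = sorted(candidates, key=predicted_activity, reverse=True)
    return [(mol, round(predicted_activity(mol), 3)) for mol in ranked[:top_k]]

print(design())
```

Restricting generation to combinations of purchasable building blocks is what keeps the synthesizability rate high: every candidate the model proposes already has a one-step recipe.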
Precision Agriculture: Norway-based DigiFarm is a startup developing deep neural network models and agtech solutions for accurately detecting field boundaries and seeded acres for precision farming and more resilient agriculture.
DigiFarm utilizes high-resolution satellite data and AI powered by the HPE Cray pre-exascale LUMI supercomputer, located at the CSC – IT Center for Science in Kajaani, Finland.
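DigiFarm's models aren't public, but field-boundary detection is broadly a pixel-wise segmentation problem over satellite tiles. A minimal sketch in PyTorch, with a tiny invented network standing in for the real architecture:

```python
# A minimal sketch of field-boundary detection as semantic segmentation.
# The network is a toy stand-in; DigiFarm's architecture is not public.
import torch
import torch.nn as nn

class TinyFieldSegmenter(nn.Module):
    def __init__(self, in_bands: int = 4):  # e.g., RGB + near-infrared
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 2, kernel_size=1),  # classes: field, boundary
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # per-pixel class logits

tile = torch.rand(1, 4, 64, 64)   # one 64x64 multispectral tile
logits = TinyFieldSegmenter()(tile)
mask = logits.argmax(dim=1)       # 0 = field interior, 1 = boundary
print(mask.shape)                 # torch.Size([1, 64, 64])
```

Training such a model across country-scale imagery is where a machine like LUMI earns its keep; inference on a single tile is cheap.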
The company reports impressive results: improved crop yield forecasting, seed costs reduced by 5 to 10 percent, and yields increased by up to 10 percent. In turn, this reduces government farm subsidy monitoring costs by 25 percent, optimizes resource allocation, and promotes sustainable farming practices while ensuring compliance with deforestation regulations.
“The immediate effect of working on the LUMI supercomputer wasn’t only the ability to develop better-performing models, but also to significantly shorten the time frame from training and R&D to product iteration and commercialization,” said Nils Helset, co-founder and CEO of DigiFarm and a 15th-generation Norwegian farmer. “These two factors have been really, really significant for us.”
More information on the groundbreaking work of companies and researchers using HPE Cray supercomputers can be found on the HPE Exascale Day site on Friday, October 18.