Celebration of Science


The ARCHER2 Celebration of Science took place on 7th–8th March 2024 at South Hall, The University of Edinburgh.

Thanks for attending the event and we look forward to seeing you again.

Day 1: Thursday 7th March 2024

Day 2: Friday 8th March 2024

Lightning Talks

1. Ben Durham, University of York – Open Boundary Conditions and Implicit Solvation Calculations in CASTEP using DL_MG
2. Callum Watson, British Geological Survey – Enabling a better global view of the magnetic field of Earth’s rocks
3. Carlos Edgar Lopez Barrera, Queen Mary University of London – Computational Insights for Individualized Atrial Fibrillation Treatments
4. Chi Cheng (Cecilian) Hong, University of Edinburgh – Insight into the Correlated Disorder of Fumarate-Based MIL-53 Frameworks: A Computational Study of Free-Energy Landscape
5. Ivan Tolkachev, University of Oxford – Large scale atomistic simulations of nanocrystalline Iron formation and its irradiation performance
6. Joel Hirst, Sheffield Hallam University – Spin-Waves: A potential route to more efficient data transmission, storage and processing
7. Joseph Prentice, University of Oxford – Computing infra-red spectra using finite differencing in CASTEP
8. Juan Herrera, EPCC, The University of Edinburgh – MONC Performance Portability
9. Kevin Stratford, EPCC, The University of Edinburgh – MPI+X on ARCHER2: observations from Ludwig
10. Ludovica Cicci, Imperial College London – A multi-scale analysis of the impact of measurement and physiological uncertainty on electrocardiograms
11. Mara Strungaru, University of York – Implementing spin-lattice dynamics within the VAMPIRE software package
12. Marina Strocchi, Imperial College London & King’s College London – Linking Molecular to Whole-organ Function Using Multi-scale, Multi-physics Four-chamber Computational Models (Winning Poster)
13. Martin Plummer, STFC Scientific Computing Department (Daresbury Laboratory) – Multi-Layered MPI parallelisation for the R-matrix with time-dependence code
14. Matt Smith, University of York – Future-proof Parallelism for Plane-Wave Density Functional Theory
15. Matthias Frey, University of St Andrews – EPIC: The Elliptical Parcel-In-Cell method
16. Lorna Smith, EPCC, The University of Edinburgh – Public engagement in High Performance Computing
17. Paul Bartholomew, EPCC, University of Edinburgh – Adding ADIOS2 to the Xcompact3D CFD code
18. Pavel Stishenko, Cardiff University – Implementing an implicit solvent model in a periodic DFT code
19. Sean Mashallsay, Queen’s University Belfast – Unravelling Attosecond Dynamics: A General Approach To Ultrafast Atomic Simulations
20. Steven Boeing, University of Leeds – Small-scale mixing in a parcel-based model of moist convection
21. Stuart Morris, University of Warwick – Laser-plasma instabilities at Shock Ignition scales
22. Tobias Slade-Harajda, University of Warwick – The consequences of tritium mix for simulated ion cyclotron emission spectra from deuterium-tritium plasmas
23. Vinush Vigneswaran, The University of Edinburgh – Centre for Cardiovascular Science – OpenEP Workbench: A computational platform for identifying fibrotic regions and conduction disturbances in the atria using conduction velocity
24. Hannah Menke, Heriot-Watt University – Introducing GeoChemFoam to ARCHER2
25. Nick Brown, EPCC, University of Edinburgh – ExCALIBUR: An exascale software programme

Download Lightning Talks schedule as PDF

1 Open Boundary Conditions and Implicit Solvation Calculations in CASTEP using DL_MG

Ben Durham, University of York

CASTEP is a quantum mechanics simulation code specialised for solid materials and is heavily used on ARCHER2 (~50 users per month). CASTEP’s support for obtaining the ground-state electronic and atomic configurations is very good for solid materials with periodic (repeated) crystal structures. However, there is a wide range of materials science, chemical and biological applications for which it is desirable to compute the properties of isolated molecules or molecules in solvent, e.g. energy storage or drug development. In an eCSE project carried out under the previous ARCHER service (eCSE07-006), the periodic approach of CASTEP was extended to enable simulations in Open Boundary Conditions (OBCs) and in the presence of a solvent, by adopting the minimal-parameter solvent model already developed in ONETEP, another materials modelling code.

This poster presents the work achieved in ARCHER2-eCSE01-09, which extended the OBC/solvation functionality in CASTEP to address the limitations of the original implementation.

DOI: 10.5281/zenodo.10817424

2 Enabling a better global view of the magnetic field of Earth’s rocks

Callum Watson, British Geological Survey

The WMAM code calculates spherical harmonic models of the natural magnetisation of the Earth’s crustal geology. In essence, the model allows us to estimate the value of the full magnetic field vector at any location, based on scattered measurements of only the scalar magnetic field. These modelled values of the magnetic field serve many important purposes, such as geological research, navigation, and safe resource extraction.

Global spherical harmonic models of degree 1440 (∼28 km resolution) have been successfully computed on the British Geological Survey HPC facility, but such runs require the full BGS compute capacity (512 cores) for up to six days. Further, the WMAM code cannot fully exploit the resolution of the available scalar field measurements, limiting models of the crustal magnetic field to a resolution of 28 km.

However, following a successful ARCHER2 eCSE project, we refactored the WMAM code such that it can produce models of spherical harmonic degree 2000 (∼20 km resolution). We ran the code at this resolution using 64 ARCHER2 nodes (8,192 cores) for 3 hrs and 44 mins. The resulting model power spectra and magnetic field maps showed excellent agreement with the BGS1440 model and the original data, demonstrating the potential to explore the modelling process further and gain new knowledge about crustal magnetic fields.
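
As a rough sanity check on the quoted resolutions, a spherical harmonic model of degree L resolves features of wavelength roughly the Earth’s circumference divided by L:

$$ \lambda_{\min} \approx \frac{2\pi R_E}{L}: \qquad \frac{40\,075\ \mathrm{km}}{1440} \approx 28\ \mathrm{km}, \qquad \frac{40\,075\ \mathrm{km}}{2000} \approx 20\ \mathrm{km}. $$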

DOI: 10.5281/zenodo.10817524

3 Computational Insights for Individualized Atrial Fibrillation Treatments

Carlos Edgar Lopez Barrera, Queen Mary University of London

This research introduces an innovative methodology for developing personalized in silico models aimed at assessing individual responses to treatments for irregular heart rhythms. The process commences with the extraction of a point cloud from a segmented MRI scan, forming a spatial representation of the atrium. These points interconnect to generate a 3D finite element mesh that faithfully mirrors the unique anatomy of the patient. Incorporating directional properties influenced by atrial fiber orientation from a DT-MRI atlas, the model employs the openCARP solver to simulate electrical activity and electrograms, specifically exploring irregular heart rhythms such as atrial fibrillation.

This computational model stands out for its ability to predict how a patient will react to therapy, facilitating virtual trials. The demonstration includes the use of pulmonary vein isolation ablation therapy as an example. Post-simulation analysis identifies critical regions related to atrial fibrillation, offering valuable insights for personalized treatment planning.

Additionally, the research touches upon a complementary study investigating the impact of spatial resolution on clinical measurements in understanding and treating diseases, particularly irregular heart rhythms like atrial fibrillation. The spatial resolution of atrial fibrillation electrogram data emerges as a critical factor for interpreting underlying mechanisms and ensuring successful treatment. The study employs simulations to explore electrogram recordings at different spatial resolutions, emphasizing the importance of precise measurement locations in enhancing our comprehension of irregular heart rhythms for more effective clinical interventions.

DOI: 10.5281/zenodo.10845480

4 Insight into the Correlated Disorder of Fumarate-Based MIL-53 Frameworks: A Computational Study of Free-Energy Landscape

Chi Cheng (Cecilian) Hong, University of Edinburgh

Metal-organic frameworks (MOFs) are porous, periodic, and crystalline materials that are highly modular due to the many possible metal and linker combinations. Notably, certain combinations of metals and linkers have been shown to result in a breathing caloric effect, wherein the adsorption and desorption of guest molecules are accompanied by a large reversible volume change and isothermal entropy change, making these materials a potentially greener alternative to conventional refrigerants. On top of the aforementioned combinations, another consideration is the correlated disorder that can arise from the linker orientations, which was previously shown to affect the mechanism by which a framework closes.

My research focusses on unravelling the correlation between the transition metal centres of fumarate-based MIL-53 MOFs and the resulting inherent flexibility of the framework. Furthermore, a series of isoreticular MIL-53 frameworks from different synthetic pathways are being studied to understand how the linker-induced correlated disorder affects the breathing mechanics of these frameworks. We use a combination of first-principles molecular dynamics and metadynamics calculations to gain insight into the flexibility of the framework and how the free-energy landscape can be altered.

DOI: 10.5281/zenodo.10846029

5 Large scale atomistic simulations of nanocrystalline Iron formation and its irradiation performance

Ivan Tolkachev, University of Oxford

Reduced activation ferritic-martensitic (RAFM) steels are candidate materials for the construction of the first wall and breeding blanket in future nuclear fusion reactors. These materials will be subjected to 14.1 MeV neutrons that can penetrate the material lattice and displace atoms from their positions, causing damage within the material. The resulting defects can accumulate and cause changes in the mechanical and thermal properties of the material. It is important to study the effects of irradiation on iron, and how to improve material performance in this extreme environment. Nanocrystalline materials have previously been considered for use in reactor environments due to their high grain boundary density, which could act as a “sink” for irradiation defects. In this poster, we present a method of grain refinement through severe plastic deformation. We used the molecular dynamics code LAMMPS on ARCHER2 to apply shear strain to initially pristine Fe samples and observed the creation of nanocrystalline material due to the formation of a disordered atomic state and subsequent recrystallisation. We also present the initial results of collision cascade simulations, in which the nanocrystalline material is subsequently irradiated. Based on our results, we can gain fundamental insight into the mechanisms that control the formation of nanocrystalline materials, and how they are subsequently altered by irradiation. This is essential information for the targeted development of next-generation, highly radiation-resistant steels.

DOI: 10.5281/zenodo.10846054

6 Spin-Waves: A potential route to more efficient data transmission, storage and processing

Joel Hirst, Sheffield Hallam University

The excitation, propagation and detection of spin-waves through magnetic media has given birth to the field of magnonics, dedicated to harnessing collective spin excitations for advancing the next generation of data transmission. The main benefit of using spin-waves to transmit information is that they operate with reduced energy dissipation compared to conventional electrical currents and show promise for faster transmission. Atomistic Spin Dynamics (ASD) is a computational modelling technique used for measuring spin-wave dynamics. One of the most widely used packages for ASD modelling is VAMPIRE, originally developed by academics at the University of York. As part of this eCSE project, we are developing a module within VAMPIRE capable of calculating the spin-wave dynamics of magnetic materials. The module can be used in the future to confirm existing, and predict future, experimental results and observations in the field of magnonics.
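
For context, ASD packages such as VAMPIRE evolve each atomic spin $\mathbf{S}_i$ under the Landau-Lifshitz-Gilbert (LLG) equation, quoted here in its standard form:

$$ \frac{\partial \mathbf{S}_i}{\partial t} = -\frac{\gamma}{1+\lambda^2}\left[\mathbf{S}_i \times \mathbf{H}^i_{\mathrm{eff}} + \lambda\, \mathbf{S}_i \times \left(\mathbf{S}_i \times \mathbf{H}^i_{\mathrm{eff}}\right)\right], $$

where $\gamma$ is the gyromagnetic ratio, $\lambda$ the damping parameter and $\mathbf{H}^i_{\mathrm{eff}}$ the effective field derived from the spin Hamiltonian; spin-wave spectra are then typically extracted from space-time Fourier transforms of the resulting spin trajectories.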

DOI: 10.5281/zenodo.10846068

7 Computing infra-red spectra using finite differencing in CASTEP

Joseph Prentice, University of Oxford

In this work, we implement new functionality within the well-known density functional theory (DFT) code CASTEP: the ability to compute infra-red absorption spectra, using finite differencing of the Berry phase polarisation to obtain the Born effective charge tensors required to compute the IR absorption strengths. Finite differencing works ‘out of the box’ with virtually any electronic structure method in CASTEP, unlike density functional perturbation theory (DFPT), which must be explicitly coded for each method. To achieve this, we implement the main missing contribution to the Berry phase polarisation, which arises from ultra-soft pseudopotentials (USPs). We also implement the finite differencing approach to computing the Born effective charges itself, and benchmark the accuracy of these two implementations against existing work. We also make improvements to the parallelisation of the computation of the Berry phase polarisation and IR spectra, including k-point parallelism and task-farm parallelisation. This work allows IR spectra to be computed using previously unavailable methods, such as DFT+U and more accurate functionals such as meta-GGAs and hybrids; in materials where such methods are vital for an accurate description, this work will enable IR spectra to be calculated for the first time. Our work will also future-proof the calculation of vibrational spectra against new approaches, and has broader relevance to the modelling of vibrational spectra, e.g. Raman spectroscopy and phonon-loss EELS.
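
Schematically, the Born effective charge tensor computed here follows from a central difference of the Berry phase polarisation $\mathbf{P}$ with respect to a small displacement of atom $\kappa$ (the textbook form; the exact stencil used in CASTEP may differ):

$$ Z^{*}_{\kappa,\alpha\beta} = \frac{\Omega}{e}\,\frac{\partial P_\alpha}{\partial u_{\kappa\beta}} \approx \frac{\Omega}{e}\,\frac{P_\alpha(+\delta u_{\kappa\beta}) - P_\alpha(-\delta u_{\kappa\beta})}{2\,\delta u_{\kappa\beta}}, $$

where $\Omega$ is the cell volume and $e$ the elementary charge.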

DOI: 10.5281/zenodo.10846084

8 MONC Performance Portability

Juan Herrera, EPCC, The University of Edinburgh

MONC is a high-resolution atmospheric modelling code used in the UK by the Met Office and several UK universities. The NERC community wishes to leverage the next-generation hardware of ARCHER2 for its scientific ambitions. This project aimed to update MONC to tune it to the ARCHER2 architecture, and took the opportunity to address two long-standing issues: updating the current FFT dependency and relaxing a restriction in the code that limits the usable halo size.

DOI: 10.5281/zenodo.10846117

9 MPI+X on ARCHER2: observations from Ludwig

Kevin Stratford, EPCC, The University of Edinburgh

Ludwig is a parallel code for complex fluids, which include colloidal suspensions, gels, and liquid crystals, amongst other things. It takes its name from Ludwig Boltzmann, as it uses the lattice Boltzmann method for hydrodynamics. Coarse-grained order parameters, representing the ‘complex’ part, are evolved by finite difference.
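
For reference, the simplest single-relaxation-time (BGK) form of the lattice Boltzmann update that underlies such codes is (Ludwig itself supports more general collision operators):

$$ f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\ t + \Delta t) = f_i(\mathbf{x}, t) - \frac{\Delta t}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right], $$

where the distributions $f_i$ stream along the discrete lattice velocities $\mathbf{c}_i$ and the relaxation time $\tau$ sets the fluid viscosity.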

The code is written in ANSI C and includes a lightweight thread abstraction that allows it to be run with OpenMP, CUDA, or HIP. We discuss here some observations on OpenMP (CPU) performance and on GPU performance on ARCHER2.

DOI: 10.5281/zenodo.10847877

10 A multi-scale analysis of the impact of measurement and physiological uncertainty on electrocardiograms

Ludovica Cicci, Imperial College London

An electrocardiogram (ECG), recorded by sensors placed on the body surface, is a non-invasive diagnostic tool for evaluating the electrical activity of the heart and for the early detection of cardiac diseases. However, the cardiac activity and its propagation through the body depend on several factors, acting either at the cellular or at the organ level. Multi-scale electrophysiology (EP) computational models represent a promising tool to investigate the interactions between these factors and the ECG signal. In this respect, global sensitivity analysis (GSA) is a useful method to quantify the impact of electrophysiological parameters on clinical outputs of interest, to screen out the non-influential factors, and to map the remaining ones to the ECGs. Decreasing the number of input parameters is crucial to reduce the model complexity and the associated computation time, with the goal of matching clinical timescales and embedding EP simulations into clinical practice. In this work, we investigate the impact of 164 parameters, ranging from ionic conductances to cardiac fibre orientation and tissue conductivities, on scalar quantities extracted from a 12-lead ECG. The latter results from a forward EP simulation, able to reproduce the cardiac electrical activation, performed on a patient-specific whole-torso geometry. Using 512 nodes per simulation on the ARCHER2 computing facilities, we collected a dataset of more than 3000 input-output pairs to train Gaussian process emulators (GPEs), which were ultimately employed to evaluate the GSA indices in an efficient manner.
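
As an illustration of this workflow, the following minimal sketch trains a Gaussian process emulator on simulator input-output pairs and evaluates variance-based (Sobol) sensitivity indices from cheap emulator predictions. It assumes scikit-learn and SALib; the three parameter names, bounds, and the toy output are illustrative stand-ins for the 164-parameter EP model.

```python
# Sketch: emulator-based global sensitivity analysis (GSA).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in for the expensive simulator's input-output pairs
# (in the poster: >3000 pairs from forward EP simulations).
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(300, 3))
y_train = np.sin(X_train[:, 0]) + 2.0 * X_train[:, 1] ** 2  # toy QoI

# 1. Train a Gaussian process emulator on the expensive model.
gpe = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gpe.fit(X_train, y_train)

# 2. Sample the input space densely (cheap: uses the emulator only).
problem = {"num_vars": 3,
           "names": ["g_Na", "g_K", "sigma_t"],   # hypothetical names
           "bounds": [[0.0, 1.0]] * 3}
X_gsa = saltelli.sample(problem, 1024)

# 3. Variance-based Sobol indices from emulator predictions.
Si = sobol.analyze(problem, gpe.predict(X_gsa))
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
```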

DOI: 10.5281/zenodo.10847937

11 Implementing spin-lattice dynamics within the VAMPIRE software package

Mara Strungaru, University of York

Magnetic materials are typically modelled from an atomistic perspective, considering only the spin degrees of freedom. However, at ultrafast timescales the spin and lattice degrees of freedom mutually influence one another; hence it is necessary to employ a unified model. Here, we have implemented coupled spin-lattice dynamics (SLD) into the fast and highly scalable open-source software package, VAMPIRE. Our framework can successfully transfer energy and angular momentum between the lattice and the spin system, via a coupling term that can be parameterised by magneto-elastic experiments.

We have developed several new features for the existing software, such as position-dependent quantities, mechanical potentials, Suzuki-Trotter integration and numerous statistics (lattice and spin temperatures, energy contributions). Many of these modifications are localised, thanks to the modular structure of the code. We have performed several physical tests, such as conservation of energy, to assess the accuracy of the integration method, and we have also designed unit tests. To parallelise the framework via MPI, we use a checkerboard-style update for the Suzuki-Trotter decomposition, since the spin update can only be parallelised across non-interacting spins.
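
The checkerboard idea can be sketched in a few lines: colour the lattice sites so that no two interacting (nearest-neighbour) sites share a colour, then update one colour at a time. A toy serial version, assuming nearest-neighbour interactions only (the real VAMPIRE implementation additionally exchanges MPI halos between colour phases):

```python
# Sketch of a checkerboard (red-black) sweep over a 3D spin lattice:
# sites of one colour never interact with each other, so all of them
# can be updated concurrently before the other colour is touched.
import numpy as np

def checkerboard_sweep(spins, update_site):
    """One Suzuki-Trotter-style sweep; spins has shape (nx, ny, nz, ...)."""
    nx, ny, nz = spins.shape[:3]
    i, j, k = np.indices((nx, ny, nz))
    for colour in (0, 1):  # even-parity sites first, then odd-parity
        mask = (i + j + k) % 2 == colour
        # In an MPI code: exchange halo spins of the other colour here,
        # then update all sites of this colour independently.
        for x, y, z in zip(*np.nonzero(mask)):
            update_site(spins, x, y, z)
    return spins
```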

This work adds new functionality to the leading code for atomistic spin dynamics, allowing for novel simulations in exciting areas of magnetism, such as THz excitation, modelling of magnetic insulators, the Einstein-de Haas effect and hyperthermia. These new features will have a high impact on the magnetism community and will further encourage the use of HPC facilities to develop new research for future technologies.

DOI: 10.5281/zenodo.10848507

12 Linking Molecular to Whole-organ Function Using Multi-scale, Multi-physics Four-chamber Computational Models

Marina Strocchi, Imperial College London & King’s College London

Multi-scale, multi-physics models provide a physics-constrained framework to explore interactions between complex molecular processes and clinically measurable whole-organ function. A better knowledge of such interactions might help identify novel disease phenotypes and design novel targeted therapies, and ultimately improve patient care.

We developed a framework for a scalable, multi-scale and multi-physics whole-heart model that is able to simulate from molecular function up to whole-heart electrical activation, mechanical contraction and the circulatory system. We broke this model down into five hierarchical sub-models to identify unimportant parameters and reduce the complexity of the analysis. This allowed us to reduce the number of parameters from 117 to 45.

Using the ARCHER2 computing facilities, we used this platform to vary these 45 parameters, ranging across all scales and physics in the model, and to run 500 simulations to generate a training dataset for Gaussian process emulators. The trained emulators were then used to run a global sensitivity analysis and to build an interaction map between the model parameters and the clinically measurable outputs simulated by the model. This provides a robust method for mapping between molecular processes and clinical measurements.

DOI: 10.5281/zenodo.10848600

13 Multi-Layered MPI parallelisation for the R-matrix with time-dependence code

Martin Plummer, STFC Scientific Computing Department (Daresbury Laboratory)

To ‘view’ electrons in an atom or a molecule, we must capture their motion on a time-scale comparable to their interactions. This is akin to taking a photograph: the faster an object is moving, the shorter the exposure time required to capture it. In atomic, molecular and optical physics we effectively use ultrashort ‘camera’ flashes (laser pulses) to image and also control electrons in motion. While this field of attosecond physics (1 attosecond = 1 billionth of a billionth of a second) is well established, theory and computational models of the underlying mechanisms are relatively underdeveloped.

‘R-matrix with Time-dependence’ (RMT) solves the time-dependent Schrödinger equation to describe time-domain behaviour of atomic and molecular systems in external fields. RMT is at the forefront of research in attosecond/ultrafast/strong-field physics and is the only code capable of describing general systems driven by arbitrarily polarised laser pulses taking full account of multi-electron correlation.

RMT divides the physical space occupied by the electronic wavefunction into two regions. In the compact inner region, centred on the nuclear centre of mass, a basis set expansion includes full multi-electron effects, allowing an accurate description of correlation. In the outer region, an ionised electron is sufficiently isolated that exchange with residual electrons can be neglected and the wavefunction is described on a finite-difference grid. This eCSE project added new layers to the distinct parallelisation schemes for each region. We describe performance improvements and the systematic choice of the correct balance of inner- and outer-region MPI tasks, along with newly enabled application possibilities.

DOI: 10.5281/zenodo.10848761

14 Future-proof Parallelism for Plane-Wave Density Functional Theory

Matt Smith, University of York

Materials modelling codes which implement Density Functional Theory (DFT) in a Fourier basis rely on Fast Fourier Transforms (FFTs) to achieve acceptable efficiency and accuracy. A typical data decomposition for large simulations distributes ‘pencils’ of Fourier wave-vectors (‘plane-waves’) over MPI processes, load-balancing being complicated by the non-uniformity of data density in the Fourier domain.

A common strategy is to perform each 3D FFT as a set of 1D (and possibly 2D) FFTs, interleaving each of these with an inter-process communication stage as necessitated by the distribution of the Fourier components. While this approach balances efficiency of FFT execution with sufficiency of parallel exposure, the attendant communications ultimately limit the strong scaling of most plane-wave DFT codes.
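
The underlying identity is easy to demonstrate: a 3D FFT is exactly a sequence of 1D FFTs along each axis. In a pencil-decomposed code, each change of axis requires an all-to-all transpose; on a single process the ‘communication’ is free, as in this sketch:

```python
# Sketch of why a 3D FFT decomposes into 1D FFTs plus data
# rearrangements. In a pencil-decomposed code each step between
# axes is an MPI all-to-all transpose; here it is just a local view.
import numpy as np

a = np.random.rand(8, 8, 8)

step = np.fft.fft(a, axis=0)      # 1D FFTs along x (local pencils)
step = np.fft.fft(step, axis=1)   # "transpose" + 1D FFTs along y
step = np.fft.fft(step, axis=2)   # "transpose" + 1D FFTs along z

assert np.allclose(step, np.fft.fftn(a))  # identical to the 3D FFT
```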

We present our development of a plane-wave domain decomposition scheme, implemented in CASTEP as part of an eCSE project, which optimises FFT communications. To our knowledge, this constitutes a novel solution in the plane-wave DFT community. A 2D logical process grid, or more generally a convex polyomino, not only avoids global communications but also reduces the total number of messages transmitted. Recognising optimal load-balancing in this context as a well-known deterministic scheduling problem, we adopt several task-scheduling polynomial-time approximation schemes to distribute the work efficiently.

We illustrate efficient scaling of CASTEP to 4x as many processes as were previously possible, with 6x relative speed-ups regularly achieved on ARCHER2 and other UK HPC resources. The decomposition proves so efficient that FFT communications are no longer necessarily the limit to strong scaling; other costs, previously negligible by comparison, may now account for a significant fraction of the runtime.

No DOI

15 EPIC: The Elliptical Parcel-In-Cell method

Matthias Frey, University of St Andrews

We introduce a novel Elliptical Parcel-In-Cell (EPIC) method for the simulation of fluid flows. The Lagrangian parcels are represented by ellipsoids which deform, rotate and move in the flow field. The splitting of excessively elongated parcels and the merging of very small parcels with their nearest other parcel gives rise to a natural mixing mechanism. The parcel-merging algorithm involves analysing a graph of nearest-neighbouring parcels, which can reside on different MPI processes; this was implemented using one-sided (RMA) MPI communication. The EPIC method has proven to outperform non-deformable PIC methods and comparable Eulerian models for buoyancy-driven turbulent fluid flows in terms of effective small-scale resolution.
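
A toy sketch of the serial merging rule described above, assuming parcels carry a position, a volume and a buoyancy attribute (names are illustrative, not the EPIC data structures): each undersized parcel is absorbed by its nearest neighbour with volume-weighted averaging, so volume and volume-integrated properties are conserved.

```python
# Toy small-parcel merging: serial idea only. The real EPIC algorithm
# resolves merge graphs across MPI ranks with one-sided communication.
import numpy as np

def merge_small_parcels(pos, vol, buoyancy, v_min):
    keep = np.ones(len(vol), dtype=bool)
    for p in np.nonzero(vol < v_min)[0]:
        if not keep[p]:
            continue
        d = np.linalg.norm(pos - pos[p], axis=1)
        d[p] = np.inf
        d[~keep] = np.inf                       # skip merged parcels
        q = int(np.argmin(d))                   # nearest other parcel
        w = vol[p] + vol[q]
        pos[q] = (vol[p] * pos[p] + vol[q] * pos[q]) / w
        buoyancy[q] = (vol[p] * buoyancy[p] + vol[q] * buoyancy[q]) / w
        vol[q] = w                              # volume is conserved
        keep[p] = False
    return pos[keep], vol[keep], buoyancy[keep]
```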

DOI: 10.5281/zenodo.10848819

16 Public engagement in High Performance Computing

Lorna Smith, EPCC, The University of Edinburgh

EPCC operates a wide range of data facilities and High Performance Computing platforms. Our public engagement activities aim to demonstrate the societal benefits of these facilities to the general public, to explain their purpose, and to show that they are a valuable use of public funds. As a National Service provider, we focus on activities that provide benefit across the UK.

In this poster we will demonstrate the different fun, hands-on activities we deliver for participants. These range from Wee Archie, our suitcase-sized supercomputer, to logic puzzles that encourage problem-solving skills. We will also showcase our work placement opportunities, which give school children experience of the world of work and the chance to explore careers in computational science.

DOI: 10.5281/zenodo.10848862

17 Adding ADIOS2 to the Xcompact3D CFD code

Paul Bartholomew, EPCC, University of Edinburgh

An often-overlooked component of scientific codes is the I/O system. As a consequence, I/O may become the performance-limiting bottleneck, especially at scale. This poster is based on the ARCHER2 eCSE03-02 project, which added ADIOS2 support to the 2DECOMP&FFT library used (amongst others) by the Xcompact3D framework, dedicated to the study of turbulent flows on supercomputers. Doing so serves two purposes: 1) the I/O task is abstracted by the ADIOS2 library, giving access to different formats without having to implement them directly; and 2) ADIOS2 supports streaming I/O to another application, enabling in-situ analysis to be implemented and avoiding the disk entirely. As part of this work, the Py4Incompact3D postprocessing toolkit was parallelised to make it suitable for use in an in-situ context, making it easier for users to implement their own analyses.

A comparison of Xcompact3D I/O performance shows that the ADIOS2-based backend achieves better scaling on ARCHER2 than the original MPI-IO-based backend. This becomes significant for large-scale problems running on 64–1,024 ARCHER2 nodes, with the ADIOS2-based backend achieving close to the theoretical limit of I/O performance on ARCHER2.

The parallelisation of Py4Incompact3D follows the same approach used in Xcompact3D, by developing a Python wrapper for the 2DECOMP&FFT library. As would be expected from Xcompact3D, the parallelised Py4Incompact3D demonstrates excellent scalability. When testing the in-situ analysis configuration, it was found that the Python implementation of the numerical methods became a bottleneck; a pure NumPy solution was therefore implemented, and Xcompact3D-Py4Incompact3D in-situ analysis was demonstrated with performance equivalent to the pure Fortran solution.

DOI: 10.5281/zenodo.10849005

18 Implementing an implicit solvent model in a periodic DFT code

Pavel Stishenko, Cardiff University

Many industrially relevant chemical reactions and physico-chemical processes occur in solvent environments. Most of the solvent molecules do not directly participate in these processes, yet affect them via electrostatic, steric, and entropic contributions to the total free energy. Implicit solvent models provide a shortcut to account for these effects while avoiding the explicit simulation of a huge number of solvent molecules. Recently, a piecewise Multipolar Expansion (MPE) model was proposed [10.1021/acs.jctc.1c00834] and implemented in the FHI-aims DFT code [10.1016/j.cpc.2009.06.022]. The model demonstrates a remarkable 0.1 eV accuracy compared with experimental solvation energies for molecules with up to 28 non-hydrogen atoms [10.13020/3eks-j059]. However, simulations with the MPE model require solving systems of linear equations (SLEs) with a large dense matrix; this solve scales as O(n³) with the number of solute atoms and becomes a bottleneck for larger systems.
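
The scaling argument can be illustrated with a toy comparison (illustrative matrices, not the actual MPE operators): solving a dense N x N system costs O(N³), whereas a sparse system with O(N) non-zeros can be solved in roughly linear time for well-conditioned matrices.

```python
# Sketch: dense vs sparse linear solves and their asymptotic costs.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 2000
b = np.ones(n)

# Dense path: O(n^3) factorisation dominates for large n.
A_dense = np.eye(n) + 0.01 * np.random.rand(n, n)
x_dense = np.linalg.solve(A_dense, b)

# Sparse path: tridiagonal matrix, O(n) non-zeros, ~O(n) solve.
A_sparse = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1],
                    shape=(n, n), format="csc")
x_sparse = spsolve(A_sparse, b)
```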

Working on the eCSE08-3 project, we adopted a piecewise representation of the electrostatic potential in the solvent and managed to dramatically improve the sparsity of the SLE matrix. Thus, asymptotically linear scaling was achieved and memory requirements were reduced. Moreover, with the new solvent representation we were able to support the MPE model for periodic systems. This enables FHI-aims users to account for solvent effects in models of adsorption and reactions on surfaces; such systems and processes are routinely simulated using FHI-aims for the purposes of green chemistry and energy harvesting and storage.

The poster will showcase the computational benefits achieved, newly enabled features of the FHI-aims code, and challenges encountered during the project.

DOI: 10.5281/zenodo.10849218

19 Unravelling Attosecond Dynamics: A General Approach To Ultrafast Atomic Simulations

Sean Mashallsay, Queen’s University Belfast

The 2023 Nobel Prize in Physics was awarded for work which pushed the field of ultrafast atomic physics into the attosecond (10⁻¹⁸ s) timescale. However, simulating ultrafast atomic interactions poses a significant challenge. Simple models often neglect important processes, while more accurate approaches tend to be restricted to specific laser regimes or targets. A general approach would require fully describing the multielectron dynamics of the system, but extending that description to a large spatial region is prohibitively computationally expensive. The R-Matrix with Time-Dependence (RMT) code achieves the required performance using an R-matrix segmentation of space, which provides a simpler description far from the atom while taking full advantage of the multielectron description close to the atom, together with multiple MPI parallelisation schemes and OpenMP parallelism. With such a general model, RMT can not only provide insights into experimental findings but also guide the direction of future experimental work.

In recent years there have been two key areas in which RMT has been used to further the field: light-atom interactions in the perturbative regime, and laser-atom interactions in the strong-field regime. The perturbative regime is particularly important for understanding how atomic structure affects interactions. However, perturbative models neglect many atomic processes by necessity, whereas RMT can provide vital information about the effects of non-perturbative processes in and near that regime. On the other hand, strong-field ionisation is central to our understanding of quantum phenomena like tunnelling, and RMT’s full description of electron dynamics, alongside the detailed atomic structure it can use, provides novel insights.

DOI: 10.5281/zenodo.10849345

20 Small-scale mixing in a parcel-based model of moist convection

Steven Boeing, University of Leeds

The Elliptical Parcel-in-Cell (EPIC) model is used to simulate cumulus clouds. For the first time, this includes both idealised single clouds and a realistic trade-wind cumulus case study. In EPIC, mixing is modelled as the stretching, splitting and merging of parcels. Previous studies have already shown that mixing between parcels is relatively weak at coarse resolution in EPIC, and large (parallel) simulations are needed to capture all the relevant turbulent scales. A modification to EPIC is made to represent the effects of mixing by eddies that are not resolved on the grid scale. This modification is inspired by subgrid formulations in Large Eddy Simulation models, and is designed in such a way that quantities such as heat and humidity are conserved. Although this modification reduces the issue of weak mixing at low resolution, it also leads to the undesirable suppression of fine-scale circulations when applied to the vorticity field.

DOI: 10.5281/zenodo.10849385

21 Laser-plasma instabilities at Shock Ignition scales

Stuart Morris, University of Warwick

When hydrogen isotopes are heated and compressed to high temperatures and densities, they transition into a plasma state and can release energy through nuclear fusion. This has the potential to create a near-limitless clean energy source, provided the fusion conditions can be achieved within the plasma. In the direct-drive method, powerful laser beams are focused onto fuel pellets to provide the heating and compression required to fuse the hydrogen isotopes within. However, the plasma can become unstable under laser irradiation and redirect laser energy away from the direct-drive process. These laser-plasma instabilities can prevent the fuel from reaching fusion conditions, and they must be suppressed to achieve any significant power output in future direct-drive fusion reactors.

In the Shock-Ignition scheme, the laser intensity is varied over time to suppress some forms of instability, but other instabilities may still occur. The laser profile contains regions of high intensity (speckles) which can trigger instabilities, but various techniques, such as high-bandwidth lasers or frequency-modulated optics, exist to minimise this. It is important to understand the role each of these plays in the growth of instabilities in order to design a system capable of fusion generation.

These complex laser-plasma systems can be modelled using particle-in-cell (PIC) codes, but the length- and time-scales involved in Shock-Ignition simulations can be prohibitively expensive for most computers. However, using ARCHER2, we have performed multiple scans over the design space with the PIC code EPOCH to gain a greater understanding of laser-plasma coupling.

DOI: 10.5281/zenodo.10849407

22 The consequences of tritium mix for simulated ion cyclotron emission spectra from deuterium-tritium plasmas

Tobias Slade-Harajda, University of Warwick

Measurements of ion cyclotron emission (ICE) from magnetically confined fusion (MCF) plasmas are helpful for understanding the physics of the energetic ion populations therein. ICE is studied in most large MCF experiments, and may be used in future to measure properties of the fusion-born alpha-particle population in deuterium-tritium (DT) plasmas in ITER. Diagnostic exploitation of ICE is assisted by the particle-in-cell (PIC) kinetic code EPOCH, which self-consistently solves the Maxwell-Lorentz system of equations for millions of computational ions and fully resolves gyromotion. It hence captures the cyclotron resonant phenomenology underlying ICE, which is driven by an energetic, spatially localised, strongly non-Maxwellian ion population relaxing under the magnetoacoustic cyclotron instability (MCI). Here, for the first time, we incorporate a population of thermal tritons in addition to deuterons in EPOCH simulations of ICE relevant to MCF DT plasmas. Physically, the tritium population may support additional cyclotron harmonic waves, and tritons may also participate in wave-particle cyclotron resonant interactions involving thermal deuterons and the alpha-particles driving ICE, particularly at frequencies where deuteron and triton cyclotron harmonics are degenerate. We quantify the resulting variation in the distribution of ICE spectral peak intensities with tritium concentration. This variation is noticeable, and therefore important for the development of ICE diagnostics for future DT plasmas; nevertheless, simulations involving only thermal deuterons remain a good overall guide. Our conclusions are reinforced by analysis of the time-evolution of the kinetic and field energy densities in the simulations, together with bicoherence analysis of the nonlinear interactions which couple energy flow between different cyclotron harmonics.
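
For reference, the squared bicoherence conventionally used in such nonlinear-coupling analysis measures the phase coupling between spectral components at $f_1$, $f_2$ and their sum:

$$ b^{2}(f_1, f_2) = \frac{\left|\left\langle X(f_1)\,X(f_2)\,X^{*}(f_1+f_2)\right\rangle\right|^{2}}{\left\langle |X(f_1)\,X(f_2)|^{2}\right\rangle \left\langle |X(f_1+f_2)|^{2}\right\rangle}, $$

where $X(f)$ is the Fourier transform of the fluctuating field signal and the angle brackets denote averaging over realisations or time windows.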

DOI: 10.5281/zenodo.10849452

23 OpenEP Workbench: A computational platform for identifying fibrotic regions and conduction disturbances in the atria using conduction velocity.

Vinush Vigneswaran, The University of Edinburgh – Centre for Cardiovascular Science

Background: Fibrotic remodelling in the atria, associated with atrial fibrillation (AF), creates areas of slowed and heterogeneous conduction. These areas act as substrates that promote AF maintenance and perpetuation; hence, detecting these regions is crucial for understanding and managing AF. One promising approach for identifying them is the estimation of conduction velocity (CV).

Purpose: In this study, we sought to enhance the OpenEP Workbench software for researchers by (1) incorporating three well-established methods for CV estimation (triangulation, planar fitting, and radial basis function interpolation) and a CV divergence calculation to detect conduction disturbances; (2) assessing the performance of these three CV calculation methods in identifying fibrotic regions, using simulated data as the ground truth; and (3) developing a visualisation tool to enable identification of slow-conduction regions at the optimal classification threshold attained from (2).

Results: The classification performance of the three CV methods in identifying fibrotic regions was assessed, with the triangulation method achieving the highest accuracy and AUC (accuracy = 78%, AUC = 0.88). OpenEP Workbench was enhanced with tools to create 3D surface maps of voltage, CV and CV divergence from electro-anatomical mapping data. Additionally, a histogram analysis tool was implemented to identify regions of slow conduction velocity from electro-anatomical mapping data. Using a threshold of <0.3 m/s, two slow-conducting regions can be visualised on the posterior wall of a test case.
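
As a flavour of how such estimators work, the following minimal sketch implements the plane-fitting idea (illustrative only, not the OpenEP Workbench code): fit the local activation time T as a linear function of electrode position; the conduction speed is then 1/|grad T|.

```python
# Sketch of plane-fitting conduction-velocity estimation.
# Coordinates in mm and times in ms give CV directly in m/s.
import numpy as np

def cv_plane_fit(xy_mm, t_ms):
    """xy_mm: (n, 2) electrode positions; t_ms: (n,) activation times."""
    A = np.column_stack([xy_mm, np.ones(len(t_ms))])
    (a, b, _), *_ = np.linalg.lstsq(A, t_ms, rcond=None)  # T = ax+by+c
    grad = np.hypot(a, b)             # |grad T| in ms/mm
    return 1.0 / grad                 # mm/ms == m/s

# A planar wave travelling at 0.5 m/s along x:
xy = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(cv_plane_fit(xy, xy[:, 0] / 0.5))   # -> 0.5
```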

Conclusion: The enhanced OpenEP Workbench provides a pre-built pipeline for researchers and clinicians to optimise CV and fibrotic-region classifiers and to quantify conduction velocity heterogeneity in electro-anatomical mapping data.

DOI: 10.5281/zenodo.10849540

24 Introducing GeoChemFoam to ARCHER2

Hannah Menke, Heriot-Watt University

GeoChemFoam (GCF) (http://github.com/GeoChemFoam) is an open-source pore-scale modelling code from the Institute for GeoEnergy Engineering at Heriot-Watt University, built on the OpenFOAM computational toolbox. Recently, we have successfully developed several new modules in GCF with diverse applications to a range of processes relevant to the clean energy transition. These include low capillary number multiphase flow (e.g. hydrogen storage), equilibrium reactions (e.g. CO2 capture and storage), heat transfer (e.g. fuel cells and geothermal energy), the integration of machine learning tools (e.g. upscaling permeability in reservoirs), and Darcy-Brinkman-Stokes transport and reactive transport (e.g. contaminant transport, low-carbon building materials). However, the meshing strategies are not optimised for an HPC environment. The goal of this project is to parallelise and upgrade the meshing and the pre- and post-processing modules so that they are optimised for high-performance computing environments, and to create an on-demand multiscale adaptive meshing module to improve the scientific applicability of our solvers to highly heterogeneous porous systems. As part of the project, we have also released a new module, ‘GCFv5.0’, which is centrally available to all ARCHER2 users, along with user documentation and an installation guide.

DOI: 10.5281/zenodo.10849917

25 ExCALIBUR: An exascale software programme

Nick Brown, EPCC, University of Edinburgh

The Exascale Computing ALgorithms & Infrastructures Benefiting UK Research (ExCALIBUR) programme is a research effort aiming to enable the exploitation of future supercomputers by the next generation of high-performance simulation software. Focused heavily, although not exclusively, on weather and climate and fusion energy workloads, the programme aims to further develop the major software and algorithmic building blocks required for exascale supercomputing. Funded by the UK, running between 2019 and 2025, and having leveraged ARCHER2 extensively throughout its lifetime, the programme comprises four major themes: use-cases, exascale techniques and technologies, next-generation computing hardware, and investment in people.

Initially, activities identifying specific exascale use-cases and requirements across key scientific and engineering domains were undertaken; these then fed into calls for exascale techniques and technology projects. Working with the use-cases to support their use of large-scale HPC, these projects have ranged across domain-specific languages, quantum computing, machine learning for computation, exascale I/O, workflows, simulation code coupling, task-based parallelism, and parallel-in-time methods. Concurrently, testbeds have been stood up to explore next-generation computing architectures, including FPGAs, RISC-V, Graphcore, Arm, and Cerebras CS-2, with benchmarking for evaluation. Lastly, projects supporting, encouraging, and training HPC scientific software developers (RSEs) have run, exploring how to ensure RSE careers are fulfilling and valued and addressing the skills shortage faced by the HPC community.

The work undertaken benefits not only the ARCHER2 community but also future exascale supercomputers. The poster will summarise the programme, highlight key outcomes, and show how ARCHER2 has been key to meeting our objectives.

DOI: 10.5281/zenodo.10849992