
Getting started using ARCHER2 and HPC

The Service Desk team have helped more than a few people get from ‘zero’ to ‘HPC whizz’, so here we summarise all the useful information we have compiled over the years. This isn’t meant as a how-to guide, but as a signpost to all the resources and materials which we think...

Business continuity management for ARCHER2

If you run a data centre full of expensive kit hosting a variety of services, including ARCHER2, how do you minimise the chances of something negatively impacting those services, and how do you decide what to work on first if something does happen that impacts...

Commercial access to ARCHER2 and Cirrus

EPCC provides world-class supercomputing and data facilities and services for science and business. We are a leading centre in the field, renowned globally for research, innovation and teaching excellence. EPCC has three key foundations: the hosting, provision and management of high performance computing (HPC) and data facilities for academia and...

Investigating the performance of NAMD and NEMO on ARCHER2

In this blog, I will show results from optimising and tuning NAMD and NEMO for the ARCHER2 architecture. We looked at the placement of parallel processes and at varying runtime configurations, aiming to generalise optimisations across architectures and guide users in their own performance tests. We found...

Computing Insight UK: Lots of computing, insights, and nice to be back meeting in person!

In December EPCC attended and contributed to Computing Insight UK (CIUK), held in Manchester over two days. With players from across UK academia and industry, this annual conference focusses on the UK’s contribution to HPC and is a great opportunity to hear about new trends,...

Software usage data on ARCHER2

In this blog, I will describe how we collect software usage data from Slurm on ARCHER2, introduce the sharing of this data on the ARCHER2 public website, and then take a look at differences (or not!) in software usage on ARCHER2 between two months: December 2021, the initial access period...
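
For a flavour of what collecting such data can involve: Slurm's accounting database can report per-job records from which application usage is derived. A minimal sketch, assuming standard `sacct` options (the field choices and post-processing here are illustrative, not ARCHER2's actual pipeline):

```bash
# Query Slurm accounting for completed jobs in December 2021, one record
# per job allocation: job ID, job name, account, node count, elapsed time.
# (Illustrative fields only; not necessarily ARCHER2's actual query.)
sacct --allusers --starttime=2021-12-01 --endtime=2022-01-01 \
      --state=COMPLETED --allocations --parsable2 --noheader \
      --format=JobID,JobName,Account,NNodes,Elapsed > december-jobs.txt

# Tally the most frequently run job names as a rough proxy for
# which applications are being used.
cut -d'|' -f2 december-jobs.txt | sort | uniq -c | sort -rn | head
```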

ARCHER2 Driving Test launched

We are delighted to announce that the ARCHER2 Driving Test is now available. This online assessment tool allows those new to ARCHER2 to demonstrate that they are sufficiently familiar with ARCHER2 and HPC to start making use of it. It is suitable for anyone...

Disaster recovery testing to improve resilience

To keep the ARCHER2 National HPC Service running around the clock requires specialised staff, covering everything from the Service Desk and science support, through to hardware maintenance and data-centre hosting, alongside third-party suppliers for power, networking, accommodation, and so on. Coordinating these elements is a complex task, even in normal...

Introducing the ARCHER2 full system

The ARCHER2 full system was opened to users on the morning of Monday 22 November 2021. In this blog post I introduce the full system and its capabilities, and look forward to additional developments on the service in the future. TL;DR For those who just want to get stuck in...

ISO External Audit Success

EPCC are delighted to announce that we have passed our ISO 9001 Quality and ISO 27001 Information Security external audits with flying colours. We put the highest importance on service delivery and secure handling of customer data throughout the year. Even so, it is still a nerve-racking...

DEA Audits – a world of pain

In spring 2020 EPCC became one of the very few organisations accredited as data processors under the Digital Economy Act (DEA): one of exactly eight in the UK! This accreditation was for the hosting and technical management of the National Safe Haven. EPCC is...

Using Containers to Install GUI-based Tools on ARCHER2

We have all become accustomed to having a wide range of pre- and post-processing tools available to us on our laptops, which can make working on the login nodes of a large HPC system such as ARCHER2 rather inconvenient if your favourite tools aren’t available. On something fairly standard like...
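
The basic pattern is to pull a container image that already holds the tool and run the tool in place. A minimal sketch assuming Singularity as the container runtime (the image name, registry path and tool are hypothetical):

```bash
# Fetch a pre-built image containing the desired tools.
# (my-tools.sif and the registry path are hypothetical examples.)
singularity pull my-tools.sif docker://example.org/my-tools:latest

# Run a tool from inside the image without installing anything
# on the host system itself.
singularity exec my-tools.sif mytool --version
```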

A Container Factory for HPC

This blog post follows on from “HPC Containers?”, which showed how to run a containerised GROMACS application across multiple compute nodes. The purpose of this post is to explain how that container was made. We turn now to the container factory, the environment within which containers are first created and...
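
To give a feel for the factory step, here is a minimal sketch assuming Singularity: a definition file is written and then built into an image on a machine where you have root, i.e. the factory, not the HPC system itself. The recipe below is a generic skeleton, not the actual GROMACS recipe from the post:

```bash
# gromacs.def -- a skeleton Singularity definition file (hypothetical).
cat > gromacs.def <<'EOF'
Bootstrap: docker
From: ubuntu:20.04

%post
    # Install a build toolchain, then fetch and compile the application.
    export DEBIAN_FRONTEND=noninteractive
    apt-get update && apt-get install -y build-essential cmake
    # ... application build steps go here ...

%runscript
    exec gmx "$@"
EOF

# Build the definition file into an immutable image.
sudo singularity build gromacs.sif gromacs.def
```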

HPC Containers?

Containers are a convenient means of encapsulating complex software environments, but can this convenience be realised for parallel research codes? Running such codes costs money, which means that code performance is often tuned to specific supercomputer platforms. Therefore, for containers to be useful in the world of HPC, it must...
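
The crux for parallel codes is pairing the host's MPI launcher with the container runtime, so that one containerised instance starts per MPI rank. A hedged sketch assuming Slurm and a Singularity image (the image and input file names are placeholders, and in practice the MPI library inside the image must be compatible with the host interconnect to perform well):

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128

# srun launches one container instance per MPI rank across both nodes.
# (gromacs.sif and benchmark.tpr are hypothetical names.)
srun --distribution=block:block \
     singularity exec gromacs.sif gmx_mpi mdrun -s benchmark.tpr
```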

ARCHER2 final cabinets arrive

This week the final cabinets to complete the full ARCHER2 system have arrived onsite in Edinburgh on multiple trucks. The boxes were so large that doors had to be removed. All cabinets are now safely in place, with water cooling enabled. Further work to integrate them into the system is...

ARCHER2 Job Priority

An HPC service such as ARCHER2 manages thousands of user-submitted jobs per day. A scheduler is used to accept, prioritise and run this work. To control how jobs are scheduled, all schedulers provide features for defining how work is prioritised. On the ARCHER and ARCHER2...
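
For concreteness, on a Slurm-based system users can inspect the factors feeding a pending job's priority with the standard `sprio` tool; the weight given to each factor (age, fair-share, job size, QOS, ...) is configured per service, so the numbers vary. The job ID below is hypothetical:

```bash
# Show the individual priority factors contributing to each pending job.
sprio --long

# Restrict the report to a single job, e.g. job 123456.
sprio --jobs=123456
```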

CSE Team gain access to the main ARCHER2 system

Today saw a significant milestone in our move towards the full 23-cabinet ARCHER2 system. A small group of CSE staff gained access to the main HPE Cray EX supercomputing system and are currently putting it through its paces. Andy, William, David, Adrian, Julien and Kevin will be testing...

Training on ARCHER2, what's next?

The first year of the ARCHER2 service has been very challenging, mainly due to the COVID-19 pandemic. Nonetheless, we have successfully delivered a fully online training programme: since April 2020, a total of 66 days of training have been delivered under the ARCHER2 service. We have used Blackboard Collaborate software for...

Meet the ARCHER2 team – UKRI

ARCHER2 is funded by UKRI through two partner councils, EPSRC and NERC, who started building the case for the system back in 2016. Rebecca How from EPSRC’s Research Infrastructure team joined the ARCHER2 project in January 2019, taking over the Project Manager role for the upcoming service, while also acting...

ARCHER2 MPI with OpenMP mini-app benchmarks

Quite a few application codes running on ARCHER2 are implemented using both MPI and OpenMP. This introduces an extra parameter that determines performance on a given number of nodes: the number of OpenMP threads per MPI process. The optimum value depends on the application, but is also influenced by...
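
As an illustration of the parameter in question, a sketch of a hybrid Slurm job on a 128-core node (the executable name is a placeholder; splits such as 32x4 or 64x2 are explored by resubmitting with different values while keeping ntasks-per-node times cpus-per-task equal to 128):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8

# Give each MPI process one OpenMP thread per allocated core.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Launch 16 MPI processes x 8 OpenMP threads on the 128-core node.
# (./miniapp is a placeholder executable name.)
srun --hint=nomultithread ./miniapp
```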

Software use on ARCHER2 - an initial look

Back in February, I reviewed the usage of different research software on ARCHER over much of its lifetime. Now that we have just come to the end of the first month of charged use on the ARCHER2 4-cabinet system, I thought it would be interesting to have an initial...

Urgent atmospheric modelling of an active volcano using ARCHER2

On the evening of 19 March 2021 a volcanic eruption started in Fagradalsfjall, Iceland. Although the episode posed no threat to aviation (no ash was produced), significant amounts of volcanic gases were (and still are being) released. Such gases can cause respiratory problems, and if the concentrations are high enough,...

Attending the “Efficient use of the HPE Cray EX Supercomputer ARCHER2” course

With the end of the ARCHER service in January 2021, the ARCHER2 4-cabinet pilot system has now been operating as the national service for three months. Its new architecture, programming environment, tools and scheduling system present a fresh challenge for users, who must experiment with them and discover techniques to achieve optimal...

Meet the ARCHER2 team - HPC Systems

The HPC Systems Team provides the System Development and System Operations functions for ARCHER2 - but who are we and what do we do? We are a team of 15 System Administrators and Developers who work to deploy, manage and maintain the services and systems offered by EPCC, as well...

Hello ARCHER2 - Install video

We recently received the main ARCHER2 hardware at the ACF, and our team recorded the process of installation as this exciting new system was deployed. You’ll see large “Mountain” cabinets being deployed, each of which holds 256 compute nodes with 128 CPU cores each, as well as a number of...

Future investment in Science

It seems strange to be thinking about how to justify investment in ARCHER3 and beyond, with ARCHER2 not fully in service yet, but it is never too early to start planning this. ARCHER generated a significant amount of world-leading science and we fully anticipate ARCHER2 will as well. It is...

Hello ARCHER2

It’s been a busy week at the Advanced Computing Facility (ACF) with the arrival of the remaining ARCHER2 cabinets. The long journey started at the HPE Cray factory in Chippewa Falls, Wisconsin (birthplace of Seymour Cray) before arriving at Prestwick Airport. It took three separate journeys of four lorries to...

ARCHER Quiz

Having said Farewell to ARCHER, we invite you to try our short quiz, to see how much you can remember about the service.

eCSE Panel Meeting - Behind the scenes

The ARCHER2 eCSE programme provides funding to carry out development of software used by the ARCHER2 user community. As part of our commitment to encouraging and developing early career researchers, we offer a small number of them the opportunity to attend the eCSE Panel Meeting as observers. The...

From the Service Desk

Last week marked the end of the ARCHER service after seven years. You may have heard some statistics about ARCHER over the last week, but I wanted to share some from the helpdesk. During the lifetime of ARCHER, the User Support and Systems teams (SP) resolved 57,489 contractual...

Research Software Use on ARCHER

Now that the ARCHER service has finished, I thought it would be interesting to take a brief look at the use of different research software on the service over its lifetime. ARCHER was in service for over 7 years: from late 2013 until its shutdown in early 2021. Over the...

ARCHER Driving Test retrospective

Back in February 2015, a little over a year after the ARCHER service began, EPCC launched an entirely new and innovative access route for users to get time on ARCHER via a “Driving Test”. The idea behind this was to help those who had never used ARCHER, or possibly any...

ARCHER's final moments

As the ARCHER service was switched off, our colleagues at the ACF photographed and recorded ARCHER’s final moments. With a particular thank you to Connor, Aaron, Jordan, Craig and Paul at the ACF and to Spyro for putting this video together, we give you ARCHER’s final moment. It is best...

Farewell to ARCHER

At 8am this morning the ARCHER service was switched off. This marks the end of a remarkable service: funded by EPSRC and NERC, ARCHER has acted as the UK’s National HPC Service for the last 7 years. Entering service at the end of 2013, just over 5.6 million...

ARCHER2 Early Access - communication and community

We are currently in (or have recently finished, depending on when you read this) the Early Access period for the ARCHER2 service. During this period, users nominated by their peers and the ARCHER2 teams at EPCC and HPE Cray have had access to the ARCHER2 4-cabinet system. The Early User...

External audits and cake in a time of plague

With part of ARCHER2 arrived and in early access, with all the preparation to move from ARCHER to ARCHER2, with staff working remotely from home, and with all the other work ongoing, why on earth were we preparing for our annual ISO external audit and how could we make it work?...

Testing GROMACS on ARCHER2 4-cab

One of the upsides of working on the ARCHER2 CSE team is that, sometimes, one finds oneself in the interesting position of being the only user on the ARCHER2 four-cabinet system (at least the only apparent user; I’m sure that AdrianJ is lurking on one of the compute nodes...

ARCHER2 – Build Timelapse

On 14 July 2020 the first four cabinets of ARCHER2 arrived at the ACF, and over the following few days they were unpacked, moved into position, connected up and ultimately powered up, ready to go. Gregor Muir was on hand, capturing the whole process in still images and timelapse...

ARCHER2 Build Intern log

A full week on from the 4-cabinet ARCHER2 installation (affectionately dubbed Mini-ARCHER2, as it has only 18% of the full Shasta system’s 5848 compute nodes), life is beginning to quieten down again for the temporary Summer Team at the ACF. This is now the second year that the...

ARCHER2 Build Final Day

Final build day! This was the last day of the build of our 4-cabinet system. Yesterday saw the system booted and lots of testing carried out. The first job of the day was to locate some erroneous cables that were preventing power from being read properly. Troubleshooting then identified the need...

Day 4 of the ARCHER2 Build

Day 4, and the vast majority of the work has been done, with the power commissioned and the management and storage systems configured and tested. As of last night, remote access was also enabled. As we near completion, a number of our American colleagues have now left to return to...

Day 3 of the ARCHER2 Build

Day 3, and significant progress has been made. Yesterday saw the fitting of the cooling infrastructure between the Mountain cabinets and the CDU (cooling distribution unit). The site’s water supply has now been connected, bled and made live. All the power connections have been made and the CDU has been powered up as...

ARCHER2 – Build Day 2

Yesterday saw the first phase of ARCHER2 arrive on site, with all four Shasta Mountain cabinets moved into their correct positions. The images below show the back and the front of these cabinets. The red and blue cables are colour-coded water pipes for the cooling system,...

ARCHER2 – 4 cabinets arrive

The first phase of ARCHER2 is in Edinburgh! The 4-cabinet Shasta Mountain system, the first stage of the 23-cabinet machine, has completed its journey from Chippewa Falls in Wisconsin, making its way from Prestwick Airport to Edinburgh this morning. The arrival of these large crates has, I admit,...

ARCHER2 – 4 cabinets ship from the US

COVID-19 has created significant challenges for the delivery of the new ARCHER2 system. It is therefore really exciting to see the first 4 cabinets of ARCHER2 leave Cray/HPE’s factory in Chippewa Falls, Wisconsin, to begin their journey to Edinburgh. ARCHER2 will replace the current ARCHER system, a Cray XC30, as...

UKRI ARCHER2 update

UKRI are having weekly meetings with the ARCHER2 providers, EPCC and Cray/HPE. We are making good progress and are still on track to start installation of some compute capacity (a pilot system) in mid-July. We are also looking at further ARCHER extensions due to the COVID-19 delay. We now expect...

UKRI Announce ARCHER2 services contracts awarded

UK Research and Innovation (UKRI) has awarded contracts to run elements of the next national supercomputer, ARCHER2, which will represent a significant step forward in capability for the UK’s science community. EPCC has been awarded contracts to run the Service Provision (SP) and Computational Science and Engineering (CSE) services for...

Welcome to the ARCHER2 website!

We are pleased to welcome you to the new ARCHER2 website. Here you will find all the information about the service including updates on progress with the installation and setup of the new machine. Some sections of the site are still under development but we are actively working to ensure...

UKRI ARCHER2 Hardware Announcement

Details of the ARCHER2 hardware which will be provided by Cray (an HPE company). Following a procurement exercise, UK Research and Innovation (UKRI) are pleased to announce that Cray have been awarded the contract to supply the hardware for the next national supercomputer, ARCHER2. ARCHER2 should be capable on average...