Australasian Leadership Computing Symposium 2019 (ALCS):

Aidan Heerdegen and Claire Carouge from CMS attended the Australasian Leadership Computing Symposium (ALCS) 2019 in Canberra. The symposium brought together scientists from astronomy, materials, genetics, geosciences, and climate and weather sciences. Talks covered high-performance computing (HPC) and high-performance data (HPD), current practices and issues across the different sciences, and the outlook for HPC.

It appears every field faces similar issues: growing data volumes and larger computational needs. Interestingly, although the problems differ greatly in scale, everyone has to learn new technical skills in programming and data management to tackle them.

Claire gave a presentation on the benefits the CMS team has brought to CLEX and to the previous Centre of Excellence for Climate System Science (ARCCSS). The presentation was well received, with researchers from outside CLEX commenting that they would really appreciate having access to a CMS team.

There was also a presentation about the Astronomy Data And Computing Services (ADACS). This service is similar to CMS in its goals but structured differently: it is a stand-alone NCRIS facility serving the entire Australian astronomy community. The presentation was interesting as it highlighted the same benefits that the CMS team brings to the Centre, and the same feedback from the researchers they have helped.

Considering the increasing technical complexity of the research being undertaken, and the problems faced by researchers as highlighted at ALCS, investment in teams like CMS and ADACS looks set to become more and more valuable to research teams.

Grand Challenge simulation:

NCI has decided to run Grand Challenge simulations to showcase the capabilities of its new supercomputer, Gadi. CLEX, in collaboration with the Bureau of Meteorology, is preparing an Australia-wide 400 m resolution atmospheric simulation. The simulation will run a 2.2 km domain with a 400 m domain nested inside. The outer, lower-resolution domain is forced by BARRA, the Bureau's reanalysis. We will simulate 27 and 28 March 2017, when Cyclone Debbie made landfall.

Scott Wales from CMS, with Charmaine Franklin and Chun-Hsu Su from the Bureau of Meteorology, has prepared the following for the simulation:

  • Estimation of the resources needed for the simulation
  • Preparation of the initial conditions for both the 2.2 km and 400 m domains
  • Production of the spin-up with the 2.2 km domain only.

Preparing the initial conditions for the 400 m domain was tricky, as interpolation operations do not parallelise well. Scott had to split the domain into sub-domains and regrid each one, making sure the borders were properly handled to avoid artefacts when stitching the sub-domains back together. Even with this technique, this phase of the preparation required the “megamem” nodes on Raijin, which have 3 TB of RAM per node.
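The idea of regridding by sub-domain can be sketched in a few lines. The snippet below is a simplified illustration only (the actual workflow uses the Met Office ancillary and regridding tools on real model grids): it splits a 2-D field into latitude bands, refines each band by bilinear interpolation, and carries a one-row “halo” of extra source data so that the stitched result is identical to a single global pass. All function names and sizes here are illustrative.

```python
import numpy as np

def refine_rows(block, factor):
    """Bilinear refinement along axis 0 by an integer `factor`,
    using a cell-centre convention with clamped edges."""
    n = block.shape[0]
    pos = (np.arange(n * factor) + 0.5) / factor - 0.5
    base = np.floor(pos)
    w = (pos - base)[:, None]                      # interpolation weight
    lo = np.clip(base.astype(int), 0, n - 1)       # lower neighbour row
    hi = np.clip(base.astype(int) + 1, 0, n - 1)   # upper neighbour row
    return (1 - w) * block[lo] + w * block[hi]

def refine_tiled(field, factor, tiles=4, halo=1):
    """Refine `field` band by band. Each band carries `halo` extra
    source rows so the interpolation stencil sees the same
    neighbours it would in a global pass; the refined halo rows
    are trimmed before the bands are stitched back together."""
    ny = field.shape[0]
    edges = np.linspace(0, ny, tiles + 1, dtype=int)
    out = []
    for lo_r, hi_r in zip(edges[:-1], edges[1:]):
        s0, s1 = max(lo_r - halo, 0), min(hi_r + halo, ny)
        band = refine_rows(field[s0:s1], factor)
        t0 = (lo_r - s0) * factor                  # trim the refined halo
        out.append(band[t0:t0 + (hi_r - lo_r) * factor])
    return np.vstack(out)
```

Because the halo supplies the stencil's missing neighbours, each band can be processed independently (and in parallel), yet `refine_tiled(field, f)` reproduces `refine_rows(field, f)` exactly.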

Scaling tests with the whole domain are still to be performed, but extrapolation from smaller domains at the same resolution indicates this simulation will use a significant portion of Gadi's capacity. We also estimate we will need about 100 TB of storage per simulated day.
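A figure like 100 TB per day can be sanity-checked with back-of-envelope arithmetic. Every number below is an illustrative assumption, not the actual model configuration: a hypothetical grid covering Australia at 400 m, a guessed number of levels, fields, and output frequency.

```python
# All values are illustrative assumptions, not the real run configuration.
nx, ny = 10_000, 13_000   # assumed horizontal grid at 400 m spacing
levels = 70               # assumed vertical levels
variables = 10            # assumed 3-D fields written to disk
bytes_per_value = 4       # 32-bit floats
writes_per_day = 24 * 6   # assumed 10-minute output interval

bytes_per_day = nx * ny * levels * variables * bytes_per_value * writes_per_day
print(f"~{bytes_per_day / 1e12:.0f} TB per simulated day")  # prints ~52 TB
```

Even with these rough guesses the estimate lands at the same order of magnitude as the quoted figure, which is the point of such a check.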

Extreme indices for CMIP6:

It is important for CLEX researchers to be able to look at climate extremes in the CMIP6 dataset. ClimPACT v2 was developed to provide an easy way to calculate the ET-SCI indices. Given the very short window between the release of CMIP6 data and the paper-submission deadline for consideration in the IPCC AR6 report, CMS was asked to calculate the indices for all currently available CMIP6 data.
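To give a flavour of what such an index looks like, here is a minimal TXx-style calculation (the hottest daily-maximum temperature in each year, per grid cell). This is a deliberately simplified sketch with an assumed 365-day calendar; ClimPACT v2 itself handles real calendars, missing data, and the full ET-SCI suite.

```python
import numpy as np

def annual_txx(tasmax, days_per_year=365):
    """TXx-style index: the hottest daily-maximum temperature in each
    year, per grid cell. `tasmax` has shape (time, lat, lon); a fixed
    365-day year is assumed for simplicity. Trailing partial years
    are dropped."""
    years = tasmax.shape[0] // days_per_year
    trimmed = tasmax[: years * days_per_year]
    yearly = trimmed.reshape(years, days_per_year, *tasmax.shape[1:])
    return yearly.max(axis=1)   # shape (years, lat, lon)
```

Applied to a daily `tasmax` field from a CMIP6 model, this yields one map per year of the annual temperature extreme.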

Scott Wales from CMS performed the calculations, using Clef to find the available data and ClimPACT v2 to calculate the indices. Unfortunately, this indices dataset cannot be published at present, but it is available for use by any Australian researcher.

Please contact CMS for information on where to find this dataset. We will update the dataset regularly as new CMIP6 data becomes available. There is no automated process at the moment, so please let us know if you think indices for a model or experiment are missing.

Gadi, NCI's new supercomputer:

With Gadi now becoming available, CMS has started porting models from Raijin. We are posting updates on our wiki.

It is worth noting that Holger Wolff from CMS will be based at CSIRO Aspendale to work in collaboration with the ACCESS-ESM 1.5 team at CSIRO. Holger is porting the ACCESS-ESM 1.5 code and designing a better build system for it. This should make it easier for researchers to modify the ACCESS-ESM 1.5 model components and recompile them as needed.

We would like to remind everyone that Raijin's nodes are now being shut down, with the vast majority decommissioned by 27 December. There is also planned downtime for Gadi between 27 December and 6 January. It may be possible to use the Broadwell and Skylake nodes on Raijin during that time, but this is not assured: those nodes will also have downtime during that period, as NCI needs to transfer them to Gadi. It is safest to assume there will be no access to any HPC node between 27 December and 6 January.