Integrative High Performance Computing

With the goal of impacting experimental and wet lab researchers who are capturing ever-increasing amounts of data, the MASSIVE facility has intentionally taken an approach that is complementary to peak HPC. This integrative approach places an emphasis on:

  • usability by new HPC user communities over capacity;

  • hardware suited to data processing over simulation;

  • underpinning high-performing wet and experimental laboratories with growing data processing needs;

  • workflows that increase return on investment in instruments;

  • porosity and flexibility to serve specific requirements in the life sciences and other areas new to HPC; and

  • support for new data science techniques, including machine learning.

MASSIVE delivers world-class data processing, analysis and visualisation capability through a focus on: Technology and Capacity; Community; and Accessibility.

Technology and Capacity

MASSIVE provides access to high performance computing hardware that is designed for data processing, analysis and visualisation. This capability is delivered via M3, which was commissioned in 2016 and entered full production in 2017. It is a goal of the project to upgrade M3 twice per year: one large-scale upgrade and one smaller, specialised upgrade.

As of July 2018, M3 comprises 4,112 CPU cores, 168 GPU coprocessors across a range of products suited to parallel processing, visualisation and machine learning, and a 3 PB fast parallel Lustre file system. These GPU coprocessors include the NVIDIA K1 (for remote scientific desktops), K80, P100 and V100, as well as DGX1-V systems.
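As an illustration of how GPU resources on a cluster like M3 are typically requested, the sketch below shows a batch script for a Slurm-style scheduler. The partition name, module name and script name are hypothetical placeholders for illustration, not documented M3 configuration.

```shell
#!/bin/bash
# Hypothetical Slurm batch script requesting one GPU for a
# data-processing job. Partition and module names are illustrative
# placeholders, not documented M3 settings.
#SBATCH --job-name=gpu-example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:1          # request one GPU coprocessor
#SBATCH --time=01:00:00       # one-hour wall-time limit
#SBATCH --partition=gpu       # hypothetical GPU partition name

module load cuda              # load the CUDA toolkit (site-specific)
srun python process_volume.py # run the (hypothetical) analysis script
```

A researcher would submit this with `sbatch`, and the scheduler places the job on a node with a free GPU.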


Community

MASSIVE underpins a wide variety of research fields, including neuroscience, molecular imaging, genomics, material science and engineering. These fields share a number of characteristics:

  • The increased availability of scientific instruments that produce large volumes of multidimensional or large-cohort data.

  • The increased opportunity offered by the availability and volume of data that requires significant processing, analysis and visualisation to gain insight.

  • The increased opportunity offered by data and compute intensive processing techniques, including machine learning.

MASSIVE is partnered with the ARC Centre of Excellence in Integrative Brain Function, and the ARC Centre of Excellence in Advanced Molecular Imaging.


Accessibility

With the goal of impacting wet lab and experimental scientists, MASSIVE has a strong focus on accessibility. To underpin this new generation of HPC users, MASSIVE has implemented a number of initiatives, including:

  • An instrument integration program, to provide data capture, processing and visualisation from the point of capture, and in specific cases ‘in-experiment’.

  • A curated remote desktop environment used by hundreds of researchers. The Strudel suite of software, developed by MASSIVE to make interactive HPC easy, is used at NCI, Pawsey, the Jülich Supercomputing Centre and many other HPC facilities.

  • Resource allocation management software, developed by the Monash eResearch Centre, that allows researchers to self-manage their projects, resources and accounts more easily.