Monitoring - A software bug is causing frequent restarts of the underlying infrastructure behind Freezer. An upgrade to resolve this is planned for Monday morning.
Nov 28, 2025 - 13:27 NZDT
Investigating - Freezer was down briefly on Friday 28 November 2025, from 12:40 PM until 12:46 PM. Please check whether your transfers were interrupted. For more information see https://docs.nesi.org.nz/Storage/Long_Term_Storage/Freezer_Guide/#synchronise-data
Nov 28, 2025 - 13:26 NZDT
Update - There was a period of I/O stalls this morning while we dealt with some storage hardware failures. That issue is now resolved; however, space reclamation continues in the backend and is having a detrimental impact on read performance. We are working with WEKA support on mitigation options. Apologies for the performance impact - if your jobs are affected and need a runtime limit extension, please reach out to support.
Nov 24, 2025 - 12:44 NZDT
Identified - Our storage system is currently very full, which is forcing the backend object storage to undertake some urgent administration in the form of defragmentation. This increased load is having a detrimental effect on I/O performance, especially read I/O, and is likely to continue for some days. We are urgently looking at ways to mitigate this. In the short term, researchers can help alleviate this by cleaning up any unwanted files and data as soon as possible.
Nov 21, 2025 - 12:29 NZDT
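To decide what to clean up first, a size summary sorted largest-first is usually enough. The directory below is a temporary stand-in for a real project or nobackup folder:

```shell
target=$(mktemp -d)                  # stand-in for e.g. a project folder
mkdir -p "$target/old_runs" "$target/keep"
head -c 1048576 /dev/zero > "$target/old_runs/big.dat"   # 1 MiB of scratch data
echo "small" > "$target/keep/notes.txt"
# du -s: one total per item; sort -rh: largest first, human-readable sizes
biggest=$(du -sh "$target"/* | sort -rh)
echo "$biggest"
```

The first lines of the output point at the directories where cleanup will free the most space.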

About This Site

This page shares the system status of REANNZ's advanced computing platform and storage services.
To view the status of REANNZ's network services, visit: https://reannz.status.io

Apply for Access: Operational
Data Transfer: Degraded Performance
Submit new HPC Jobs: Operational
Jobs running on HPC: Operational
NeSI OnDemand: Operational (99.98 % uptime over the last 90 days)
HPC Storage: Degraded Performance
User Support System: Operational
Flexible High Performance Cloud: Operational
Long-term Storage (Freezer): Operational (100.0 % uptime over the last 90 days)
Flexible High Performance Cloud Services: Operational (99.99 % uptime over the last 90 days)
Virtual Compute Service: Operational
Bare Metal Compute Service: Operational
FlexiHPC Dashboard (web interface): Operational (100.0 % uptime over the last 90 days)
FlexiHPC CLI interface: Operational (100.0 % uptime over the last 90 days)
Public API of the FlexiHPC Service: Operational (99.99 % uptime over the last 90 days)

Scheduled Maintenance

WEKA filesystem and compute core changes - Announcement Sep 8, 2025 09:00-17:00 NZST

In order to get our new WEKA filesystems up to their best possible performance, we need to dedicate some cores on each compute node for exclusive use by WEKA. On Milan nodes, Slurm jobs will be able to use only 126 cores per node rather than 128, and on Genoa nodes 166 rather than 168.
This change has already been applied to all of the Milan nodes and the majority of the Genoa nodes. We expect to do the last of the Genoa nodes on September 8th.

Posted on Sep 04, 2025 - 12:10 NZST
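For whole-node jobs this means adjusting core requests. A minimal sketch of a Slurm batch script under the new limits (the job name and executable are placeholders, not NeSI-specific settings):

```shell
#!/bin/bash
#SBATCH --job-name=whole-node-example   # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=126             # Milan: 126 usable cores (was 128); on Genoa request 166 (was 168)
srun ./my_program                       # placeholder executable
```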

Automatic cleaning launched for scratch filesystem (/nesi/nobackup) - Announcement Nov 20, 2025 09:00 - Nov 21, 2025 09:00 NZDT

As announced earlier, we’ve re-activated an automatic cleaning process for temporary data stored on our scratch filesystem (/nesi/nobackup). Project members with files scheduled for deletion were notified by email, subject to their my.nesi.org.nz notification preferences. To confirm or adjust which notifications you receive, follow these instructions: https://docs.nesi.org.nz/Getting_Started/my-nesi-org-nz/Managing_notification_preferences/
Files will be deleted on Wednesday 03 December (two weeks from yesterday's email notification). For more details on how the auto-deletion process works, visit:
https://docs.nesi.org.nz/Storage/File_Systems_and_Quotas/Automatic_cleaning_of_nobackup_file_system/
If you have files identified as candidates for deletion that you need to keep beyond the scheduled expiry date, you can move them to your project directory or to Freezer, our long-term storage service. However, if you plan to move more than 2 TB of data or if you need to increase your project directory quota, email support@nesi.org.nz so that we can discuss your storage needs and assist you.

Posted on Nov 20, 2025 - 08:44 NZDT

Cumulus switch upgrades Dec 2, 2025 18:00 - Dec 3, 2025 06:00 NZDT

Update - This maintenance has now been postponed to Dec 2nd at 1800hrs
Nov 18, 2025 - 08:50 NZDT
Scheduled - The border switches will have an upgrade applied overnight on Nov 18th from 6pm. Any ssh and external connections to the cluster and OnDemand may get broken during this maintenance. Slurm jobs will be unaffected.
Nov 07, 2025 - 11:13 NZDT
Dec 2, 2025

No incidents reported today.

Dec 1, 2025
Completed - The scheduled maintenance has been completed.
Dec 1, 13:31 NZDT
Update - Scheduled maintenance is still in progress. We will provide updates as necessary.
Dec 1, 13:30 NZDT
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 1, 10:00 NZDT
Scheduled - We will be undergoing scheduled maintenance during this time.
Nov 26, 12:32 NZDT
Nov 30, 2025

No incidents reported.

Nov 29, 2025

No incidents reported.

Nov 28, 2025
Resolved - Freezer was down briefly from 12:40 PM until 12:46 PM. Please check whether your transfers were interrupted. For more information see https://docs.nesi.org.nz/Storage/Long_Term_Storage/Freezer_Guide/#synchronise-data
Nov 28, 12:30 NZDT
Nov 27, 2025

No incidents reported.

Nov 26, 2025

No incidents reported.

Nov 25, 2025

No incidents reported.

Nov 24, 2025

Unresolved incident: Slow I/O Performance.

Nov 23, 2025

No incidents reported.

Nov 22, 2025

No incidents reported.

Nov 21, 2025
Nov 20, 2025
Completed - The scheduled maintenance has been completed.
Nov 20, 10:30 NZDT
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 20, 09:30 NZDT
Scheduled - We will be performing some updates to our test file system that will cause short stalls on the production storage mounts in OnDemand.

This should last only a short time and resolve on its own.

If you have any issues, please contact support.

Nov 20, 09:21 NZDT
Nov 19, 2025

No incidents reported.

Nov 18, 2025
Resolved - This incident has been resolved.
Nov 18, 10:48 NZDT
Monitoring - We have done a rolling restart of NFS services, which looks to have resolved the user/group resolution issues in OnDemand; the correct mappings are now being displayed.

We are now monitoring for any further impact.

If you notice any other problems with OnDemand then please let us know.

Nov 17, 17:06 NZDT
Identified - We've identified the underlying cause of the issue and revised down our understanding of the impact. OnDemand apps such as Jupyter are working and able to access users' files with the correct user and group permissions on the main cluster, despite ownership displaying as nobody:nobody. The "NeSI HPC Shell Access" feature is still impacted due to additional ownership checks in SSH; users who need help connecting via SSH can reach out to support.

We are now planning a rolling restart of NFS services later this afternoon which should resolve the user/group resolution issues in OnDemand and result in the correct mappings being displayed.

If you notice any other problems with OnDemand then please let us know.

Nov 17, 13:14 NZDT
Investigating - We have encountered an issue with user and group resolution that is currently impacting OnDemand sessions. Users may see file ownership listed as "nobody:nobody" and get "permission denied" errors when trying to read files. This is also impacting the "NeSI HPC Shell Access" feature in OnDemand. Regular/native SSH access to the cluster is not impacted.

Apologies for the Monday disruption; we're investigating workarounds and will provide an update at approximately 1 PM.

Nov 17, 12:28 NZDT