Identified - Our storage system is currently very full, which is forcing the backend object storage to undertake urgent administration by way of defragmentation. This increased load is having a detrimental effect on I/O performance, especially read I/O, and is likely to continue for some days. We are urgently looking at ways to mitigate this. Researchers can help alleviate this in the short term by cleaning up any unwanted files and data as soon as possible (see the sketch below for finding cleanup candidates).
Nov 21, 2025 - 12:29 NZDT
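For researchers looking for cleanup candidates, a minimal sketch along these lines can list the largest files under a directory tree. This is illustrative only: the default path and the 20-file limit are assumptions, not NeSI-prescribed values.

# find_large_files.py - list the largest files under a directory tree.
# A minimal sketch; the default path and limit are illustrative assumptions.
import os
import sys

def largest_files(root, limit=20):
    """Walk `root` and return the `limit` largest files as (size, path) pairs."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue  # skip files that vanish or deny access mid-walk
    return sorted(sizes, reverse=True)[:limit]

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for size, path in largest_files(root):
        print(f"{size / 1e9:8.2f} GB  {path}")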
This page shares the system status of REANNZ's advanced computing platform and storage services.
To view the status of REANNZ's network services, visit: https://reannz.status.io
Apply for Access: Operational
Data Transfer: Degraded Performance
Submit new HPC Jobs: Operational
Jobs running on HPC: Operational
NeSI OnDemand: Operational (99.94% uptime over the past 90 days)
HPC Storage: Degraded Performance
User Support System: Operational
Flexible High Performance Cloud: Operational
Long-term Storage (Freezer): Operational (99.98% uptime over the past 90 days)
Flexible High Performance Cloud Services: Operational (99.99% uptime over the past 90 days)
Virtual Compute Service: Operational
Bare Metal Compute Service: Operational
FlexiHPC Dashboard (web interface): Operational (100.0% uptime over the past 90 days)
FlexiHPC CLI interface: Operational (100.0% uptime over the past 90 days)
Public API of the FlexiHPC Service: Operational (99.99% uptime over the past 90 days)
In order to get our new WEKA filesystems up to their best possible performance, we need to dedicate some cores on each compute node for exclusive use by WEKA. On Milan nodes, Slurm jobs will only be able to use 126 cores per node rather than 128, and on Genoa nodes 166 rather than 168 (see the sketch below for requesting whole nodes under the new limits). This change has already been applied to all of the Milan nodes and the majority of the Genoa nodes; we expect to do the last of the Genoa nodes on September 8th. Posted on
Sep 04, 2025 - 12:10 NZST
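For whole-node jobs, this change means requesting at most the new per-node core counts. A minimal sketch of composing such a submission, assuming a standard Slurm setup; the node-type keys are labels taken from the notice above, not necessarily your site's partition names.

# Maximum cores per node available to Slurm jobs after the WEKA core
# dedication described above. The counts come from the notice itself.
USABLE_CORES = {
    "milan": 126,  # 128 physical cores, 2 now dedicated to WEKA
    "genoa": 166,  # 168 physical cores, 2 now dedicated to WEKA
}

def whole_node_request(node_type, nodes=1, script="job.sh"):
    """Compose an sbatch command requesting every Slurm-usable core.

    --nodes and --ntasks-per-node are standard Slurm flags; node_type keys
    are labels from the notice, not necessarily partition names.
    """
    cores = USABLE_CORES[node_type]
    return f"sbatch --nodes={nodes} --ntasks-per-node={cores} {script}"

print(whole_node_request("milan"))  # sbatch --nodes=1 --ntasks-per-node=126 job.sh
print(whole_node_request("genoa"))  # sbatch --nodes=1 --ntasks-per-node=166 job.sh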
As announced earlier, we've re-activated an automatic cleaning process for temporary data stored on our scratch filesystem (/nesi/nobackup). Project members with files scheduled for deletion were notified by email, subject to their my.nesi.org.nz notification preferences; to confirm or adjust which notifications you receive, follow these instructions: https://docs.nesi.org.nz/Getting_Started/my-nesi-org-nz/Managing_notification_preferences/
Files will be deleted on Wednesday 03 December (two weeks from yesterday's email notification). For more details on how the auto-deletion process works, visit: https://docs.nesi.org.nz/Storage/File_Systems_and_Quotas/Automatic_cleaning_of_nobackup_file_system/
If you have files identified as candidates for deletion that you need to keep beyond the scheduled expiry date, you can move them to your project directory or to Freezer, our long-term storage service. However, if you plan to move more than 2 TB of data (see the sketch below for checking this), or if you need to increase your project directory quota, email support@nesi.org.nz so that we can discuss your storage needs and assist you. Posted on
Nov 20, 2025 - 08:44 NZDT
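If you are unsure whether a planned move exceeds the 2 TB threshold mentioned above, a minimal sketch like the following can total the apparent size of a directory tree first. The example path is illustrative, not a real project path, and the threshold is taken as decimal terabytes.

import os

TWO_TB = 2 * 10**12  # the notice's 2 TB threshold, assumed decimal terabytes

def tree_size(root):
    """Sum the apparent size in bytes of every file under `root`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip unreadable or vanished files
    return total

size = tree_size("/nesi/nobackup/your_project")  # illustrative path
print(f"{size / 10**12:.2f} TB")
if size > TWO_TB:
    print("Over 2 TB: email support@nesi.org.nz before moving this data.")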
Update -
This maintenance has now been postponed to Dec 2nd at 1800hrs.
Nov 18, 2025 - 08:50 NZDT
Scheduled -
The border switches will have an upgrade applied overnight on Nov 18th, starting from 6pm. SSH and other external connections to the cluster and OnDemand may be interrupted during this maintenance. Slurm jobs will be unaffected.
Nov 07, 2025 - 11:13 NZDT
Resolved -
This incident has been resolved.
Nov 18, 10:48 NZDT
Monitoring -
We have performed a rolling restart of NFS services, which appears to have resolved the user/group resolution issues in OnDemand; the correct mappings are now being displayed.
We are now monitoring for any further impact.
If you notice any other problems with OnDemand then please let us know.
Nov 17, 17:06 NZDT
Identified -
We've identified the underlying cause of the issue and revised down our understanding of the impact. OnDemand apps such as Jupyter are working and able to access users' files consistently with user and group permissions on the main cluster, despite the ownership displaying as nobody:nobody. The "NeSI HPC Shell Access" feature is still impacted due to additional ownership checks in SSH; users who need help connecting via SSH can reach out to support.
We are now planning a rolling restart of NFS services later this afternoon which should resolve the user/group resolution issues in OnDemand and result in the correct mappings being displayed.
If you notice any other problems with OnDemand then please let us know.
Nov 17, 13:14 NZDT
Investigating -
We have encountered an issue with user and group resolution that is currently impacting OnDemand sessions. Users may see file ownership listed as "nobody:nobody" and receive permission-denied errors when trying to read files. This is also impacting the "NeSI HPC Shell Access" feature in OnDemand. Regular/native SSH access to the cluster is not impacted.
Apologies for the Monday disruption; we're investigating workarounds and will provide an update at approximately 1pm.
Nov 17, 12:28 NZDT
Completed -
The scheduled maintenance has been completed.
Nov 14, 11:50 NZDT
Scheduled -
We are updating OpenStack control components at this time. The public APIs (this impacts Dashboard and CLI clients too) may be unavailable for short periods during this window. There is no impact to existing running cloud infrastructure.
Nov 5, 09:13 NZDT
Completed -
The scheduled maintenance has been completed.
Nov 11, 10:30 NZDT
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 11, 09:30 NZDT
Scheduled -
During this window we will be performing an update to OnDemand.
Currently running sessions will be affected, as we will be making some behind-the-scenes updates.
We advise that all work is saved prior to this change; after the update, previously running sessions will no longer be active and new sessions will need to be created.
We apologize for any inconvenience caused.
Nov 10, 11:08 NZDT