
Commit

Merge pull request #108 from slaclab/main
Outage updates
YemBot authored Jan 18, 2025
2 parents 5997aa8 + 2200b21 commit fdb83a1
Showing 2 changed files with 2 additions and 43 deletions.
2 changes: 0 additions & 2 deletions README.md
@@ -6,8 +6,6 @@ and the Rubin observatory. The S3DF infrastructure is optimized for
data analytics and is characterized by large, high-throughput,
high-concurrency storage systems.

**January 6th 8:40am PST: All S3DF services are back UP. Users with k8s workloads should check for any lingering issues (stale file handles) and report to [email protected]. Thank you for your patience.**

## Quick Reference

| Access | Address |
43 changes: 2 additions & 41 deletions changelog.md
@@ -1,57 +1,18 @@
# Status & Outages

## Support during Winter Shutdown

S3DF will remain operational over the Winter shutdown (Dec 21st 2024 to Jan 5th 2025). Staff will be taking time off as per SLAC guidelines. S3DF resources will continue to be managed remotely if there are interruptions to operations. Response times for issues will vary, depending on the criticality of the issue as detailed below.

**Contacting S3DF staff for issues:**
Users should email [email protected] for ALL issues (critical and non-critical), providing full details of the problem (including what resources were being used, the impact, and any other information that may be useful in resolving the issue).
We will post status updates to the #comp-sdf Slack channel as critical issues are being worked on.
[This S3DF status web-page](https://s3df.slac.stanford.edu/#/changelog) will also have any updates on current issues.
If a critical issue has not been responded to within 2 hours of being reported, please contact your [Facility Czar](https://s3df.slac.stanford.edu/#/contact-us) for escalation.

**Critical issues** will be responded to as we become aware of them, except during Dec 24-25 and Dec 31-Jan 1, when they will be handled as soon as possible depending on staff availability.
* Critical issues are defined as full (system-wide) outages that impact:
* Access to S3DF resources including
* All SSH logins
* All IANA interactive resources
* B50 compute resources(*)
* Bullet Cluster
* Access to all of the S3DF storage
* Home directories
* Group, Data and Scratch filesystems
* B50 Lustre, GPFS and NFS storage(*)
* Batch system access to S3DF Compute resources
* S3DF Kubernetes vClusters
* VMware clusters
* S3DF virtual machines
* B50 virtual machines(*)
* Critical issues for other SCS-managed systems and services that support experiments will be managed in conjunction with the experiment as appropriate. This includes:
* LCLS workflows
* Rubin USDF resources
* CryoEM workflows
* Fermi workflows
(*) B50 resources are also dependent on SLAC-IT resources being available.

**Non-critical issues** will be responded to in the order they were received in the ticketing system when normal operations resume after the Winter Shutdown. Non-critical issues include:
* Individual node-outages in the compute or interactive pool
* Variable or unexpected performance issues for compute, storage or networking resources.
* Batch job errors (that do not impact overall batch system scheduling)
* Tape restores and data transfer issues

## Outages

### Current

**January 6th 8:40am PST: All S3DF services are back UP. Users with k8s workloads should check for any lingering issues (stale file handles) and report to [email protected]. Thank you for your patience.**
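
Below is a minimal sketch (not part of the official S3DF documentation) of one way a user might scan a directory tree for lingering stale file handles after the outage. The `/sdf/data` default root and the `find_stale_handles` helper name are assumptions for illustration; adjust them to your own mount points before use.

```python
import errno
import os

def find_stale_handles(root="/sdf/data"):
    """Walk a directory tree and collect paths whose stat() fails with
    ESTALE ("Stale file handle"), which can linger after a storage outage.
    The default root is an assumption; point it at your own mount."""
    stale = []
    for dirpath, dirnames, filenames in os.walk(root, onerror=lambda err: None):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                os.stat(path)
            except OSError as err:
                if err.errno == errno.ESTALE:
                    stale.append(path)
    return stale

if __name__ == "__main__":
    for path in find_stale_handles():
        print(f"stale file handle: {path}")
```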

### Upcoming


### Past

|When |Duration | What |
| --- | --- | --- |
|Dec 26 2024| 12 days (unplanned)|One of our core networking switches in the data center failed and had to be replaced. The fall-out from this impacted other systems and services on S3DF. Staff worked through the night on stabilization of the network devices and connections as well as recovery of the storage subsystem.|
|Dec 10 2024|(unplanned)|StaaS GPFS disk array outage (partial /gpfs/slac/staas/fs1 unavailability)|
| Dec 3 2024 | 1 hr (planned) | Mandatory upgrade of the slurm controller, the database, and the client components on all batch nodes, kubernetes nodes, and interactive nodes.|
|Nov 18 2024|8 days (unplanned)|StaaS GPFS disk array outage (partial /gpfs/slac/staas/fs1 unavailability)|
