From 3b27123ecce83caff7ac7b5cfa4fe9b4a9ea14d4 Mon Sep 17 00:00:00 2001
From: YemBot
Date: Fri, 17 Jan 2025 16:49:35 -0800
Subject: [PATCH 1/3] Update README.md

---
 README.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/README.md b/README.md
index 1f0eceb..190ae23 100644
--- a/README.md
+++ b/README.md
@@ -6,8 +6,6 @@ and the Rubin observatory. The S3DF infrastructure is optimized for data
 analytics and is characterized by large, massive throughput, high concurrency
 storage systems.
 
-**January 6th 8:40am PST: All S3DF services are back UP. Users with k8s workloads should check for any lingering issues (stale file handles) and report to s3df-help@slac.stanford.edu. Thank you for your patience.**
-
 ## Quick Reference
 
 | Access | Address |

From 4cbec5fba63e9c23e1bcab4694a312f6af7718ad Mon Sep 17 00:00:00 2001
From: YemBot
Date: Fri, 17 Jan 2025 16:51:17 -0800
Subject: [PATCH 2/3] Update changelog.md

---
 changelog.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/changelog.md b/changelog.md
index e357531..613350e 100644
--- a/changelog.md
+++ b/changelog.md
@@ -43,15 +43,13 @@ If critical issues are not responded to within 2 hours of reporting the issue pl
 
 ### Current
 
-**January 6th 8:40am PST: All S3DF services are back UP. Users with k8s workloads should check for any lingering issues (stale file handles) and report to s3df-help@slac.stanford.edu. Thank you for your patience.**
-
 ### Upcoming
 
 ### Past
 
 |When |Duration | What |
 | --- | --- | --- |
-|Dec 26 2024| 1 days (unplanned)|One of our core networking switches in the data center failed and had to be replaced. The fall-out from this impacted other systems and services on S3DF. Staff worked through the night on stabilization of the network devices and connections as well as recovery of the storage subsystem.|
+|Dec 26 2024| 12 days (unplanned)|One of our core networking switches in the data center failed and had to be replaced. The fall-out from this impacted other systems and services on S3DF. Staff worked through the night on stabilization of the network devices and connections as well as recovery of the storage subsystem.|
 |Dec 10 2024|(unplanned)|StaaS GPFS disk array outage (partial /gpfs/slac/staas/fs1 unavailability)|
 | Dec 3 2024 | 1 hr (planned) | Mandatory upgrade of the slurm controller, the database, and the client components on all batch nodes, kubernetes nodes, and interactive nodes.
 |Nov 18 2024|8 days (unplanned)|StaaS GPFS disk array outage (partial /gpfs/slac/staas/fs1 unavailability)|

From 2200b214238c60476fcaaf84aa6c79dacb06890d Mon Sep 17 00:00:00 2001
From: YemBot
Date: Fri, 17 Jan 2025 17:04:35 -0800
Subject: [PATCH 3/3] Update changelog.md

---
 changelog.md | 41 ++---------------------------------------
 1 file changed, 2 insertions(+), 39 deletions(-)

diff --git a/changelog.md b/changelog.md
index 613350e..0eae947 100644
--- a/changelog.md
+++ b/changelog.md
@@ -1,50 +1,13 @@
 # Status & Outages
 
-## Support during Winter Shutdown
-
-S3DF will remain operational over the Winter shutdown (Dec 21st 2024 to Jan 5th 2025). Staff will be taking time off as per SLAC guidelines. S3DF resources will continue to be managed remotely if there are interruptions to operations. Response times for issues will vary, depending on the criticality of the issue as detailed below.
-
-**Contacting S3DF staff for issues:**
-Users should email s3df-help@slac.stanford.edu for ALL issues (critical and non-critical) providing full details of the problem (including what resources were being used, the impact and other information that may be useful in resolving the issue).
-We will update the #comp-sdf Slack channel for critical issues as they are being worked on with status updates.
-[This S3DF status web-page](https://s3df.slac.stanford.edu/#/changelog) will also have any updates on current issues.
-If critical issues are not responded to within 2 hours of reporting the issue please contact your [Facility Czar](https://s3df.slac.stanford.edu/#/contact-us) for escalation.
-
-**Critical issues** will be responded to as we become aware of them, except for the period of Dec 24-25 and Jan 31-1, which will be handled as soon as possible depending on staff availability.
-* Critical issues are defined as full (a system-wide) outages that impact:
-  * Access to S3DF resources including
-    * All SSH logins
-    * All IANA interactive resources
-    * B50 compute resources(*)
-    * Bullet Cluster
-  * Access to all of the S3DF storage
-    * Home directories
-    * Group, Data and Scratch filesystems
-    * B50 Lustre, GPFS and NFS storage(*)
-  * Batch system access to S3DF Compute resources
-  * S3DF Kubernetes vClusters
-  * VMware clusters
-    * S3DF virtual machines
-    * B50 virtual machines(*)
-* Critical issues for other SCS-managed systems and services for Experimental system support will be managed in conjunction with the experiment as appropriate. This includes
-  * LCLS workflows
-  * Rubin USDF resources
-  * CryoEM workflows
-  * Fermi workflows
-(*) B50 resources are also dependent on SLAC-IT resources being available.
-
-**Non-critical issues** will be responded to in the order they were received in the ticketing system when normal operations resume after the Winter Shutdown. Non-critical issues include:
- * Individual node-outages in the compute or interactive pool
- * Variable or unexpected performance issues for compute, storage or networking resources.
- * Batch job errors (that do not impact overall batch system scheduling)
- * Tape restores and data transfer issues
-
 ## Outages
 
 ### Current
 
+
 ### Upcoming
 
+
 ### Past
 
 |When |Duration | What |