
Email Announcement Archive

[Users] NERSC Weekly Email, Week of June 20, 2022

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2022-06-20 06:41:13

# NERSC Weekly Email, Week of June 20, 2022<a name="top"></a> #

## Contents ##

- [Summary of Upcoming Events and Key Dates](#dates)

## [NERSC Status](#section1) ##

- [NERSC Operations Continue as Berkeley Lab Reopens, with Minimal Changes](#curtailment)

## [This Week's Events and Deadlines](#section2) ##

- [Juneteenth Holiday Today; No Consulting or Account Support](#juneteenth)
- [Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts Wednesday!](#spinup)
- [Register for the Smoky Mountains Data Challenge by Thursday!](#smdatachallenge)

## [Perlmutter](#section3) ##

- [Perlmutter Machine Status](#perlmutter)
- [Integration of Perlmutter Phase 2 Will Minimize System Downtime](#pmintegration)
- [(NEW) "Preempt" Queue Available on Perlmutter Nodes](#pmpreempt)

## [Updates at NERSC](#section4) ##

- [(NEW) NERSC Help Portal Upgrade on June 29; Intermittent Unavailability](#helpupgrade)
- [Announcing the NUG SIG for WRF Users at NERSC](#nugwrf)
- [Many Counters Used by Performance Tools Temporarily Disabled; NERSC Awaiting Vendor Patch Before Re-enabling](#vtune)
- [Need Help? Check out NERSC Documentation, Send in a Ticket, or Consult Your Peers!](#gettinghelp)
- [Federated ID Available for Some National Labs](#fedid)
- [Please Take the Machine Learning at NERSC Survey!](#mlsurvey)

## [Calls for Participation](#section5) ##

- [Nominations for George Michael Memorial HPC Fellowship Due Next Thursday](#gmichael)
- [Call for Participation: Third International Symposium on Checkpointing for Supercomputing](#supercheck)

## [Upcoming Training Events](#section6) ##

- [(NEW) "Supercomputing Spotlights" Webinar Featuring Prof. Satoshi Matsuoka on June 27](#scspotlights)
- [Training on Profiling Deep Learning Applications with NVIDIA Nsight, June 30](#dlnsight)
- [Tutorial on Coordinating Dynamic Ensembles of Computations with libEnsemble, July 7](#libensemble)

## [NERSC News](#section7) ##

- [Come Work for NERSC!](#careers)
- [Upcoming Outages](#outages)
- [About this Email](#about)

## Summary of Upcoming Events and Key Dates <a name="dates"/></a> ##

1. **June 20, 2022**: [Juneteenth Holiday](#juneteenth) (No Consulting or Account Support)
2. **June 22 or August 10, 2022**: [SpinUp Workshop](#spinup)
3. **June 23, 2022**: [Smoky Mountains Data Challenge Submissions Due](#smdatachallenge)
4. **June 27, 2022**: [Supercomputing Spotlights Webinar](#scspotlights)
5. **June 29, 2022**: [NERSC Help Portal Upgrade](#helpupgrade)
6. **June 30, 2022**: [George Michael Memorial HPC Fellowship Nominations Due](#gmichael)
7. **June 30, 2022**: [Profiling DL Applications with NVIDIA Nsight Training](#dlnsight)
8. **July 4, 2022**: Independence Day Holiday (No Consulting or Account Support)
9. **July 6, 2022**: [IDEAS-ECP Monthly Webinar](#ecpwebinar2)
10. **July 7, 2022**: [Python libEnsemble Tutorial](#libensemble)
11. **July 20 & August 17, 2022**: Cori Monthly Maintenance
12. **August 26, 2022**: [Submissions due for SuperCheck-SC22](#supercheck)

All times are **Pacific Time zone**.

- **Upcoming Planned Outage Dates** (see [Outages section](#outages) for more details)
  - **Wednesday**: HPSS Regent (Backup)
  - **Thursday**: Spin
- **Other Significant Dates**
  - **August 3-4, 2022**: OpenACC and Hackathons 2022 Summit
  - **October 5 & November 30, 2022**: SpinUp Workshops
  - **September 21 & October 19, 2022**: Cori Monthly Maintenance Window
  - **September 5, 2022**: Labor Day Holiday (No Consulting or Account Support)
  - **November 14, 2022**: [SuperCheck-SC22 Workshop](https://supercheck.lbl.gov)
  - **November 24-25, 2022**: Thanksgiving Holiday (No Consulting or Account Support)
  - **December 23, 2022-January 2, 2023**: Winter Shutdown (Limited Consulting and Account Support)

([back to top](#top))

---

## NERSC Status <a name="section1"/></a> ##

### NERSC Operations Continue as Berkeley Lab Reopens, with Minimal Changes <a name="curtailment"/></a>

Berkeley Lab, where NERSC is located, is beginning to welcome employees back on-site following a two-year absence. NERSC remains in operation, with the majority of NERSC staff continuing to work remotely and staff essential to operations on-site. We do not expect any disruptions to our operations in the next few months as the site reopens.

You can continue to expect regular online consulting and account support, as well as schedulable online appointments. Trainings continue to be held online. Regular maintenance on the systems continues to be performed while minimizing on-site staff presence, which could result in longer downtimes than would occur under normal circumstances.
Because onsite staffing remains minimal, we request that you continue to refrain from calling NERSC Operations except to report urgent system issues.

For **current NERSC systems status**, please see the online [MOTD](https://www.nersc.gov/live-status/motd/) and [current known issues](https://docs.nersc.gov/current/) webpages.

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### Juneteenth Holiday Today; No Consulting or Account Support <a name="juneteenth"/></a>

Consulting and account support will be unavailable today, June 20, due to the new Berkeley Lab-observed Juneteenth holiday. Regular consulting and account support will resume tomorrow.

About the holiday: Juneteenth, celebrated on the 19th of June and also referred to as Emancipation Day or Freedom Day, marks the day in 1865 that enslaved people in Texas learned they were free after federal troops arrived to enforce federal law prohibiting slavery. Juneteenth is an opportunity to reflect on the history of our country's treatment of Black Americans, and to consider how our values can help guide us into a more equitable, just, and inclusive future.

### Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts Wednesday! <a name="spinup"/></a>

Spin is a service platform at NERSC based on Docker container technology. It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. New large-memory nodes have been added to the platform, increasing its potential for memory-constrained applications.

To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin: <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p>.

Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! Applications for sessions that begin this [Wednesday, June 22](https://www.nersc.gov/users/training/spin/) are still open. SpinUp is hands-on and interactive, so space is limited. Participants will attend an instructional session and a hack-a-thon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques. Local and remote participants are welcome.

If you can't make these upcoming sessions, never fear! The next session begins August 10, and more are planned for October and November. See a video of Spin in action on the [Spin documentation](https://docs.nersc.gov/services/spin/) page.

### Register for the Smoky Mountains Data Challenge by Thursday! <a name="smdatachallenge"/></a>

You are invited to participate in Oak Ridge National Laboratory's [Smoky Mountains Data Challenge](https://smc-datachallenge.ornl.gov/), which comprises eight data analytics challenges based on data sets contributed by ORNL, industry, and academia. Expert data scientists, as well as students, are called to participate. Registration is open until Thursday at <https://smc-datachallenge.ornl.gov/registration/>. Winners will have an opportunity to publish their solution papers and participate in the Smoky Mountains Computational Sciences and Engineering Conference.

([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

The initial phase of the Perlmutter supercomputer is in the NERSC machine room, running user jobs. Some nodes of the CPU-only second phase have been added to the machine. NERSC has now added all users to Perlmutter; everyone is welcome to try out the GPU-accelerated nodes and the new CPU-only nodes. The walltime limit for jobs on Perlmutter has been raised from 6 hours to 12 hours.

This newsletter section will be updated each week with the latest Perlmutter status.
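For users trying out the nodes for the first time, a minimal GPU batch script might look like the sketch below. The `-C gpu` constraint and `--gpus-per-node` flag follow standard Slurm usage, but the project account `m0000_g` and the application `./my_app` are placeholders; consult the NERSC documentation for current flags and limits.

```shell
#!/bin/bash
#SBATCH -C gpu                # request Perlmutter GPU-accelerated nodes
#SBATCH -N 1                  # one node
#SBATCH --gpus-per-node=4     # Perlmutter GPU nodes each have 4 GPUs
#SBATCH -t 02:00:00           # walltime, up to the current 12-hour limit
#SBATCH -A m0000_g            # placeholder GPU-allocation account

srun -n 1 ./my_app            # placeholder application
```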
### Integration of Perlmutter Phase 2 Will Minimize System Downtime <a name="pmintegration"/></a>

All cabinets for the second phase of Perlmutter have arrived. The Phase 2 nodes need to be added to the system, and this integration process has been designed to minimize system downtime for users. You may see the number of nodes available continue to fluctuate, but we expect to be able to keep at least 500 Phase-1 nodes available to users throughout the process. While we will keep Perlmutter available as much as possible, it is not yet a production system, so there are no uptime guarantees.

### (NEW) "Preempt" Queue Available on Perlmutter Nodes <a name="pmpreempt"/></a>

A "preempt" queue is available on the Perlmutter system for jobs running on GPU or CPU nodes. This queue is aimed at users whose jobs are capable of running for a relatively short amount of time before terminating. For example, if your code is able to checkpoint and restart where it left off, you may be interested in the preempt queue.

The preempt queue is accessed by adding `-q preempt` in your job script. Jobs in this queue may specify a walltime up to 24 hours (vs. the current max walltime of 12 hours), but are subject to preemption after 2 hours. Additionally, the maximum number of nodes requested must not exceed 128. For an example preemptible job script, please see our documentation pages: <https://docs.nersc.gov/jobs/examples/#preemptible-jobs>.

([back to top](#top))

---

## Updates at NERSC <a name="section4"/></a> ##

### (NEW) NERSC Help Portal Upgrade on June 29; Intermittent Unavailability <a name="helpupgrade"/></a>

The NERSC Help Portal (<https://help.nersc.gov>) will undergo an upgrade on June 29, 2022, from 9 am to 2 pm (Pacific time). During this upgrade, the platform may be intermittently unavailable, and during the unavailable periods users will not be able to access or use the Help Portal. Updates will be provided on the NERSC Status mailing list as the upgrade progresses.
### Announcing the NUG SIG for WRF Users at NERSC <a name="nugwrf"/></a>

NERSC users come from many institutions and do diverse research, but they often share common challenges and best practices. The new special interest group for Weather Research and Forecasting (WRF) model users at NERSC is a forum for participants to share compilation scripts, data, and tips for using WRF at NERSC. It meets via Zoom and uses the [`#wrf_user` channel on NUG Slack](https://www.nersc.gov/users/NUG/nersc-users-slack/) for discussion. If you use WRF at NERSC, we invite you to join the WRF Users at NERSC SIG via that Slack channel or [this sign-up sheet](https://forms.gle/vts4AYvtKuzju9qn7).

### Many Counters Used by Performance Tools Temporarily Disabled; NERSC Awaiting Vendor Patch Before Re-enabling <a name="vtune"/></a>

Many of the counters used by performance tools have been temporarily disabled on NERSC resources to mitigate a security vulnerability. This could affect users of many performance tools, including VTune, CrayPat, PAPI, Nsight Systems (CPU metrics only), HPCToolkit, MAP, and more. NERSC is awaiting a patch from the vendor before re-enabling these counters, and we will let you know when this issue has been resolved.

### Need Help? Check out NERSC Documentation, Send in a Ticket, or Consult Your Peers! <a name="gettinghelp"/></a>

Are you confused about setting up your MFA token? Is there something not quite right with your job script that causes the job submission filter to reject it? Are you struggling to understand the performance of your code on the KNL nodes? There are many ways to get help with issues at NERSC:

- First, we recommend the NERSC [documentation](https://docs.nersc.gov) (<https://docs.nersc.gov/>). The answers to simpler issues, such as setting up your MFA token using Google Authenticator, can usually be found there. (The answers to more complex issues can be found in the documentation too!)
- For more complicated issues, or issues that leave you unable to work, submitting a [ticket](https://help.nersc.gov) is a good way to get help fast. NERSC's consulting team will get back to you within four business hours (8 am to 5 pm, Monday through Friday, except holidays). To submit a ticket, log in to <https://help.nersc.gov> (or, if the issue prevents you from logging in, send an email to <accounts@nersc.gov>).
- For queries that might require some back-and-forth, NERSC provides an [appointment service](https://docs.nersc.gov/getting-started/#appointments-with-nersc-user-support-staff). Sign up for an appointment on a variety of topics, including "NERSC 101", KNL Optimization, Containers at NERSC, NERSC File Systems, GPU Basics, GPUs in Python, and Checkpoint/Restart.
- The **NERSC Users Group Slack**, while not an official channel for help, is a place where NERSC users often answer each other's questions, such as whether anyone else is seeing something strange, or how to get better job throughput. You can join the NUG Slack by following [this link](https://www.nersc.gov/users/NUG/nersc-users-slack/) (login required).
- Sometimes a **colleague** can figure out the issue faster than NERSC, because they already understand your workflow. They may know what flag you need to add to your Makefile for better performance, or how to set up your job submission script just so.

### Federated ID Available for Some National Labs <a name="fedid"/></a>

NERSC's Federated Identity (FedID) infrastructure rollout last month enabled users affiliated with most Department of Energy national laboratories to link their home institution's identity to NERSC, and subsequently use that identity to log into NERSC web resources such as Iris, ServiceNow, and Jupyter. The web login page was changed slightly to accommodate this expanded capability.

### Please Take the Machine Learning at NERSC Survey!
<a name="mlsurvey"/></a>

NERSC is conducting a survey of scientific researchers who are developing and using machine-learning (ML) models for scientific problems. We want to better understand users' current and future ML ecosystems and computational needs for development, training, and deployment of models. Your feedback is critical to help us optimize Perlmutter and future systems for ML capability and performance. Please take the survey at <https://forms.gle/1CJ9x2ndXTfjsYfx9>.

([back to top](#top))

---

## Calls for Participation <a name="section5"/></a> ##

### Nominations for George Michael Memorial HPC Fellowship Due Next Thursday <a name="gmichael"/></a>

The George Michael Memorial HPC Fellowship award committee is seeking nominations for this fellowship, which honors exceptional PhD students throughout the world whose research focuses on high-performance computing applications, networking, storage, or large-scale data analysis using the most powerful computers currently available. The fellowship includes a $5,000 honorarium; recognition on the ACM, IEEE CS, and ACM SIGHPC websites; and paid travel expenses to attend SC22, where the recipient will be honored at the SC Conference Awards Ceremony.

Candidates must be enrolled in a full-time PhD program at an accredited college or university and must meet the minimum scholastic requirements at their institution. They are expected to have completed at least one year of study and to have at least one year remaining between the application deadline and their expected graduation.

Nominations are in the form of self-nominations, submitted online. For more information and to nominate, please see <https://www.computer.org/volunteering/awards/michael>. Nominations are due next Thursday, June 30!
### Call for Participation: Third International Symposium on Checkpointing for Supercomputing <a name="supercheck"/></a>

You are invited to participate in the Third International Symposium on Checkpointing for Supercomputing (SuperCheck-SC22), which will be held on November 14, 2022, in conjunction with SC22. The workshop will feature the latest work in checkpoint/restart research, tools development, and production use.

Topics of interest for the workshop include but are not limited to:

- Application-level checkpointing: APIs to define critical states, techniques to capture critical states (e.g., efficient serialization)
- Transparent/system-level checkpointing: techniques to capture the state of devices and accelerators (CPUs, GPUs, network interfaces, etc.)
- I/O and storage solutions that leverage heterogeneous storage to persist checkpoints at scale
- Checkpoint size-reduction techniques (compression, deduplication)
- Alternative techniques that avoid persisting checkpoints to storage (e.g., erasure coding)
- Synchronous vs. asynchronous checkpointing strategies
- Multi-level and hybrid strategies combining application-level, system-level, and transparent checkpointing on heterogeneous hardware
- Application-specific techniques combined with checkpointing (e.g., ABFT)
- Performance evaluation and reproducibility; studies of real failures and their recovery
- Research on optimal checkpointing intervals, C/R-aware job scheduling, and resource management
- Experience with traditional use cases of checkpointing on novel platforms
- New use cases of checkpointing beyond resilience
- Support on HPC systems (e.g., resource scheduling, system utilization, batch system integration, best practices)

The call for participation is available at <https://supercheck.lbl.gov/call-for-participation>. Submissions are due **August 26, 2022**.

([back to top](#top))

---

## Upcoming Training Events <a name="section6"/></a> ##

### (NEW) "Supercomputing Spotlights" Webinar Featuring Prof. Satoshi Matsuoka on June 27 <a name="scspotlights"/></a>

The new "Supercomputing Spotlights" webinar series, presented by the [SIAM Activity Group on Supercomputing (SIAG/SC)](https://www.siam.org/membership/activity-groups/detail/supercomputing), features short TED-style presentations that highlight the impact and successes of high-performance computing (HPC) throughout our world. Presentations, emphasizing achievements and opportunities in HPC, are intended for the broad international community, especially students and newcomers to the field. Join us for the inaugural session featuring Professor Satoshi Matsuoka (RIKEN Center for Computational Science, Japan) on June 27, which will be held 6-6:30 am Pacific time.

*Abstract*: Come and learn how high-performance computing (HPC) fosters scientific discovery! We will look back at nearly two decades of supercomputing innovations that have reshaped the gaming industry, enhanced climate/weather predictions, and facilitated research on drug design. Artificial Intelligence (AI) is yet another game changer. Did you know that AI powered by supercomputers has been at the forefront of rapid research responses in the international fight against Covid-19? This presentation will highlight successful HPC advances and call out opportunities for impacting the future of our global society.

*Bio*: Satoshi Matsuoka is the Director of the RIKEN Center for Computational Science, where he oversees the deployment and operation of the Fugaku supercomputer. He has been the leader of the TSUBAME series of supercomputers that pioneered the use of GPUs in supercomputing, and a major driving force behind the development of Fugaku, which won first place in four major rankings of supercomputer performance (Top500, HPCG, HPL-AI, and Graph500) in June 2020. He is the recipient of the 2014 IEEE-CS Sidney Fernbach Memorial Award and more recently was awarded the Medal of Honor with Purple Ribbon from the Emperor of Japan.
Participation is free, but [registration](https://exascaleproject.zoomgov.com/meeting/register/vJIsde6uqzgrGjAIUsnDEUr2bzex1LeggyA) is required.

### Training on Profiling Deep Learning Applications with NVIDIA Nsight, June 30 <a name="dlnsight"/></a>

Hosted by ALCF, this one-hour tutorial will introduce performance analysis techniques for deep learning applications, using the NVIDIA Nsight Systems profiling tool to peek under the covers. The talk will cover how to collect performance information for the neural network layers to relate GPU work back to higher-level concepts, as well as for other sections of code that feed the DNN or consume its results. Participants will gain deeper insight into the execution of and interactions among processes, the OS, GPU CUDA kernels, Tensor Cores, NVLinks, and even nodes. Speakers will discuss ways to access report data for deeper analysis, as well as some common pitfalls with both training and inference applications. For more information, please see the NERSC event page at <https://www.nersc.gov/users/training/events/profiling-deep-learning-applications-with-nvidia-nsight/>.

### Tutorial on Coordinating Dynamic Ensembles of Computations with libEnsemble, July 7 <a name="libensemble"/></a>

Are you running large numbers of computations to train models, perform optimizations based on simulation results, or perform other adaptive parameter studies? If so, consider registering for the upcoming tutorial on libEnsemble, a Python toolkit for coordinating asynchronous and dynamic ensembles of calculations across massively parallel resources. The tutorial, which will be held from 10 am to 11:30 am (Pacific time) on Thursday, July 7, will address how to couple libEnsemble workflows with any user application and how to apply advanced features, including the allocation of variable resources and the cancellation of simulations based on intermediate outputs.
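The kind of asynchronous, dynamic coordination that libEnsemble provides can be illustrated with a small stand-in in plain Python. This sketch uses `concurrent.futures` rather than libEnsemble's actual API, and `simulate` is a placeholder for a real simulation:

```python
# Stand-in for the dynamic-ensemble pattern: a manager keeps a fixed pool of
# workers busy, collects each result as it finishes, and decides whether to
# launch more work. Plain Python, NOT libEnsemble's actual API.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def simulate(x):
    """Placeholder 'simulation': a cheap objective evaluation."""
    return (x - 3.0) ** 2

def run_ensemble(budget=8, workers=3):
    """Run `budget` simulations, at most `workers` at a time, asynchronously."""
    results = {}
    next_x = 0.0                      # trivial 'generator' of new parameters
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = set()
        launched = 0
        while launched < workers:     # seed the initial batch
            pending.add(pool.submit(lambda v=next_x: (v, simulate(v))))
            next_x += 1.0
            launched += 1
        while pending:
            # Collect whichever simulation finishes first, then decide what
            # to launch next -- this is the asynchronous, dynamic part.
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                x, fx = fut.result()
                results[x] = fx
                # A real generator function would steer new points (or cancel
                # running simulations) based on `results` here.
                if launched < budget:
                    pending.add(pool.submit(lambda v=next_x: (v, simulate(v))))
                    next_x += 1.0
                    launched += 1
    return results
```

On top of this basic pattern, libEnsemble adds portable worker management across MPI and multiprocessing, plus the variable-resource allocation and cancellation features the tutorial covers.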
Using examples from current ECP software technology and application integrations, the presenters will demonstrate how libEnsemble's mix-and-match approach can help interface libraries and applications with exascale-level resources. For more information and to register, please see <https://www.exascaleproject.org/event/libensemble_jul2022/>.

([back to top](#top))

---

## NERSC News <a name="section7"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- **NEW** [Scientific Data Architect](http://m.rfer.us/LBL7BZ58O): Support a high-performing data and AI software stack for NERSC users, and collaborate on multidisciplinary, cross-institution scientific projects with scientists and instruments from around the world.
- [HPC Architecture and Performance Engineer](http://m.rfer.us/LBL1rb56n): Contribute to NERSC's understanding of future systems (compute, storage, and more) by evaluating their efficacy across leading-edge DOE Office of Science application codes.
- [Technical and User Support Engineer](http://m.rfer.us/LBLPYs4pz): Assist users with account setup, login issues, project membership, and other requests.
- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.
- [Linux Systems Administrator & DevOps Engineer](http://m.rfer.us/LBLUPg4hf): Help to build and manage container and virtual machine platforms and high-performance storage that complement the supercomputing environment.
- [Cyber Security Engineer](http://m.rfer.us/LBLa_B4hg): Join the team to help protect NERSC resources from malicious and unauthorized activity.
- [NESAP for Data Postdoctoral Fellow](http://m.rfer.us/LBLXEt4g5): Collaborate with computational and domain scientists to enable extreme-scale scientific data analysis on NERSC's Perlmutter supercomputer.
- [Machine Learning Postdoctoral Fellow](http://m.rfer.us/LBL2sf4cR): Collaborate with computational and domain scientists to enable machine learning at scale on NERSC's Perlmutter supercomputer.
- [HPC Performance Engineer](http://m.rfer.us/LBLsGT43z): Join a multidisciplinary team of computational and domain scientists to speed up scientific codes on cutting-edge computing architectures.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### Upcoming Outages <a name="outages"/></a>

- **Cori**
  - 07/20/22 07:00-20:00 PDT, Scheduled Maintenance
  - 08/17/22 07:00-20:00 PDT, Scheduled Maintenance
  - 09/21/22 07:00-20:00 PDT, Scheduled Maintenance
- **HPSS Archive (User)**
  - 06/29/22 09:00-13:00 PDT, Scheduled Maintenance. The HPSS Archive system will be available while engineers apply tape drive firmware updates; some file retrievals may be delayed during the maintenance window.
- **HPSS Regent (Backup)**
  - 06/22/22 09:00-11:00 PDT, Scheduled Maintenance. The system will be unavailable while engineers perform disk system maintenance.
- **Spin**
  - 06/23/22 09:00-13:00 PDT, Scheduled Maintenance. Rancher 2 workloads and the Rancher 2 UI will be unavailable briefly (1-2 min) at least once within the window for upgrades to system software. Please note the updated date/time.
- **Help Portal**
  - 06/29/22 09:00-14:00 PDT, Scheduled Maintenance. The NERSC Help Portal (<https://help.nersc.gov>) will undergo an upgrade on June 29, 2022, from 9 am to 2 pm (Pacific time). During this upgrade, the platform may be intermittently unavailable, and during the unavailable periods users will not be able to access or use the Help Portal. Updates will be provided on the NERSC Status mailing list as the upgrade progresses.

Visit <http://my.nersc.gov/> for the latest status and outage information.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. To remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
