

[Users] NERSC Weekly Email, Week of March 20, 2023

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2023-03-20 16:22:32

# NERSC Weekly Email, Week of March 20, 2023<a name="top"></a> #

## Contents ##

## [Summary of Upcoming Events and Key Dates](#section1) ##

- [Scheduled Outages](#outages)
- [Key Dates](#dates)

## [This Week's Events and Deadlines](#section2) ##

- [Register for April 5-6 N-Ways to GPU Programming Bootcamp by Wednesday](#nways)
- [Sign the NERSC Appropriate Use Policy / Code of Conduct Today!](#aupsign)

## [Perlmutter](#section3) ##

- [Perlmutter Machine Status](#perlmutter)
- [Prepare Now for Transitioning to Perlmutter from Cori!](#pmprep)
- [Perlmutter Maintenance Updates Resolving GPU Issues](#pmmaint)
- [(NEW/UPDATED) Help NERSC with Perlmutter System Shakeout -- Charging Holiday through March 29!](#pmchargingholiday)

## [Updates at NERSC](#section4) ##

- [E4S v22.11 Now Available on Perlmutter!](#e4s)
- [E4S Version 21.11 Deprecation March 31](#e4sdep)
- [Attention Students: NERSC Summer Internships Available!](#summerprojects)
- [Cori Retirement Date: End of April](#coriretire)

## [NERSC User Community](#section5) ##

- [(NEW/UPDATED) Join Fortran Users of NERSC (FUN) Today!](#fun)
- [Please Submit to NERSC Users Slack Channel Guide!](#nerscslack)

## [Calls for Participation](#section6) ##

- [Call for Submissions for US Research Software Engineer Association Conference](#usrse)
- [(NEW/UPDATED) Call for Participation Open for RSE-eScience-2023 Workshop](#rseesci)

## [Upcoming Training Events](#section7) ##

- [(NEW/UPDATED) Learn to Write Accelerated Code at Expert Level with Codee, April 25-26!](#codee)
- [Join NERSC for Cori to Perlmutter Office Hours on March 31](#c2poh)
- [DOE Cross-facility Workflows Workshop April 12, 2023](#doewf)
- [IDEAS-ECP Webinar on "Facilitating Electronic Structure Calculations on GPU based Exascale Platforms" on April 12](#ecpwebinar)

## [NERSC News](#section8) ##

- [Come Work for NERSC!](#careers)
- [About this Email](#about)

([back to top](#top))

---

## Summary of Upcoming Events and Key Dates <a name="section1"/></a> ##

### Scheduled Outages <a name="outages"/></a>

(See <https://www.nersc.gov/live-status/motd/> for more info):

- **Cori**
  - 04/19/23 07:00-20:00 PDT, Scheduled Maintenance
- **Perlmutter**
  - 03/22/23 06:00-22:00 PDT, Scheduled Maintenance
  - 04/04/23 06:00-22:00 PDT, Scheduled Maintenance
  - 04/27/23 06:00-22:00 PDT, Scheduled Maintenance
- **HPSS Archive (User)**
  - 03/29/23 09:00-13:00 PDT, Scheduled Maintenance. The system will remain available during firmware upgrades.
  - 04/04/23 09:00-04/06/23 17:00 PDT, Scheduled Maintenance. Some retrievals may be delayed during tape library preventative maintenance.
- **HPSS Regent (Backup)**
  - 04/04/23 09:00-04/06/23 17:00 PDT, Scheduled Maintenance. Some retrievals may be delayed during tape library preventative maintenance.
- **Authentication Services**
  - 03/22/23 10:00-11:00 PDT, Scheduled Maintenance. Web-based authentication will be briefly unavailable (5-10 min) during the window due to a software upgrade that requires a database reconfiguration.
### Key Dates <a name="dates"/></a>

         March 2023             April 2023              May 2023
    Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa
              1  2  3  4                      1       1  2  3  4  5  6
     5  6  7  8  9 10 11    2  3  4  5  6  7  8    7  8  9 10 11 12 13
    12 13 14 15 16 17 18    9 10 11 12 13 14 15   14 15 16 17 18 19 20
    19 20 21 22 23 24 25   16 17 18 19 20 21 22   21 22 23 24 25 26 27
    26 27 28 29 30 31      23 24 25 26 27 28 29   28 29 30 31
                           30

#### This Week

- **March 22, 2023**: [Registration Deadline for NVIDIA N-Ways to GPU Programming Bootcamp](#nways)

#### Next Week

- **March 29, 2023**: [End of Perlmutter System Shakeout/Charging Holiday Period](#pmchargingholiday)
- **March 31, 2023**:
  - [Cori Large Memory Nodes Offline for Move to Perlmutter](#coriretire)
  - [Cori GPU Nodes Permanently Retired](#coriretire)
  - [Cori to Perlmutter Office Hours](#c2poh)
  - [E4S/21.11 on Perlmutter Deprecated](#e4sdep)

#### Future

- **April 4, 2023**:
  - [Fortran Users of NERSC Discussion Group / Office Hours](#fun)
  - [US-RSE'23 Workshop/Tutorial/BoF Submissions Due](#usrse)
- **April 5-6, 2023**: [NVIDIA N-Ways to GPU Programming Bootcamp](#nways)
- **April 12, 2023**:
  - [DOE Cross-Facility Workflows Workshop](#doewf)
  - [IDEAS-ECP Monthly Webinar](#ecpwebinar)
- **End of April, 2023**: [Cori Retirement](#coriretire)
- **May 1, 2023**: [US-RSE'23 Papers & Notebooks Submission Deadline](#usrse)
- **June 19, 2023**: [US-RSE'23 Poster Abstract Submission Deadline](#usrse)
- **June 30, 2023**: [RSE-eScience-2023 Abstract Submission Deadline](#rseesci)

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### Register for April 5-6 N-Ways to GPU Programming Bootcamp by Wednesday <a name="nways"/></a>

NERSC, in collaboration with the OpenACC Organization and NVIDIA, is hosting an online N-Ways to GPU Programming bootcamp for two days, Wednesday and Thursday, April 5-6, 2023. Beginners to GPU programming are especially encouraged to participate.

During this two-day bootcamp, participants will learn about multiple GPU programming models, from which they can select the best one for their needs for running their scientific codes on GPUs. The programming models introduced include OpenACC, OpenMP, stdpar, and CUDA C. Hands-on exercises will guide you step by step, and a code challenge event will help develop your skills as you port mini-apps with a team of fellow attendees. Teaching assistants will be on hand throughout the training.

For more information and to register, please see <https://www.nersc.gov/users/training/events/nways-gpu-programming-bootcamp-apr2023/>. This event has limited capacity, and active NERSC/ALCF/OLCF users will be prioritized; for best results, **register before March 22**. Acceptance will be confirmed via email close to the bootcamp start date.

### Sign the NERSC Appropriate Use Policy / Code of Conduct Today! <a name="aupsign"/></a>

All active users are required to sign the NERSC [Appropriate Use Policy](https://www.nersc.gov/users/policies/appropriate-use-of-nersc-resources/) and [Code of Conduct](https://www.nersc.gov/users/nersc-code-of-conduct/). The process is simple -- just log into [Iris](https://iris.nersc.gov) and a dialog will pop up with the policies for you to read over and approve. If you log into Iris and find that no dialog pops up, you have already agreed to the new policies and no action is required.

The accounts of users who do not agree to the Appropriate Use Policy and Code of Conduct will be **deactivated on April 17**.
([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is available to all users with an active NERSC account. See <https://docs.nersc.gov/current/#perlmutter> for a list of current known issues and <https://docs.nersc.gov/jobs/policy/#qos-limits-and-charges> for tables of the queues available on Perlmutter.

This newsletter section will be updated regularly with the latest Perlmutter status.

### Prepare Now for Transitioning to Perlmutter from Cori! <a name="pmprep"/></a>

With Cori scheduled to be retired at the end of April, now is a good time to make sure that you are prepared to transition your workflows to Perlmutter. NERSC is here to help -- we have prepared a [Cori to Perlmutter migration webpage](https://docs.nersc.gov/systems/cori/migrate_to_perlmutter/) and recently provided several trainings that will be beneficial to current users looking to transition to Perlmutter, with more events in the works.

- September's [New User Training](https://www.nersc.gov/users/training/events/new-user-training-sept2022/) contained lots of useful information about Perlmutter and how to use it. Slides are available and professionally captioned videos are linked from the training webpage.
- The [GPUs for Science Day](https://www.nersc.gov/users/training/events/gpus-for-science-day-2022-october-25th/) (slides and videos with professional captions available) contained valuable resources for those migrating their applications to Perlmutter GPUs.
- The [Data Day](https://www.nersc.gov/users/training/events/data-day-2022-october-26-27/) event (slides and videos currently available) included content aimed at users who are interested in porting their data workflows to Perlmutter.
- The [Migrating from Cori to Perlmutter](https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-training-dec2022/) training, which took place on December 1, focused on building and running jobs on Perlmutter. The slides and videos with professional captions from this training have been published on the event webpage. A repeat training with minor updates was offered on [March 10](https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-training-march2023/).

In addition, NERSC is providing more [Cori to Perlmutter Office Hours](#c2poh) in February and March. We encourage users to bring their questions to these events, where they will be answered by NERSC staff experts.

### Perlmutter Maintenance Updates Resolving GPU Issues <a name="pmmaint"/></a>

Last Wednesday, NERSC temporarily disabled a network feature for performant GPU-RDMA to mitigate a critical issue leading to node failures. This resulted in substantially lower performance for applications using these capabilities for inter-node communication (such as CUDA-aware MPI or GASNet), but meant that other types of jobs that had been crashing could once again run.

As of 9:30 am this morning, March 20, we were able to re-enable the network feature after NERSC and the associated vendors developed a workaround, which was deployed with a non-disruptive rolling reboot of the compute nodes. We expect to deploy a full fix during the next maintenance, scheduled for this Wednesday.

### (NEW/UPDATED) Help NERSC with Perlmutter System Shakeout -- Charging Holiday through March 29! <a name="pmchargingholiday"/></a>
<a name="pmchargingholiday"/></a> We are pleased to have identified and resolved the bulk of major issues on Perlmutter, but NERSC still needs your help to identify and debug any remaining issues on Perlmutter! There are still some issues that show up only at scale with our diverse workload. To encourage you to use the system during this period, all GPU and CPU jobs running through the end of the day (11:59:59 pm Pacific time) on March 29 will run free of charge against your allocation. Please note that in order to ensure a typical workload, while jobs will run free of charge, your account must be able to support paying for that job. ([back to top](#top)) --- ## Updates at NERSC <a name="section4"/></a> ## ### E4S v22.11 Now Available on Perlmutter! <a name="e4s"/></a> We are pleased to announce that E4S version 22.11 is now available on Perlmutter. E4S is a curated collection of open-source software packages for developing, deploying, and running scientific applications on high-performance computing platforms. Version 22.11 includes over 500 total packages compiled with the GCC, NVHPC, CCE, and CUDA compilers. To use the software, begin with `module load e4s/22.11`. To use software compatible with your particular programming environment, use `spack env activate -V <compiler>` where the `<compiler>` argument is one of `cce, cuda, gcc, nvhpc`. For more information on what's available in E4S/22.11, as well as how to use it, please see <https://docs.nersc.gov/applications/e4s/perlmutter/22.11/>. ### E4S Version 21.11 Deprecation March 31 <a name="e4sdep"/></a> E4S version 21.11 will be deprecated effective March 31, 2023. On that date, NERSC will remove from Perlmutter the e4s/21.11 modulefile and the software directories for that version. If you are using e4s/21.11, please migrate to the latest version. ### Attention Students: NERSC Summer Internships Available! <a name="summerprojects"/></a> Are you an undergraduate or graduate student who will be enrolled as a student in the fall? Are you interested in working with NERSC staff on interesting technical projects? NERSC is looking for motivated students to join us for the summer in a paid internship role. Qualifications vary depending on the project, and pay is based on years of education completed. We have created a list of summer internship projects on our website at <https://www.nersc.gov/research-and-development/internships/>. Projects are still being added to the list so please check back for further additions. ### Cori Retirement Date: End of April <a name="coriretire"/></a> NERSC plans to retire Cori at the end of April. The KNL and Haswell nodes will be available to users through then. Time used on Cori is charged against your project's CPU allocation. The Cori Large Memory nodes will be taken offline on March 31, to prepare them to be moved to Perlmutter. The GPU nodes on Cori will be taken offline and decommissioned on March 31. Cori has reached the end of its lifetime. No new parts are being manufactured for the machine, and spare parts, if they exist, are primarly refurbished. We expect failures to grow more common going forward, and recovery from failures to take longer. Of particular concern is the scratch file system on Cori, for which spare parts are particularly scarce. Failures could result in data loss, making it especially imperative to back up important data to a more reliable resource (such as CFS, HPSS, or a file system outside NERSC) in a timely manner. 
([back to top](#top))

---

## NERSC User Community <a name="section5"/></a> ##

### (NEW/UPDATED) Join Fortran Users of NERSC (FUN) Today! <a name="fun"/></a>

NERSC is forming a new user group, called Fortran Users of NERSC (FUN). If you maintain or run Fortran code on NERSC systems, we'd love to have you join us!

As we are starting up this group, we are looking for feedback from interested Fortran users. What kinds of activities and services would you like to see through this group? If you're interested in participating or hearing more about it:

- Please contact Brad Richardson (brad.richardson@lbl.gov)
- Join the email list (fortranusers@lbl.gov): <https://groups.google.com/u/1/a/lbl.gov/g/fortranusers>
- Join the #fortran channel on the NERSC User Group Slack
- Fill out the Fortran Users of NERSC (FUN) Interest Survey: <https://forms.gle/Y3UjQnRLvp5GRRFe7>

Don't miss the first FUN discussion group/office hours on Tuesday, April 4th, at 1 pm (Pacific time)! Make sure to come introduce yourself, let us know what you use Fortran for at NERSC, and tell us what you'd like to get out of the group. Here's a link to [add it to your calendar](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=MTg1NTU5ZzdsaGN0MWY5Y2FxNzlndWFwdTUgbGJsLmdvdl9sczBnZHRnaTdiOTNqcmVkbGVzMGlibDB1NEBn&tmsrc=lbl.gov_ls0gdtgi7b93jredles0ibl0u4%40group.calendar.google.com).

### Please Submit to NERSC Users Slack Channel Guide! <a name="nerscslack"/></a>

In an effort to spread user knowledge and give more visibility to user communities, NERSC is creating an official channel guide for the [NERSC Users Slack](https://www.nersc.gov/users/NUG/nersc-users-slack/) (login required) special interest groups that welcome new users. The guide will be featured on nersc.gov and display a list of channels, topics, and at least one point of contact for each group.

If you are interested in starting a new special interest group channel on NERSC Users Slack around a topic common to members of the NERSC user community, we encourage you to do so. To have your group featured on this public list, any NERSC user must be welcome to join your channel, and there must be at least one person willing to serve as a point of contact for the group. If your group fulfills these requirements, please send an email to <nersc-community-managers@lbl.gov> that includes the following:

- The name of your channel;
- A brief description (no more than 50 words);
- The names/NERSC Users Slack usernames of one or more people who will serve as the group's point of contact;
- Confirmation that any NERSC user is welcome to join the channel.

**Note:** Due to an error in the email alias settings, some submissions were briefly rejected. If your email to <nersc-community-managers@lbl.gov> was rejected, please try again today!

([back to top](#top))

---

## Calls for Participation <a name="section6"/></a> ##

### Call for Submissions for US Research Software Engineer Association Conference <a name="usrse"/></a>

Submissions are now open for the first-ever US Research Software Engineer Association Conference (US-RSE'23), which will be held October 16-18, 2023, in Chicago, Illinois. The theme of the conference is "Software Enabled Discovery and Beyond."
Topics of interest include but are not limited to:

- Discovery enabled by software
- Architectures, frameworks, libraries, and technology trends
- Research data management
- Support for scalability and data-driven methods
- Improving the reproducibility of research
- Usability, portals, workflows, and tools
- Sustainability, security, and stability
- Software engineering approaches supporting research
- Community engagement
- Diversity, Equity, and Inclusion for RSEs and in RSEng
- Training and workforce development
- Building an RSE profession

For more information, including how to submit, please see <https://us-rse.org/usrse23/>. **Workshop, Tutorial, and Birds of a Feather submissions are now due April 4**; paper and notebook submissions are due May 1; and poster abstracts are due June 19.

### (NEW/UPDATED) Call for Participation Open for RSE-eScience-2023 Workshop <a name="rseesci"/></a>

A workshop on Research Software Engineers (RSEs) within eScience (RSE-eScience-2023) is being held as part of [eScience 2023](https://www.escience-conference.org/2023/). RSEs combine professional software engineering expertise with an intimate understanding of research, and are uniquely placed in the eScience ecosystem to ensure the development of sustainable research environments and reproducible research outputs. The theme of this workshop is sustainable RSE ecosystems, encompassing both the role of RSEs in sustainable eScience and making the RSE ecosystem itself more sustainable.

Prospective participants are encouraged to submit talk abstracts of at most 300 words on topics related to sustainable RSE ecosystems within eScience. Topics of interest include (but are not limited to):

- Experiences as an RSE in eScience
- Struggles between RSEs and domain scientists -- how to find the common ground?
- Different roles in the development of research software
- How to make the eScience and RSE ecosystem more sustainable?
- What can the eScience community do to support their RSEs? How can RSEs develop and progress their careers within the eScience community?
- How to argue for funding to develop and sustain scientific software, and support the RSEs doing the work?
- Examples of RSEs enabling sustainability within the eScience community

For more information, please see <https://us-rse.org/rse-escience-2023/>. Abstract submissions are due June 30.

([back to top](#top))

---

## Upcoming Training Events <a name="section7"/></a> ##

### (NEW/UPDATED) Learn to Write Accelerated Code at Expert Level with Codee, April 25-26! <a name="codee"/></a>

Appentra's Codee Analyzer is a programming development tool for C/C++/Fortran parallel codes on multicore CPUs and GPUs using OpenMP and OpenACC. One great feature of **the Codee Analyzer tool is that it can automatically insert OpenMP or OpenACC directives in your codes** to run on CPUs or offload to accelerator devices such as GPUs, so that novice programmers can write code at the expert level. This programming development tool also provides code inspections for debugging and improving OpenMP/OpenACC programming on GPUs with a systematic, more predictable approach that leverages parallel programming best practices.

Join NERSC on April 25 & 26 as Codee staff present a two-part training series intended to help new and experienced programmers understand best practices for CPU and GPU programming using OpenMP and OpenACC. This training event will be held online using Zoom.
Part one will include a quick presentation of the Codee command-line tools, using as a guide several well-known C/C++/Fortran codes that can be accelerated on CPU and GPU through the tips documented in the open catalog of performance optimization best practices available on the Codee website. The session will consist of short demos and hands-on exercises with step-by-step guides on Perlmutter.

Part two will showcase the Codee command-line tools using MBedTLS (a library of cryptographic codes) and ZPIC (a particle-in-cell code), focusing on CPU and GPU programming challenges that appear frequently in real scientific and engineering applications. Users can also bring their own codes to explore using Codee for Part 2.

For more information and to register, please see <https://www.nersc.gov/users/training/events/codee-training-series-apr2023/>.

### Join NERSC for Cori to Perlmutter Office Hours on March 31 <a name="c2poh"/></a>

Buoyed by the success of our previous round of office hours, in which nearly 130 users were helped, NERSC has scheduled additional Cori to Perlmutter office hours. Users are invited to drop into our virtual office hours (held on Zoom) to get help from NERSC experts on migrating their applications and workflows to Perlmutter from Cori.

User questions of any kind are welcome at all sessions, but we extend a special invitation to those working on machine learning to the final session of the series next Friday, March 31, when NERSC machine learning experts will staff the session along with general HPC experts.

For more information, including connection information (login required for Zoom link), please see <https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-office-hours-febmar-2023>.

### DOE Cross-facility Workflows Workshop April 12, 2023 <a name="doewf"/></a>

Do you need help choosing the right workflow tool? Do you have questions about running workflows on supercomputers? Join us for a joint ALCF/NERSC/OLCF training on the topic of workflows and workflow tools across the DOE. We will offer a half-day Zoom training with hands-on examples of GNU Parallel, Parsl, FireWorks, and Balsam -- all of which can be used at ALCF, NERSC, and OLCF. We'll help answer questions like:

- Do I need a workflow tool?
- How do I choose the right workflow tool?
- What are the advantages and disadvantages of various tools?
- How do I install, configure, and use these tools on DOE systems?

We invite anyone who is interested in workflow tools, from beginners to experienced users, to join us on April 12 from 11 AM-4 PM Eastern / 8 AM-1 PM Pacific. Please see our [event page](https://www.nersc.gov/users/training/events/doe-cross-facility-workflows-training-april2023) and register for our virtual training at [this link](https://forms.gle/EUZsCdLraXig5taHA).

To ensure you have access to all systems required for the exercises, please **register by April 5**. Anyone who registers after this date is still welcome to attend, but may not be able to complete the exercises.
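For a taste of the simplest of these tools, here is a minimal GNU Parallel sketch (not taken from the workshop materials; `process_input` and the `inputs/` directory are hypothetical placeholders for your own application and data):

```bash
# Run one task per input file, keeping at most 8 tasks running at a time.
# {} expands to each input path; {/.} is its basename without the extension.
mkdir -p results
parallel --jobs 8 './process_input {} > results/{/.}.out' ::: inputs/*.dat
```

Tools such as Parsl, FireWorks, and Balsam address the more complex cases -- task dependencies, job submission, and multi-site execution -- that a one-liner like this does not.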
### IDEAS-ECP Webinar on "Facilitating Electronic Structure Calculations on GPU based Exascale Platforms" on April 12 <a name="ecpwebinar"/></a>

The next webinar in the [Best Practices for HPC Software Developers](http://ideas-productivity.org/events/hpc-best-practices-webinars/) series is entitled "Facilitating Electronic Structure Calculations on GPU based Exascale Platforms", and will take place **Wednesday, April 12, at 10:00 am Pacific time.**

In this webinar, Jean-Luc Fattebert (Oak Ridge National Lab) will discuss the PROGRESS and BML libraries, developed within ECP’s Co-design Center for Particle Applications (CoPA) project, which allow electronic structure codes to offload their most expensive kernels, with a unified interface for various matrix formats and computer architectures. The webinar will focus on the implementations and algorithmic choices made in those libraries, and on lessons learned while trying to achieve performance portability on exascale platforms. Specifically, the webinar will discuss eigensolvers and their alternatives, as well as strong scaling for fast time-to-solution in molecular dynamics.

There is no cost to attend, but registration is required. Please register [at the event webpage](https://www.exascaleproject.org/event/copa/).

([back to top](#top))

---

## NERSC News <a name="section8"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- [HPC Programming Model & Performance Engineer](http://m.rfer.us/LBL28f5xl): Contribute to efforts in developing and implementing state-of-the-art HPC programming models and software environments for NERSC users.
- [Scientific IO & Data Architect](http://m.rfer.us/LBLzdP5jy): Collaborate with scientists to enable their data, AI, and analytics needs using NERSC supercomputers.
- [Network Engineer](http://m.rfer.us/LBLNxI5jz): Engineer and manage the NERSC data-center network to support NERSC's world-class compute and storage systems.
- [HPC User Environment Architect](http://m.rfer.us/LBLtG15iO): Help NERSC define and implement innovative development environments and programming models that scientists can use to get the most out of advanced computing architectures for their scientific research.
- [Linux Systems Administrator / DevOps Engineer](http://m.rfer.us/LBL8bO5dU): Help build and manage NERSC's container and virtual machine platforms and deploy services that help our supercomputing center run smoothly.
- [Data Science Workflows Architect](http://m.rfer.us/LBLAlL5b5): Work with multidisciplinary teams to adapt and optimize workflows for HPC systems, including data transfer, code optimization, AI, and automation.
- [HPC Storage Systems Developer](http://m.rfer.us/LBLdsq5XB): Use your systems programming skills to develop the High Performance Storage System (HPSS) and supporting software.
- [HPC Systems Software Engineer](http://m.rfer.us/LBL3Hv5XA): Combine your software and system development skills to support world-class HPC computational systems.
- [HPC Storage Infrastructure Engineer](http://m.rfer.us/LBLqP65X9): Join the team of engineers integrating NERSC's distributed parallel file systems with NERSC's computational and networking infrastructure, troubleshoot performance issues at scale, and develop innovative solutions to optimize operational and user productivity.
- [HPC Storage Systems Analyst](http://m.rfer.us/LBLgDg5VX): Join the team of engineers and programmers supporting HPSS and parallel center-wide systems.
- [HPC Architecture and Performance Engineer](http://m.rfer.us/LBL1rb56n): Contribute to NERSC's understanding of future systems (compute, storage, and more) by evaluating their efficacy across leading-edge DOE Office of Science application codes.
- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
