

[Users] NERSC Weekly Email, Week of May 29, 2023

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2023-05-29 07:13:48

# NERSC Weekly Email, Week of May 29, 2023<a name="top"></a> #

## Contents ##

## [Summary of Upcoming Events and Key Dates](#section1) ##
- [Scheduled Outages](#outages)
- [Key Dates](#dates)

## [This Week's Events and Deadlines](#section2) ##
- [Proposals Deadline for NERSC GPU Hackathon, July 12th & 19th-21st, Extended to June 5](#hackathon)
- [Memorial Day Holiday Today, May 29; No Consulting or Account Support](#memday)
- [Join Us Tomorrow for Final Cori to Perlmutter Office Hours Event in Series](#c2pohthisweek)
- [Training on Advanced SYCL Techniques & Best Practices Tomorrow!](#sycl)
- [Cori Retirement Date Approaching: this Wednesday at Noon](#coriretireC)

## [Perlmutter](#section3) ##
- [Perlmutter Machine Status](#perlmutter)
- [Prepare Now for Transitioning to Perlmutter from Cori!](#pmprep)
- [Scratch Purging on Perlmutter Begins in June](#pmscratch)

## [Updates at NERSC](#section4) ##
- [(NEW/UPDATED) Migrate Cori Cron Jobs to scrontab on Perlmutter](#cron)

## [Calls for Participation](#section5) ##

## [NERSC User Community](#section6) ##
- [NERSC Users Slack Channel Guide Now Available](#slackguide)
- [Submit a Science Highlight Today!](#scihigh)

## [Seminars](#section7) ##
- [IDEAS-ECP Webinar on "The OpenSSF Best Practices Badge Program" June 14](#ecpwebinar)

## [Conferences & Workshops](#section8) ##
- [(NEW/UPDATED) Call for Papers: In Situ Infrastructures for Enabling Extreme-Scale Analysis & Visualization, Deadline August 4](#isav)
- [Call for Submissions for US Research Software Engineer Association Conference](#usrse)
- [Call for Participation Open for RSE-eScience-2023 Workshop](#rseesci)
- [Call for Papers: 6th Annual Parallel Applications Workshop, Alternatives To MPI+X (PAW-ATM) at SC23](#pawatm)

## [Training Events](#section9) ##
- [Introduction to NERSC Resources Training, Next Thursday, June 8](#intronersc)
- [OLCF AI for Science at Scale Introductory Training on June 15](#ai4sciolcf)
- [Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts June 21!](#spinup)
- [Attention Novice Parallel Programmers: Sign up for June 22 "Crash Course in Supercomputing"](#crashcourse)

## [NERSC News](#section10) ##
- [Come Work for NERSC!](#careers)
- [About this Email](#about)

([back to top](#top))

---

## Summary of Upcoming Events and Key Dates <a name="section1"/></a> ##

### Scheduled Outages <a name="outages"/></a>

(See <https://www.nersc.gov/live-status/motd/> for more info):

- **Cori**
    - 05/31/23 12:00-07/15/23 12:00 PDT, Retired
        - Cori will be removed from service and retired at this time.
- **Perlmutter**
    - 06/01/23 10:00-19:00 PDT, Scheduled Maintenance (Rolling Reboot)
        - Rolling reboot of login and compute nodes. One small disruption to a few specific login nodes at 10am, followed by the rolling reboot initiating between 2pm and 3pm. There will be a 5 minute disruption to all login nodes during that window. Slightly delayed job start following that window for up to 4 hours. May be canceled with 12 hours notice.
    - 06/07/23 10:00-19:00 PDT, Scheduled Maintenance (Rolling Reboot)
    - 06/14/23 10:00-19:00 PDT, Scheduled Maintenance (Rolling Reboot)
    - 06/21/23 10:00-19:00 PDT, Scheduled Maintenance (Rolling Reboot)
    - 06/28/23 10:00-19:00 PDT, Scheduled Maintenance (Rolling Reboot)
- **HPSS Archive (User)**
    - 05/31/23 09:00-15:00 PDT, Scheduled Maintenance
        - Some retrievals may be delayed due to library maintenance.
- **LDAP**
    - 06/14/23 10:00-14:00 PDT, Scheduled Maintenance
        - Deleting unused trees from the LDAP environment. Updates to LDAP such as changed passwords will be cached and delayed until the completion of the window.

### Key Dates <a name="dates"/></a>

          May 2023              June 2023              July 2023
    Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
        1  2  3  4  5  6               1  2  3                     1
     7  8  9 10 11 12 13   4  5  6  7  8  9 10   2  3  4  5  6  7  8
    14 15 16 17 18 19 20  11 12 13 14 15 16 17   9 10 11 12 13 14 15
    21 22 23 24 25 26 27  18 19 20 21 22 23 24  16 17 18 19 20 21 22
    28 29 30 31           25 26 27 28 29 30     23 24 25 26 27 28 29
                                                30 31

#### This Week

- **May 29, 2023**: [Memorial Day Holiday](#memday) (No Consulting or Account Support)
- **May 30, 2023**:
    - [Cori to Perlmutter Office Hours](#c2pohthisweek)
    - [Advanced SYCL Training](#sycl)
- **May 31, 2023**: [Cori Retirement](#coriretireC)

#### Next Week

- **June 5, 2023**: [NERSC GPU Hackathon Application Deadline](#hackathon)
- **June 8, 2023**: [Intro to NERSC Resources](#intronersc)

#### Future

- **June 14, 2023**: [IDEAS-ECP Monthly Webinar](#ecpwebinar)
- **June 15, 2023**: [AI for Science at Scale](#ai4sciolcf)
- **June 19, 2023**:
    - [US-RSE'23 Poster Abstract Submission Deadline](#usrse)
    - Juneteenth Holiday (No Consulting or Account Support)
- **June 21, 2023**: [SpinUp Workshop](#spinup)
- **June 22, 2023**: [Crash Course in Supercomputing](#crashcourse)
- **June 30, 2023**: [RSE-eScience-2023 Abstract Submission Deadline](#rseesci)
- **July 4, 2023**: Independence Day Holiday (No Consulting or Account Support)
- **July 24, 2023**: [PAW-ATM Submission Deadline](#pawatm)
- **August 4, 2023**: [ISAV Workshop Submission Deadline](#isav)

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### Proposals Deadline for NERSC GPU Hackathon, July 12th & 19th-21st, Extended to June 5 <a name="hackathon"/></a>

NERSC, in conjunction with NVIDIA and OLCF, will be hosting a GPU Hackathon from July 19th-21st, with an opening day on July 12th, as part of the annual GPU Hackathon Series. This year the hackathon will be virtual; selected code teams will be able to test and develop on Perlmutter, with the option to use an ARM system as well.

**The deadline to submit a proposal has been extended to 11:59 PM Pacific, next Monday, June 5, 2023. Apply now!**

Hackathons combine teams of developers with mentors to either prepare their own application(s) to run on GPUs or optimize application(s) that currently run on GPUs. This virtual event consists of a kick-off day, where hackers and mentors video-conference to meet, develop their list of hackathon goals, and get set up on the relevant systems. This is followed by a one-week preparation period before the 3-day intensive primary event.

If you are interested in more information, or would like to submit a short proposal form, please visit the [GPU Hackathon event page](https://www.openhackathons.org/s/siteevent/a0C5e000005Va7IEAS/se000163) or [NERSC's event page](https://sites.google.com/lbl.gov/july-2023-gpu-hackathon/home). Questions can also be directed to NERSC's host, Hannah Ross (HRoss@lbl.gov).

### Memorial Day Holiday Today, May 29; No Consulting or Account Support <a name="memday"/></a>

Consulting and account support will be unavailable **today, May 29**, due to the Berkeley Lab-observed Memorial Day holiday. Regular consulting and account support will resume tomorrow.

### Join Us Tomorrow for Final Cori to Perlmutter Office Hours Event in Series <a name="c2pohthisweek"/></a>

Tomorrow, May 30, from 10 am to 12 noon (Pacific time), drop into NERSC's final Cori to Perlmutter transition virtual office hours with your questions for NERSC experts on migrating your applications and workflows from Cori to Perlmutter.

For more information, including connection information (login required for Zoom link), please see <https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-office-hours-may-2023/>.

### Training on Advanced SYCL Techniques & Best Practices Tomorrow! <a name="sycl"/></a>

NERSC will host a virtual training event on advanced SYCL techniques and best practices tomorrow, May 30.

The SYCL programming model makes heterogeneous programming with C++ more accessible than ever. SYCL uses modern standard C++ and lets developers support a wide variety of devices (CPUs, GPUs, FPGAs, and more) from a single code base. The growing popularity of this programming model means that developers are eager to understand how to use all the features of SYCL and how to achieve great performance for their code.

The tutorial assumes existing knowledge and some experience with using SYCL to develop code for accelerators such as GPUs; video recordings of more introductory SYCL trainings that may help prepare you for this training are available on [this YouTube playlist](https://www.youtube.com/playlist?list=PL20S5EeApOStYsTQ7AFkiIpPzuOBWo0jI).

Topics covered in this training include strategies for optimizing code, managing data flow, using different memory access patterns, understanding work group sizes, using vectorization, the importance of ND ranges, and making the most of the multiple devices available on your architecture.

For more information and to register, please see <https://www.nersc.gov/users/training/events/advanced-sycl-techniques-and-best-practices-may2023/>.

### Cori Retirement Date Approaching: this Wednesday at Noon <a name="coriretireC"/></a>

Cori is scheduled to be **retired in two days: this Wednesday, May 31, 2023, at noon PDT.** NERSC system administrators have put in place a reservation that will prevent any jobs from running past 12:00 noon on May 31.

Users will still be able to log into the Cori login nodes and access the Cori scratch file system for one week, until next Wednesday, June 7, 2023. At that time, the system will be powered down and prepared for removal.

([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is available to all users with an active NERSC account. Some helpful NERSC pages for Perlmutter users:

* [Perlmutter queue information](https://docs.nersc.gov/jobs/policy/#qos-limits-and-charge)
* [Timeline of major changes](https://docs.nersc.gov/systems/perlmutter/timeline/)
* [Current known issues](https://docs.nersc.gov/current/#perlmutter)

This section of the newsletter will be updated regularly with the latest Perlmutter status.

### Prepare Now for Transitioning to Perlmutter from Cori! <a name="pmprep"/></a>

With Cori retiring on Wednesday at noon, now is a good time to make sure that you are prepared to transition your workflows to Perlmutter.

NERSC is here to help -- we have prepared a [Cori to Perlmutter migration webpage](https://docs.nersc.gov/systems/cori/migrate_to_perlmutter/) and provided several trainings recently that will be beneficial to current users looking to transition to Perlmutter, with more events in the works:

- September's [New User Training](https://www.nersc.gov/users/training/events/new-user-training-sept2022/) contained lots of useful information about Perlmutter and how to use it. Slides are available and professionally captioned videos are linked from the training webpage.
- The [GPUs for Science Day](https://www.nersc.gov/users/training/events/gpus-for-science-day-2022-october-25th/) (slides and videos with professional captions available) contained valuable resources for those migrating their applications to Perlmutter GPUs.
- The [Data Day](https://www.nersc.gov/users/training/events/data-day-2022-october-26-27/) event (slides and videos currently available) included content aimed at users who are interested in porting their data workflows to Perlmutter.
- The [Migrating from Cori to Perlmutter](https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-training-dec2022/) training, which took place on December 1, focused on building and running jobs on Perlmutter. The slides and videos with professional captions from this training have been published on the event webpage. A repeat training with minor updates was offered on [March 10](https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-training-march2023/).

### Scratch Purging on Perlmutter Begins in June <a name="pmscratch"/></a>

The purpose of the scratch file systems attached to large NERSC supercomputing resources like Perlmutter and Cori is to provide temporary storage for user jobs to use while running. A scratch file system is less a file system for storing files and more a temporary space for data. Given this, NERSC has scratch purge policies in place, under which NERSC reserves the right to remove old files from the scratch system to free up space.

Starting at a date to be determined in June, NERSC will begin applying the scratch purge policy to Perlmutter's scratch file system. The precise date the policy goes into effect will be announced at least two weeks beforehand in a standalone email from NERSC. At that point, any files older than 8 weeks may be deleted from the scratch file system.

For more information on NERSC file systems and scratch purge policies, please see <https://docs.nersc.gov/filesystems/quotas/>. (An illustrative command for spotting files that would fall outside the purge window appears at the end of this section, after the scrontab item below.)

([back to top](#top))

---

## Updates at NERSC <a name="section4"/></a> ##

### (NEW/UPDATED) Migrate Cori Cron Jobs to scrontab on Perlmutter <a name="cron"/></a>

Some users require regularly scheduled tasks as part of their NERSC workflows. On Cori, these were provided through the Linux cron facility. On Perlmutter, this functionality is instead provided through the Slurm crontab tool, `scrontab`, which combines the familiar functionality of cron with the resiliency of the batch system. We encourage you to examine any crontab scripts that you may still have in place from Cori and convert them to `scrontab` on Perlmutter; a minimal sketch of a converted entry appears below.

For more information on `scrontab`, please see <https://docs.nersc.gov/jobs/workflow/scrontab/>.

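For readers who have never used `scrontab`, here is a minimal sketch of what a converted entry might look like. The script path and schedule are hypothetical placeholders, and additional options (for example a QOS or account) may be required on Perlmutter, so treat this as an illustration and consult the NERSC `scrontab` documentation linked above for the authoritative details.

```shell
# Open your Slurm crontab for editing on a Perlmutter login node:
#   scrontab -e
#
# Hypothetical entry: run a small status-check script every night at 02:00.
# Lines beginning with #SCRON set sbatch-style options for the cron entry
# that follows them, much like #SBATCH directives in a batch script.

# ten-minute wall-time limit and a per-run output file (%j = job ID)
#SCRON --time=00:10:00
#SCRON --output=%j-nightly.out
0 2 * * * $HOME/bin/nightly_check.sh
```

Listing your current entries with `scrontab -l` is a convenient way to confirm what was saved.
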
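Relating to the scratch purge announcement earlier in this section: if you want a rough sense of which of your Perlmutter scratch files would fall outside the 8-week window, a standard `find` invocation is one way to check. This is only an illustrative sketch; it assumes the `$SCRATCH` environment variable points at your Perlmutter scratch directory and uses modification time as a stand-in for whatever age criterion the purge actually applies.

```shell
# List files under your scratch directory not modified in the last
# 8 weeks (56 days) -- the kind of files the purge policy targets.
# Illustrative only; the purge's exact age criterion is defined by NERSC.
find "$SCRATCH" -type f -mtime +56 -ls

# Or just count them and total their size:
find "$SCRATCH" -type f -mtime +56 -printf '%s\n' \
  | awk '{n++; s+=$1} END {printf "%d files, %.1f GiB\n", n, s/2^30}'
```
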
([back to top](#top))

---

## Calls for Participation <a name="section5"/></a> ##

([back to top](#top))

---

## NERSC User Community <a name="section6"/></a> ##

### NERSC Users Slack Channel Guide Now Available <a name="slackguide"/></a>

Thanks to all who submitted to the NERSC Users Slack Channel Guide. A list of public channels for special interests is now available at <https://www.nersc.gov/users/NUG/nersc-users-slack/> (login required -- you may also join the NERSC Users Slack at this link).

If you'd like your group to be added to the list, please contact <NERSC-Community-Managers@lbl.gov> with the name of your channel, at least one contact person, and confirmation that the channel is public.

### Submit a Science Highlight Today! <a name="scihigh"/></a>

Doing cool science at NERSC? NERSC is looking for cool science and code development success stories to highlight to NERSC users, DOE Program Managers, and the broader scientific community in Science Highlights. If you're interested in having your work considered for a featured Science Highlight, please let us know via our [highlight form](https://docs.google.com/forms/d/e/1FAIpQLScP4bRCtcde43nqUx4Z_sz780G9HsXtpecQ_qIPKvGafDVVKQ/viewform).

([back to top](#top))

---

## Seminars <a name="section7"/></a> ##

### IDEAS-ECP Webinar on "The OpenSSF Best Practices Badge Program" June 14 <a name="ecpwebinar"/></a>

The next webinar in the [Best Practices for HPC Software Developers](http://ideas-productivity.org/events/hpc-best-practices-webinars/) series is entitled "The OpenSSF Best Practices Badge Program", and will take place Wednesday, June 14, at 10:00 am Pacific time.

This webinar, presented by Roscoe A. Bartlett (Sandia National Laboratories), will give an overview of the OpenSSF Best Practices Badge Program, which provides resources for the creation, maintenance, and sustainability of robust, high-quality, and secure open-source software. The webinar will also describe how these resources can be used to support advances in software quality and sustainability efforts in CS&E.

There is no cost to attend, but registration is required. Please register [at the event webpage](https://www.exascaleproject.org/event/openssf/).

([back to top](#top))

---

## Conferences & Workshops <a name="section8"/></a> ##

### (NEW/UPDATED) Call for Papers: In Situ Infrastructures for Enabling Extreme-Scale Analysis & Visualization, Deadline August 4 <a name="isav"/></a>

The Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization (ISAV 2023) will be held on Monday, November 13, 2023, in conjunction with SC23. In situ analysis and visualization are essential workflow components in modern HPC. The goal of this workshop is to present research findings, lessons learned, and insights related to developing and applying in situ methods and infrastructure in HPC environments, and to discuss topics of common interest in order to foster and enable in situ analysis and visualization.

ISAV has opened a call for papers on relevant topics of interest, which include, but are not limited to:

- In situ infrastructures
- System resources, hardware, and emerging architectures
- Methods / Algorithms
- Case studies and data sources
- Simulation and workflows
- Requirements and usability

For more information and to submit, please see <https://isav-workshop.github.io/2023/>. Submissions are due August 4.

### Call for Submissions for US Research Software Engineer Association Conference <a name="usrse"/></a>

Submissions are now open for the first-ever US Research Software Engineer Association Conference (US-RSE'23), which will be held October 16-18, 2023, in Chicago, Illinois. The theme of the conference is "Software Enabled Discovery and Beyond." Topics of interest include but are not limited to:

- Discovery enabled by software
- Architectures, frameworks, libraries, and technology trends
- Research data management
- Support for scalability and data-driven methods
- Improving the reproducibility of research
- Usability, portals, workflows, and tools
- Sustainability, security, and stability
- Software engineering approaches supporting research
- Community engagement
- Diversity, Equity, and Inclusion for RSEs and in RSEng
- Training and workforce development
- Building an RSE profession

For more information, including how to submit, please see <https://us-rse.org/usrse23/>. Poster abstracts are due June 19.

### Call for Participation Open for RSE-eScience-2023 Workshop <a name="rseesci"/></a>

A workshop on Research Software Engineers (RSEs) in eScience (RSE-eScience-2023) is being held as part of [eScience 2023](https://www.escience-conference.org/2023/). RSEs combine professional software engineering expertise with an intimate understanding of research, and are uniquely placed in the eScience ecosystem to ensure the development of sustainable research environments and reproducible research outputs. The theme of this workshop is sustainable RSE ecosystems, encompassing both the role of RSEs in sustainable eScience and making the RSE ecosystem itself more sustainable.

Prospective participants are encouraged to submit talk abstracts of at most 300 words on topics related to sustainable RSE ecosystems within eScience. Topics of interest include (but are not limited to):

- Experiences as an RSE in eScience
- Struggles between RSEs and domain scientists -- how to find the common ground?
- Different roles in the development of research software
- How to make the eScience and RSE ecosystem more sustainable?
- What can the eScience community do to support their RSEs? How can RSEs develop and progress their careers within the eScience community?
- How to argue for funding to develop and sustain scientific software, and support the RSEs doing the work?
- Examples of RSEs enabling sustainability within the eScience community

For more information, please see <https://us-rse.org/rse-escience-2023/>. Abstract submissions are due June 30.

### Call for Papers: 6th Annual Parallel Applications Workshop, Alternatives To MPI+X (PAW-ATM) at SC23 <a name="pawatm"/></a>

The Parallel Applications Workshop, Alternatives to MPI+X (PAW-ATM) is seeking full-length papers and talk abstracts for the workshop to be held this November in conjunction with SC23.

PAW-ATM is a forum for discussing HPC applications written in alternatives to MPI+X. These alternatives include, but are not limited to, new languages (e.g., Chapel, Regent, XcalableMP), frameworks for large-scale data science (e.g., Arkouda, Dask, Spark), and extensions to existing languages (e.g., Charm++, COMPSs, Fortran, Legion, UPC++).

Topics of interest include, but are not limited to:

- Novel application development using high-level parallel programming languages and frameworks.
- Examples that demonstrate performance, compiler optimization, error checking, and reduced software complexity.
- Applications from artificial intelligence, data analytics, bioinformatics, and other novel areas.
- Performance evaluation of applications developed using alternatives to MPI+X and comparisons to standard programming models.
- Novel algorithms enabled by high-level parallel abstractions.
- Experience with the use of new compilers and runtime environments.
- Libraries using or supporting alternatives to MPI+X.
- Benefits of hardware abstraction and data locality on algorithm implementation.

Submissions close July 24, 2023. For more information and to submit a paper, please visit <https://go.lbl.gov/paw-atm>.

([back to top](#top))

---

## Training Events <a name="section9"/></a> ##

### Introduction to NERSC Resources Training, Next Thursday, June 8 <a name="intronersc"/></a>

NERSC is offering a training entitled "Introduction to NERSC Resources" next Thursday, June 8. This training, offered through the 2023 Berkeley Lab Computing Sciences Summer Student program and open to NERSC users, is aimed at novice users of NERSC resources.

Topics covered include: a systems overview, connecting to NERSC, the software environment, file systems and data management/transfer, and available data analytics software and services. Details on how to compile applications and run jobs on NERSC systems will also be presented, including hands-on exercises on Perlmutter. The class will also showcase various online resources available on the NERSC web pages.

For more information and to register, please see <https://www.nersc.gov/users/training/events/introduction-to-nersc-resources-jun2023/>.

### OLCF AI for Science at Scale Introductory Training on June 15 <a name="ai4sciolcf"/></a>

OLCF is holding a series of training events on the topic of AI for Science at Scale, which are open to NERSC users. The first training in the series is scheduled for June 15, and will provide an introduction to AI/ML/DL principles used for science in an HPC environment. Participants will have the opportunity to apply techniques learned in the session to run hands-on examples on OLCF's Ascent system.

For more information and to register, please see <https://www.nersc.gov/users/training/events/olcf-ai-training-series-ai-for-science-at-scale-introduction-jun2023/>.

### Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts June 21! <a name="spinup"/></a>

Spin is a service platform at NERSC based on Docker container technology. It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. New large-memory nodes have been added to the platform, increasing its potential for new memory-constrained applications.

To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin: <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p>.

Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! Applications for sessions that begin on Wednesday, June 21 [are now open](https://www.nersc.gov/users/training/spin).

SpinUp is hands-on and interactive, so space is limited. Participants will attend an instructional session and a hack-a-thon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques. Local and remote participants are welcome.

If you can't make these upcoming sessions, never fear! More sessions are being planned for later in the year.

See a video of Spin in action at the [Spin documentation](https://docs.nersc.gov/services/spin/) page.

### Attention Novice Parallel Programmers: Sign up for June 22 "Crash Course in Supercomputing" <a name="crashcourse"/></a>

In collaboration with the Berkeley Lab Computing Sciences Summer Student Program, NERSC is again offering the "Crash Course in Supercomputing", open to NERSC users as well as Berkeley Lab summer students, on Thursday, June 22.

In this course, students will learn to write parallel programs that can be run on a supercomputer. We begin by discussing the concepts of parallelization before introducing MPI and OpenMP, two of the most widely used parallel programming models. Hands-on exercises reinforce the concepts learned in the course.

Training accounts will be provided for students who have not yet set up a NERSC account.

For more information and to register, please see <https://www.nersc.gov/users/training/events/crash-course-in-supercomputing-jun2023/>.

([back to top](#top))

---

## NERSC News <a name="section10"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- [HPC Systems Software Engineer](http://m.rfer.us/LBLSQh6ZH): Help architect, deploy, configure, and operate NERSC's large-scale, leading-edge HPC systems.
- [Computer Systems Engineer 3](http://m.rfer.us/LBL9_U6Ez): Monitor, administer, and optimize NERSC's storage resources.
- [Site Reliability Engineer](http://m.rfer.us/LBLO7X6Ai): Provide a variety of engineering support services to ensure that NERSC is accessible, reliable, secure, and available to its scientific users.
- [HPC Performance Engineer](http://m.rfer.us/LBLYLg68i): Enable advanced science at scale on NERSC's Perlmutter supercomputer.
- [Network Engineer](http://m.rfer.us/LBLNxI5jz): Engineer and manage the NERSC data-center network to support NERSC's world-class compute and storage systems.
- [HPC Storage Systems Developer](http://m.rfer.us/LBLdsq5XB): Use your systems programming skills to develop the High Performance Storage System (HPSS) and supporting software.
- [HPC Storage Infrastructure Engineer](http://m.rfer.us/LBLqP65X9): Join the team of engineers integrating NERSC's distributed parallel file systems with NERSC's computational and networking infrastructure, troubleshoot performance issues at scale, and develop innovative solutions to optimize operational and user productivity.
- [HPC Storage Systems Analyst](http://m.rfer.us/LBLgDg5VX): Join the team of engineers and programmers supporting HPSS and parallel center-wide systems.
- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts.

In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.
