
Email Announcement Archive

[Users] NERSC Weekly Email, Week of August 1, 2022

Author: Steve Leak <sleak_at_nersc.gov>
Date: 2022-08-01 13:08:40

# NERSC Weekly Email, Week of August 1, 2022<a name="top"></a> #

## Contents ##

- [Summary of Upcoming Events and Key Dates](#dates)

## [NERSC Status](#section1) ##

- [NERSC Operations Continue with Minimal Changes](#curtailment)

## [This Week's Events and Deadlines](#section2) ##

- [(NEW/UPDATED) OpenACC & Hackathons 2022 Summit this week, August 2-4](#oaccsummit)
- [(NEW/UPDATED) Call for Papers: PAW-ATM at SC22, deadline extended till this Friday Aug 5](#pawatm)
- [(NEW/UPDATED) Submissions due this Friday: Workshop on Accelerator Programming Using Directives](#waccpd)

## [Perlmutter](#section3) ##

- [(NEW/UPDATED) Perlmutter Machine Status](#perlmutter)
- [(NEW/UPDATED) Perlmutter Network Upgrade Is In Progress: Slingshot11 Is Now The Default](#pmss10ss11)
- [(NEW/UPDATED) Perlmutter Maintenance Schedule for July & August](#pmmaint)

## [Updates at NERSC](#section4) ##

- [ERCAP Allocation Requests Open August 15](#ercap)
- [(NEW/UPDATED) Cori to be retired after AY2022](#coriretire)
- [(NEW/UPDATED) E4S 22.05 Now Available on Perlmutter](#e4s)
- [Counters Used by Performance Tools Re-Enabled](#vtune)

## [Calls for Participation](#section5) ##

- [(NEW/UPDATED) Nominations Open for NERSC Early Career HPC Achievement Awards](#ecaward)
- [Call for Submissions: Combined International Workshop on Interactive / Urgent Supercomputing at SC22](#urgentcompute)
- [Registration is Open for Confab22 ESnet User Meeting, October 12-13!](#confab22)

## [Upcoming Training Events](#section6) ##

- [Announcing Beyond-DFT Electrochemistry with Accelerated & Solvated Techniques (BEAST) Workshop, August 15-16](#beast)
- [IDEAS-ECP Webinar on "Effective Strategies for Writing Proposal Work Plans for Research Software", August 10](#ecpwebinar)
- [Two-Part OpenMP Offload Training, August 11 & September 1](#omptrain)
- [Using R on HPC Clusters Training, August 17 & 19](#r4hpc)
- [(NEW/UPDATED) HDF5 Workshop, Aug 31, 2022](#hdf5)
- [(NEW/UPDATED) Nsight Systems and Nsight Compute Profiling Workshop, August 31, 2022](#nsight)

## [NERSC News](#section7) ##

- [Come Work for NERSC!](#careers)
- [About this Email](#about)

## Summary of Upcoming Events and Key Dates <a name="dates"/></a> ##

**Scheduled Outages** (See <http://my.nersc.gov/>):

- **Cori**
    - 08/17/22 07:00-20:00 PDT, Scheduled Maintenance
    - 09/21/22 07:00-20:00 PDT, Scheduled Maintenance
    - 10/19/22 07:00-20:00 PDT, Scheduled Maintenance
    - 11/16/22 07:00-20:00 PDT, Scheduled Maintenance
- **Perlmutter**
    - **08/01/22 08:00-17:00 PDT, Scheduled Maintenance**
    - 08/08/22 08:00-17:00 PDT, Scheduled Maintenance
    - 08/15/22 08:00-17:00 PDT, Scheduled Maintenance
- **Community File System (CFS)**
    - 08/17/22 07:00-10:00 PDT, Unavailable

**Key Dates**

        August 2022            September 2022           October 2022
    Su Mo Tu We Th Fr Sa    Su Mo Tu We Th Fr Sa    Su Mo Tu We Th Fr Sa
        1  2  3  4  5  6                 1  2  3                       1
     7  8  9 10 11 12 13     4  5  6  7  8  9 10     2  3  4  5  6  7  8
    14 15 16 17 18 19 20    11 12 13 14 15 16 17     9 10 11 12 13 14 15
    21 22 23 24 25 26 27    18 19 20 21 22 23 24    16 17 18 19 20 21 22
    28 29 30 31             25 26 27 28 29 30       23 24 25 26 27 28 29
                                                    30 31

- **August 2-4, 2022**: [OpenACC Summit](#oaccsummit)
- **August 5, 2022**:
    - [PAW-ATM Submissions Due](#pawatm)
    - [WACCPD Submissions Due](#waccpd)
- **August 10, 2022**:
    - [SpinUp Workshop](#spinup)
    - [IDEAS-ECP Monthly Webinar](#ecpwebinar)
- **August 11, 2022**: [OpenMP Offload Basics Training](#omptrain)
- **August 15, 2022**:
    - [Interactive/Urgent HPC Workshop Submissions Due](#urgentcompute)
    - [ERCAP Allocation Requests Open](#ercap)
- **August 15-16, 2022**: [BEAST Workshop](#beast)
- **August 17, 2022**: Cori Monthly Maintenance
- **August 17 & 19, 2022**: [R for HPC Training](#r4hpc)
- **August 25, 2022**: [E4S at NERSC Training](https://www.nersc.gov/users/training/events/e4s-at-nersc-2022/)
- **August 25-26, 2022**: [AI for Science Bootcamp](https://www.nersc.gov/users/training/events/nersc-ai-for-science-bootcamp-august-25-26-2022/)
- **August 26, 2022**: [Submissions due for SuperCheck-SC22](https://supercheck.lbl.gov/call-for-participation)
- **August 31, 2022**: [HDF5 Workshop](#hdf5)
- **September 1, 2022**: [OpenMP Offload Optimization Training](#omptrain)
- **September 5, 2022**:
    - Labor Day Holiday (No Consulting or Account Support)
    - [Early Career HPC Award Nominations Due](#ecaward)
- **September 21, 2022**: Cori Monthly Maintenance
- **October 3, 2022**: ERCAP Requests Due
- **October 5, 2022**: SpinUp Workshop
- **October 12-13, 2022**: [Confab22 (ESnet User Meeting)](#confab22)
- **October 19, 2022**: Cori Monthly Maintenance

([back to top](#top))

---

## NERSC Status <a name="section1"/></a> ##

### NERSC Operations Continue with Minimal Changes <a name="curtailment"/></a>

Berkeley Lab, where NERSC is located, continues its operations with pandemic-related protocols in place. NERSC remains in operation, with the majority of NERSC staff continuing to work remotely, and staff essential to operations onsite. We do not expect any disruptions to our operations in the foreseeable future.

You can continue to expect regular online consulting and account support as well as schedulable online appointments. Trainings continue to be held online. Regular maintenances on the systems continue to be performed while minimizing onsite staff presence, which could result in longer downtimes than would occur under normal circumstances.

Because onsite staffing remains minimal, we request that you continue to refrain from calling NERSC Operations except to report urgent system issues.

For **current NERSC systems status**, please see the online [MOTD](https://www.nersc.gov/live-status/motd/) and [current known issues](https://docs.nersc.gov/current/) webpages.

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### (NEW/UPDATED) OpenACC & Hackathons 2022 Summit this week, August 2-4 <a name="oaccsummit"/></a>

You are invited to attend the [2022 OpenACC and Hackathons Summit](https://bit.ly/3OwoXjF). Scheduled August 2 to 4, 2022, this FREE digital event showcases leading research accelerated by the OpenACC directives-based programming model or optimized through the Open Hackathons program. This year's Summit features two keynote speakers, tutorials covering HPC and AI topics, and invited talks from preeminent scientists from across national laboratories, research institutions, universities, and supercomputing centers worldwide.

By attending this event, you will:

- Delve into relevant OpenACC use cases from astrophysics to materials modeling;
- Explore recent research advanced through efforts at our global hackathons;
- Gain critical skills attending our HPC and AI tutorials; and
- Discover the latest tools and resources for computational scientists.

### (NEW/UPDATED) Call for Papers: PAW-ATM at SC22, deadline extended till this Friday Aug 5 <a name="pawatm"/></a>

The Parallel Applications Workshop, Alternatives to MPI+X (PAW-ATM) is seeking full-length papers and talk abstracts for the workshop to be held Monday, November 14, 2022, in conjunction with SC22.

PAW-ATM is a forum for discussing HPC applications written in alternatives to MPI+X. These alternatives include new languages (e.g., Chapel, Regent, XcalableMP), frameworks for large-scale data science (e.g., Arkouda, Dask, Spark), and extensions to existing languages (e.g., Charm++, COMPSs, Fortran, Legion, UPC++).

Topics of interest include, but are not limited to:

- Novel application development using high-level parallel programming languages and frameworks.
- Examples that demonstrate performance, compiler optimization, error checking, and reduced software complexity.
- Applications from artificial intelligence, data analytics, bioinformatics, and other novel areas.
- Performance evaluation of applications developed using alternatives to MPI+X and comparisons to standard programming models.
- Novel algorithms enabled by high-level parallel abstractions.
- Experience with the use of new compilers and runtime environments.
- Libraries using or supporting alternatives to MPI+X.
- Benefits of hardware abstraction and data locality on algorithm implementation.

Submissions close **August 5, 2022**. For more information and to submit a paper, please visit: <https://go.lbl.gov/paw-atm>.

### (NEW/UPDATED) Submissions due this Friday: Workshop on Accelerator Programming Using Directives <a name="waccpd"/></a>

The Call for Papers for the 9th Workshop on Accelerator Programming Using Directives (WACCPD) is now open! The workshop aims to showcase all aspects of accelerator programming for heterogeneous systems, such as innovative high-level language or library approaches, lessons learned while using directives or other portable approaches to migrate scientific legacy code to modern systems, and compilation and runtime scheduling techniques.

The paper submission deadline is August 5, 2022 AOE. For more information, see <https://www.waccpd.org/>.

([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### (NEW/UPDATED) Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is now available to all users with an active NERSC account. This includes both the phase 1 (GPU-based) and phase 2 (CPU-only) nodes. There is currently **no** charge to run jobs on Perlmutter, but it is not yet a production system and is subject to unannounced and unexpected periods of unavailability.

NERSC is currently in the process of upgrading Perlmutter's GPU nodes from the Slingshot10 interconnect to the Slingshot11 interconnect; see the item below for details. After today's maintenance, jobs will run on Slingshot11 nodes by default.

See <https://docs.nersc.gov/current/#perlmutter> for a list of current known issues and <https://docs.nersc.gov/jobs/policy/#qos-limits-and-charges> for tables of the QOS's available on Perlmutter. This newsletter section will be updated regularly with the latest Perlmutter status.

### (NEW/UPDATED) Perlmutter Network Upgrade Is In Progress: Slingshot11 Is Now The Default <a name="pmss10ss11"/></a>

Over the last few weeks we have been upgrading Perlmutter's GPU nodes from the Slingshot10 interconnect to the Slingshot11 interconnect. This involves updating both the hardware and the software on the nodes: each node gets its Network Interface Cards replaced with an upgraded version, plus an update to the associated software. Following the maintenance on Monday, August 1, all GPU queues (regular, interactive, debug, etc.) will steer jobs to GPU nodes with the Slingshot11 interconnect instead of Slingshot10.

Recompilation is NOT required to use the nodes with the Slingshot11 interconnect, and you will not need to change your batch scripts for your jobs to run automatically on the Slingshot11 GPU nodes. All jobs that were already in the queue before the start of the maintenance will run on the type of nodes the queue targeted at the time of submission (e.g., a job submitted to the regular queue last week will run on Slingshot10 nodes, even if it does not start until next week). The CPU-only nodes have always had Slingshot11; no change is required for CPU-only jobs.

If you wish to continue using the Slingshot10 GPU nodes, you will need to request them explicitly by adding `_ss10` to the queue name, e.g., `-C gpu -q regular_ss10`. Jobs cannot use a mixture of Slingshot10 and Slingshot11 nodes.
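
As a rough illustration of the queue selection described above, a minimal batch script might look like the sketch below; the node count, GPU count, time limit, account, and executable name are placeholders, and only the `-C gpu` constraint and the `_ss10` queue suffix come from the instructions in this item.

```bash
#!/bin/bash
#SBATCH -C gpu
#SBATCH -q regular            # after August 1, "regular" routes to Slingshot11 GPU nodes
#SBATCH -N 2                  # placeholder node count
#SBATCH --gpus-per-node=4
#SBATCH -t 01:00:00
#SBATCH -A mXXXX              # placeholder project account

srun ./my_gpu_app             # placeholder executable

# To stay on Slingshot10 GPU nodes instead, request the _ss10 queue variant,
# i.e. replace "-q regular" with "-q regular_ss10".
```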

The login nodes will also be upgraded to the Slingshot11 interconnect. No user impact is expected.

With Slingshot11, GPU nodes are upgraded from 2x12.5 GB/s NICs to 4x25 GB/s NICs. The additional bandwidth and NIC resources may bring an immediate benefit for communication-bound codes. Due to a known software issue, machine learning codes may initially be slower when run at scale on these nodes; we are waiting for a libfabric-optimized fix from the vendors to address this.

We strongly encourage you to try running your jobs on the Slingshot11 GPU nodes to help us identify and rectify any issues. Please report any issue you encounter via a [ticket](https://help.nersc.gov).

### (NEW/UPDATED) Perlmutter Maintenance Schedule for July & August <a name="pmmaint"/></a>

Perlmutter is undergoing full-machine maintenances on approximately a weekly basis to remove nodes with the Slingshot10 interconnect so they can be refurbished, and to add back previously removed nodes that have been upgraded to the Slingshot11 interconnect. This process will continue for several weeks, with the goal of completing it by the end of August.

([back to top](#top))

---

## Updates at NERSC <a name="section4"/></a> ##

### ERCAP Allocation Requests Open August 15 <a name="ercap"/></a>

The [Call for Proposals](https://www.nersc.gov/users/accounts/allocations/2023-call-for-proposals-to-use-nersc-resources) for the 2023 Energy Research Computing Allocations Process (ERCAP) has been announced. We will begin accepting requests on August 15, and the call will close on October 3, 2022.

The majority of NERSC resources and compute time are allocated through the ERCAP process. Proposals are reviewed and awarded by Office of Science allocation managers and implemented by NERSC. While NERSC accepts proposals at any time during the year, applicants are encouraged to submit proposals by the above deadline in order to receive full consideration for Allocation Year 2023 (AY2023).

All current projects (including Exploratory, Education, and Director's Reserve, but excluding ALCC) must be renewed for 2023 if you wish to continue using NERSC. New projects for AY2023 should be submitted at this time as well.

In 2023, NERSC will be allocating compute time based on the capacity of Perlmutter GPU and Perlmutter CPU nodes only. You will need to request time on each resource individually; hours are not transferable between the two architectures.

NERSC allocations experts will provide an overview of the process at the next NUG meeting on August 18, and will hold virtual office hours on the following dates: August 25, September 15, September 29, and October 3 (the ERCAP due date).

In addition, you can always submit a question or help request through the NERSC help portal (<https://help.nersc.gov>) or to <allocations@nersc.gov>.

### (NEW/UPDATED) Cori to be retired after AY2022 <a name="coriretire"/></a>

Cori was installed in 2015, and after more than six years may be NERSC's longest-lived system. Perlmutter, whose CPU partition provides computing power equivalent to all of Cori, is expected to be fully operational for AY2023. We plan to retire Cori at the end of this allocation year; all AY2023 allocations will be based on Perlmutter's capacity.

AY2023 allocations will be the topic of the next NUG Monthly Meeting, on August 18. We may delay Cori's retirement if unexpected issues arise with Perlmutter. If you have any concerns or questions, please let us know via <https://help.nersc.gov>.

### (NEW/UPDATED) E4S 22.05 Now Available on Perlmutter <a name="e4s"/></a>

Version 22.05 of the Extreme-Scale Scientific Software Stack (E4S) is now available on Perlmutter, with a total of 442 Spack packages built with GNU compiler version 11.2.0 and Cray compiler version 13.0.2. The current release does not include any packages built with the NVIDIA HPC software stack; once that programming environment is more thoroughly tested, we will install a subset of packages with it.

E4S is a curated collection of software libraries for high-performance computing that incorporates mutually compatible library versions. It includes programming models such as MPI, development tools such as HPCToolkit, mathematical libraries such as PETSc and Trilinos, and data/visualization libraries such as HDF5 and ParaView. At NERSC we build a subset of the collection on top of vendor-provided optimized libraries such as cray-mpich. The 22.05 E4S software stack will be supported through July 31, 2023. For more details about E4S/22.05, please see <https://docs.nersc.gov/applications/e4s/perlmutter/22.05/>.

Some updates to Spack and E4S may affect current E4S users and advanced developers who want to upgrade to the latest version. In particular, changes to the structure of Spack's `config.yaml` and `modules.yaml` mean that the `module_roots` configuration has migrated to `modules.yaml`. For more details, the E4S/22.05 site configuration files can be found at <https://software.nersc.gov/NERSC/spack-infrastructure/-/tree/main/spack_site_scope/perlmutter/22.05>.
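
For readers who want to try the new stack, the sketch below shows one plausible way to access it from a Perlmutter login node. The module name `e4s/22.05` is assumed from the documentation page linked above (confirm with `module avail e4s`), and the package passed to `spack load` is only an example.

```bash
# See which E4S modulefiles are installed, then load the 22.05 stack
# (module name assumed; verify it with "module avail e4s").
module avail e4s
module load e4s/22.05

# The E4S stack is managed with Spack, so once the module is loaded,
# Spack commands can be used to inspect and use the pre-built packages.
spack find            # list the installed packages
spack load hdf5       # example: add one package to the current shell environment
```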

### Counters Used by Performance Tools Re-Enabled <a name="vtune"/></a>

Many of the counters used by performance tools were temporarily disabled on NERSC resources to mitigate a security vulnerability. This impacted users of many performance tools, including VTune, CrayPat, PAPI, Nsight Systems (CPU metrics only), HPCToolkit, MAP, and more. At the last maintenance, a patch was applied on Cori that resolved the vulnerability, and the counters have now been re-enabled.

([back to top](#top))

---

## Calls for Participation <a name="section5"/></a> ##

### (NEW/UPDATED) Nominations Open for NERSC Early Career HPC Achievement Awards <a name="ecaward"/></a>

Nominations are now open for the 2021-2022 NERSC Early Career High-Performance Computing Achievement Awards! Awards will be made in the following two categories:

- **High Impact Scientific Achievement**: recognizing work that has had, or is expected to have, an exceptional impact on scientific understanding, engineering design for scientific facilities, or a broad societal problem.
- **Innovative Use of High-Performance Computing**: recognizing researchers who have used NERSC's resources in innovative ways to solve a significant problem or have provided a new methodology with the potential to have a large scientific impact. Examples might include application of HPC to a new scientific field or combining computing, data, networking, and edge services to do something entirely new in a domain where HPC is already established.

Eligibility: the awards recognize scientific research that used NERSC resources during allocation years 2021 and/or 2022. Any NERSC user who - at the time of their cited accomplishments - was a student or had received their degree after January 1, 2016 is eligible.

For more information and to nominate someone, please see the [NERSC HPC Achievement Awards](https://www.nersc.gov/science/nersc-hpc-achievement-awards/) page. Nominations are due by Monday, September 5, 2022 at 11:59 PM PDT.

### Call for Submissions: Combined International Workshop on Interactive / Urgent Supercomputing at SC22 <a name="urgentcompute"/></a>

This year the Interactive HPC and Urgent HPC workshops have joined forces to host the [Combined International Workshop on Interactive / Urgent Supercomputing](https://www.urgenthpc.com/) at SC22. The workshop will be held Monday, November 14, bringing together stakeholders, researchers, and practitioners from across the HPC community who are working in, or interested in, the fields of interactive HPC and the use of supercomputing for urgent decision-making. Success stories, case studies, and best practices will be shared across the two themes with a goal of enhancing the communities' activities and identifying new opportunities for collaboration.

If you, as a NERSC user, are interested in contributing to the workshop yourself, you can do so by submitting a paper in one of two categories:

- Full research paper: describing novel research, up to 10 pages.
- Hot topic paper: less mature or late-breaking results, up to 6 pages.

Authors of accepted papers will be invited to speak at the workshop. Important dates for submission:

- Submission deadline: 15th August 2022 (AoE)
- Author notification: 9th September 2022
- Camera ready: 30th September 2022 (AoE)
- Workshop: Monday morning, 14th November 2022

### Registration is Open for Confab22 ESnet User Meeting, October 12-13! <a name="confab22"/></a>

Registration is open for Confab22 -- ESnet's first annual user meeting! The event will take place from October 12-13 at the Berkeley Marriott Residence Inn (as well as online, for remote attendees). Learn more or register at: <https://go.lbl.gov/confab22>.

([back to top](#top))

---

## Upcoming Training Events <a name="section6"/></a> ##

### Announcing Beyond-DFT Electrochemistry with Accelerated & Solvated Techniques (BEAST) Workshop, August 15-16 <a name="beast"/></a>

The First Annual Beyond-DFT Electrochemistry with Accelerated and Solvated Techniques (BEAST) Workshop will be held virtually on August 15 and 16, 2022. This workshop is not sponsored by or affiliated with NERSC, but it will use NERSC resources.

The BEAST workshop will include hands-on user sessions on simulation methods best suited for studying electrocatalysis. These include grand-canonical DFT (GC-DFT) with the latest solvation methods implemented in the JDFTx computational package, beyond-DFT random phase approximation (RPA) calculations in the BerkeleyGW package, and a preview of our GC-DFT electrocatalysis database, BEAST DB.

The target participants for the Workshop are graduate students, postdocs, and researchers who are interested in learning about or sharpening their skills on ab initio calculations of electrocatalytic systems, including the effects of solvation, self-consistent applied potentials, and beyond-DFT exchange-correlation effects.

For more information and to register, please see <http://beast-echem.org/workshops/2022/>.

### IDEAS-ECP Webinar on "Effective Strategies for Writing Proposal Work Plans for Research Software", August 10 <a name="ecpwebinar"/></a>

The next webinar in the [Best Practices for HPC Software Developers](http://ideas-productivity.org/events/hpc-best-practices-webinars/) series is entitled "Effective Strategies for Writing Proposal Work Plans for Research Software", and will take place **Wednesday, August 10, at 10:00 am Pacific time.**

In this webinar, Chase Million (Million Concepts) will discuss how to develop a clear, plausible work plan to persuade review panels that the project objectives can be achieved and that the requested resources are reasonable and sufficient for doing so. This includes pre-proposal software project scoping, requirements elicitation, and estimation methods; vision and scope, concept of operations, and requirements documents; work breakdown structures; requirements/task matrices; and Gantt charts. Strategies for maximizing the impact of these artifacts within a research proposal will be discussed, with suggestions for further reading.

There is no cost to attend, but registration is required. Please register [here](https://www.exascaleproject.org/event/strategies4proposalwriting/).

### Two-Part OpenMP Offload Training, August 11 & September 1 <a name="omptrain"/></a>

In collaboration with OLCF, NERSC is offering a two-part training on OpenMP offload. The first session, entitled "Basics of OpenMP Offload", will be held on Thursday, August 11, from 10 am to 12:30 pm (Pacific time). This training will provide a general overview of the OpenMP programming model and cover the basics of using OpenMP directives to offload work to GPUs. Hands-on exercises will follow the lectures.

The second session, entitled "Optimization and Data Movement", will be held on Thursday, September 1, and will be presented by OLCF and NERSC staff. It will cover optimization strategies and show how efficient data movement and a better understanding of the available hierarchy of parallelism can lead to improved performance. Hands-on exercises will follow the lectures.

For more information and to register, please see <https://www.nersc.gov/users/training/events/introduction-to-openmp-offload-aug-sep-2022/>.
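
If you would like to experiment before the training, the hedged sketch below shows one plausible way to build and run a small OpenMP offload code on a Perlmutter GPU node. It assumes the NVIDIA programming environment (PrgEnv-nvidia); the compiler flags, source file name, and account are illustrative assumptions rather than a NERSC-prescribed recipe.

```bash
# Compile a C source file containing "#pragma omp target" regions with the
# Cray compiler wrapper, assuming PrgEnv-nvidia is loaded.
cc -mp=gpu -gpu=cc80 -o offload_demo offload_demo.c    # placeholder source file

# Run on a single Perlmutter GPU in the interactive QOS (account is a placeholder).
salloc -N 1 -C gpu -G 1 -q interactive -t 00:30:00 -A mXXXX
srun -n 1 ./offload_demo
```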

### Using R on HPC Clusters Training, August 17 & 19 <a name="r4hpc"/></a>

This OLCF-hosted webinar tutorial helps users learn a basic workflow for using R on an HPC cluster. The tutorial will focus on parallel computing as a means to speed up R scripts on a cluster computer. Many packages in R offer some form of parallel computing, yet they rely on a much smaller set of underlying approaches: multithreading in compiled code, the Unix fork, and MPI. The tutorial will take a narrow path, focusing on packages that directly engage the underlying approaches yet are easy to use at a high level.

This workshop is targeted at current users of OLCF, CADES, ALCF, and NERSC. Users who do not already have accounts on those systems are welcome to attend the lectures but will not be able to participate in all of the hands-on activities.

Please find more information on the NERSC event page at <https://www.nersc.gov/users/training/events/using-r-on-hpc-clusters-webinar-aug-17-2022/>.

### (NEW/UPDATED) HDF5 Workshop, Aug 31, 2022 <a name="hdf5"/></a>

This workshop, presented by ALCF and open to NERSC users, is geared toward achieving HDF5 performance on the ALCF Polaris system, whose architecture is similar to that of NERSC's Perlmutter. HDF5 is a data model, file format, and I/O library that has become a de facto standard for HPC applications to achieve scalable I/O and to store and manage big data from computer modeling.

The workshop will give an overview of the Polaris parallel file system and its possible effects on HDF5 performance, and will provide a summary of tools useful for performance investigations. It will use examples from well-known codes and use cases from HPC science applications in hands-on demonstrations. It will discuss HDF5 tuning techniques such as collective metadata I/O, data aggregation, parallel compression, and other HDF5 tuning parameters and features. Finally, the workshop will allow for a review of attendees' HDF5 I/O implementations targeting the Polaris system.

Please find more information on the NERSC event page at <https://www.nersc.gov/users/training/events/hdf5-workshop/>.

### (NEW/UPDATED) Nsight Systems and Nsight Compute Profiling Workshop, August 31, 2022 <a name="nsight"/></a>

This workshop, hosted by OLCF and presented by NVIDIA, is targeted at current users of OLCF, NERSC, and ALCF. The workshop will introduce the NVIDIA Nsight Systems and Nsight Compute performance analysis tools, which are designed to visualize an application's algorithms, help you identify the largest opportunities to optimize, and tune to scale efficiently across any quantity or size of CPUs and GPUs. The workshop will include presentations, demos on Summit, and hands-on exercises on Summit, Perlmutter, and Polaris.

Please find more information on the NERSC event page at <https://www.nersc.gov/users/training/events/nsight-systems-and-nsight-compute-profiling-workshop-aug2022/>.
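
For those who want to try the tools before the workshop, a minimal, hedged example of collecting profiles on a GPU node might look like the following; the application name and `srun` options are placeholders, and the commands assume `nsys` and `ncu` are already on your PATH.

```bash
# Whole-application timeline with Nsight Systems (writes my_app_profile.nsys-rep).
srun -n 1 nsys profile -o my_app_profile ./my_app

# Per-kernel analysis with Nsight Compute (writes my_app_kernels.ncu-rep).
srun -n 1 ncu -o my_app_kernels ./my_app
```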

([back to top](#top))

---

## NERSC News <a name="section7"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- **NEW** [Storage Systems Developer](http://m.rfer.us/LBLtZl5Eo): Help develop code to modernize the software for the High Performance Storage System (HPSS) using Agile Software Development and DevOps techniques.
- [Scientific Data Architect](http://m.rfer.us/LBL7BZ58O): Support a high-performing data and AI software stack for NERSC users, and collaborate on multidisciplinary, cross-institution scientific projects with scientists and instruments from around the world.
- [HPC Architecture and Performance Engineer](http://m.rfer.us/LBL1rb56n): Contribute to NERSC's understanding of future systems (compute, storage, and more) by evaluating their efficacy across leading-edge DOE Office of Science application codes.
- [Technical and User Support Engineer](http://m.rfer.us/LBLPYs4pz): Assist users with account setup, login issues, project membership, and other requests.
- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.
- [Cyber Security Engineer](http://m.rfer.us/LBLa_B4hg): Join the team to help protect NERSC resources from malicious and unauthorized activity.
- [NESAP for Data Postdoctoral Fellow](http://m.rfer.us/LBLXEt4g5): Collaborate with computational and domain scientists to enable extreme-scale scientific data analysis on NERSC's Perlmutter supercomputer.
- [Machine Learning Postdoctoral Fellow](http://m.rfer.us/LBL2sf4cR): Collaborate with computational and domain scientists to enable machine learning at scale on NERSC's Perlmutter supercomputer.
- [HPC Performance Engineer](http://m.rfer.us/LBLsGT43z): Join a multidisciplinary team of computational and domain scientists to speed up scientific codes on cutting-edge computing architectures.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
