

[Users] NERSC Weekly Email, Week of May 22, 2023

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2023-05-22 16:15:22

# NERSC Weekly Email, Week of May 22, 2023<a name="top"></a> #

## Contents ##

## [Summary of Upcoming Events and Key Dates](#section1) ##

- [Scheduled Outages](#outages)
- [Key Dates](#dates)

## [This Week's Events and Deadlines](#section2) ##

- [Join Us for the NUG Meeting, this Thursday 25 May, 11am PT](#nug)
- [Join Us Friday for Cori to Perlmutter Office Hours](#c2pohthisweek)
- [Proposals Due Wednesday for NERSC GPU Hackathon, July 12th & 19th-21st](#hackathon)
- [Julia for High-Performance Computing Tutorial on Wednesday](#julia)
- [ALCF Webinar on Porting to Aurora on Wednesday](#aurora)
- [International Workshop on OpenMP (IWOMP): Call for Papers Extended to Friday!](#iwomp)
- [Memorial Day Holiday Next Monday, May 29; No Consulting or Account Support](#memday)

## [Perlmutter](#section3) ##

- [Perlmutter Machine Status](#perlmutter)
- [Prepare Now for Transitioning to Perlmutter from Cori!](#pmprep)
- [(NEW/UPDATED) Scratch Purging on Perlmutter Begins in June](#pmscratch)

## [Updates at NERSC](#section4) ##

- [Cori Retirement Date Approaching: May 31st at Noon](#coriretireC)

## [Calls for Participation](#section5) ##

- [Your Feedback on LLVM Flang Compiler Sought!](#flangsurvey)

## [NERSC User Community](#section6) ##

- [NERSC Users Slack Channel Guide Now Available](#slackguide)
- [Submit a Science Highlight Today!](#scihigh)

## [Seminars](#section7) ##

- [IDEAS-ECP Webinar on "The OpenSSF Best Practices Badge Program" June 14](#ecpwebinar)

## [Conferences & Workshops](#section8) ##

- [Call for Submissions for US Research Software Engineer Association Conference](#usrse)
- [Call for Participation Open for RSE-eScience-2023 Workshop](#rseesci)
- [Call for Papers: 6th Annual Parallel Applications Workshop, Alternatives To MPI+X (PAW-ATM) at SC23](#pawatm)

## [Training Events](#section9) ##

- [Join NERSC for Cori to Perlmutter Office Hours in May!](#c2poh)
- [Training on Advanced SYCL Techniques & Best Practices, May 30](#sycl)
- [Introduction to NERSC Resources Training, June 8](#intronersc)
- [OLCF AI for Science at Scale Introductory Training on June 15](#ai4sciolcf)
- [Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts June 21!](#spinup)
- [Attention Novice Parallel Programmers: Sign up for June 22 "Crash Course in Supercomputing"](#crashcourse)

## [NERSC News](#section10) ##

- [Come Work for NERSC!](#careers)
- [About this Email](#about)

([back to top](#top))

---

## Summary of Upcoming Events and Key Dates <a name="section1"/></a> ##

### Scheduled Outages <a name="outages"/></a>

(See <https://www.nersc.gov/live-status/motd/> for more info):

- **Cori**
  - 05/31/23 12:00-07/15/23 12:00 PDT, Retired
    - Cori will be removed from service and retired at this time.
- **HPSS Archive (User)**
  - 05/24/23 09:00-13:00 PDT, Scheduled Maintenance
    - Some files may be unavailable to read at this time due to library hardware maintenance. Write operations to the library will be unaffected.
- **LDAP**
  - 06/14/23 10:00-14:00 PDT, Scheduled Maintenance
    - Deleting unused trees from the LDAP environment. Updates to LDAP such as changed passwords will be cached and delayed until the completion of the window.
### Key Dates <a name="dates"/></a>

          May 2023             June 2023             July 2023
    Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
        1  2  3  4  5  6               1  2  3                     1
     7  8  9 10 11 12 13   4  5  6  7  8  9 10   2  3  4  5  6  7  8
    14 15 16 17 18 19 20  11 12 13 14 15 16 17   9 10 11 12 13 14 15
    21 22 23 24 25 26 27  18 19 20 21 22 23 24  16 17 18 19 20 21 22
    28 29 30 31           25 26 27 28 29 30     23 24 25 26 27 28 29
                                                30 31

#### This Week

- **May 24, 2023**:
  - [NERSC GPU Hackathon Application Deadline](#hackathon)
  - [Julia for High-Performance Computing Tutorial](#julia)
  - [ALCF Webinar on Porting to Aurora](#aurora)
- **May 25, 2023**: [NUG Monthly Webinar](#nug)
- **May 26, 2023**:
  - [Cori to Perlmutter Office Hours](#c2pohthisweek)
  - [IWOMP Submission Deadline](#iwomp)

#### Next Week

- **May 29, 2023**: [Memorial Day Holiday](#memday) (No Consulting or Account Support)
- **May 30, 2023**:
  - [Cori to Perlmutter Office Hours](#c2poh)
  - [Advanced SYCL Training](#sycl)
- **May 31, 2023**: [Cori Retirement](#coriretireC)

#### Future

- **June 8, 2023**: [Intro to NERSC Resources](#intronersc)
- **June 14, 2023**: [IDEAS-ECP Monthly Webinar](#ecpwebinar)
- **June 15, 2023**: [AI for Science at Scale](#ai4sciolcf)
- **June 19, 2023**:
  - [US-RSE'23 Poster Abstract Submission Deadline](#usrse)
  - Juneteenth Holiday (No Consulting or Account Support)
- **June 21, 2023**: [SpinUp Workshop](#spinup)
- **June 22, 2023**: [Crash Course in Supercomputing](#crashcourse)
- **June 30, 2023**: [RSE-eScience-2023 Abstract Submission Deadline](#rseesci)
- **July 4, 2023**: Independence Day Holiday (No Consulting or Account Support)
- **July 24, 2023**: [PAW-ATM Submission Deadline](#pawatm)

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### Join Us for the NUG Meeting, this Thursday 25 May, 11am PT <a name="nug"/></a>

The NUG monthly meeting is a forum where NERSC and its users can celebrate successes, discuss difficulties, and learn from each other. Our next meeting is **this Thursday, 25 May, at 11am** (Pacific time), at <https://lbnl.zoom.us/j/285479463>. This week our topic-of-the-day will be **Slurm Magic**.

Our agenda for this month is:

- **Win-of-the-month:** open discussion for attendees to tell of some success you've had -- e.g., getting a paper accepted, solving a problem, or achieving something innovative or high impact using NERSC.
- **Today-I-learned:** open discussion for attendees to point out something that surprised them, or that might be valuable for other users to know.
- **Announcements and CFPs:** upcoming conferences, workshops, or other events that you think might interest or benefit the NERSC user community.
- **Topic-of-the-day:** **Slurm Magic: Tips and Tricks for Improved Scheduling and Application Insight**. NERSC's Charles Lively will go over some useful tips and tricks for gaining more insight into applications for scheduling and execution.
- **Coming up:** Nominations and requests for future topics.

We're especially interested to hear from our users -- what are you using NERSC for, and what are you learning that might be helpful to other NERSC users, and to NERSC?

Please see <https://www.nersc.gov/users/NUG/teleconferences/nug-meeting-may-2023/> for details.

### Join Us Friday for Cori to Perlmutter Office Hours <a name="c2pohthisweek"/></a>

On Friday, May 26, from 10 am to 12 noon (Pacific time), drop into NERSC's Cori to Perlmutter transition virtual office hours with your questions for NERSC experts on migrating your applications and workflows from Cori to Perlmutter.
Can't make it this week? No problem -- one additional session is planned for Tuesday, May 30. For more information, including connection information (login required for Zoom link), please see <https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-office-hours-may-2023/>.

### Proposals Due Wednesday for NERSC GPU Hackathon, July 12th & 19th-21st <a name="hackathon"/></a>

NERSC, in conjunction with NVIDIA and OLCF, will be hosting a GPU Hackathon from July 19th-21st, with an opening day on July 12th, as part of the annual GPU Hackathon Series. This year the hackathon will be virtual; selected code teams will be able to test and develop on Perlmutter, with the option to use an ARM system as well. **The deadline to submit a proposal is 11:59 PM Pacific, this Wednesday, May 24th, 2023. Apply now!**

Hackathons pair teams of developers with mentors to either prepare their applications to run on GPUs or optimize applications that already run on GPUs. This virtual event consists of a kick-off day, where hackers and mentors video-conference to meet, develop their list of hackathon goals, and get set up on the relevant systems. This is followed by a one-week preparation period before the three-day intensive primary event.

If you are interested in more information, or would like to submit a short proposal form, please visit the [GPU Hackathon event page](https://www.openhackathons.org/s/siteevent/a0C5e000005Va7IEAS/se000163) or [NERSC's event page](https://sites.google.com/lbl.gov/july-2023-gpu-hackathon/home). Questions can also be directed to NERSC's host, Hannah Ross (HRoss@lbl.gov).

### Julia for High-Performance Computing Tutorial on Wednesday <a name="julia"/></a>

The Oak Ridge Leadership Computing Facility (OLCF) will host a virtual training event on Julia for high-performance computing on May 24, 2023. NERSC users are invited to participate.

The tutorial introduces participants to the Julia language for high-performance computing applications. Julia, which aims to fill the gap between high performance and high productivity, is a dynamic language built on top of LLVM, with lightweight interoperability with C and Fortran code and a unified ecosystem for data science and reproducibility.

For more information and to register, please see <https://www.nersc.gov/users/training/events/julia-for-high-performance-computing-may-24-2023/>.

### ALCF Webinar on Porting to Aurora on Wednesday <a name="aurora"/></a>

This Wednesday, as part of the ALCF Developer Sessions, Aaron Scheinberg and Esteban Rangel (ALCF) will present a webinar entitled "A Tale of Two Apps: Preparing XGC and HACC to Run on Aurora." The webinar will cover the Aurora porting strategies for two applications, the XGC gyrokinetic plasma physics code and the HACC cosmology code, along with lessons learned and tools that were crucial in porting these applications to Argonne's exascale machine.

For the XGC portion of the talk, Scheinberg will discuss the lessons learned from running on diverse new machines (Polaris, Sunspot, and recently Frontier), the unique challenges of Aurora, and how these inform the team's plans as Aurora becomes available. For the HACC portion, Rangel will cover the tools and development strategies used to port HACC from CUDA to SYCL, the challenges of supporting multiple codebases (CUDA/HIP/SYCL) in HACC, and the optimizations made to improve performance on the Intel Xe GPUs.
For more information and to register, please see <https://www.nersc.gov/users/training/events/preparing-xgc-and-hacc-on-aurora-may2023/>.

### International Workshop on OpenMP (IWOMP): Call for Papers Extended to Friday! <a name="iwomp"/></a>

IWOMP is the annual workshop dedicated to the promotion and advancement of all aspects of parallel programming with OpenMP. IWOMP gathers attendees from academia, DOE labs, international HPC centers, and industry, and papers are published through Springer. IWOMP 2023 will be hosted by Bristol University, Bristol, United Kingdom, and will be co-located with EuroMPI 2023. The submission deadline has been extended to **this Friday, May 26, 2023 (AoE)**.

Topics of interest include but are not limited to the following:

- Accelerated computing and offloading to devices
- Applications (in any domain) that rely on OpenMP
- Data mining and analysis or text processing and OpenMP
- Machine learning and OpenMP
- Memory model
- Memory policies and management
- Performance analysis and modeling
- Performance portability
- Proposed OpenMP extensions
- Runtime environment
- Scientific and numerical computations
- Tasking
- Tools
- Vectorization

For more information, including how to submit, please see <https://www.iwomp.org/call-for-papers/>.

### Memorial Day Holiday Next Monday, May 29; No Consulting or Account Support <a name="memday"/></a>

Consulting and account support will be unavailable next Monday, May 29, due to the Berkeley Lab-observed Memorial Day holiday. Regular consulting and account support will resume the following day.

([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is available to all users with an active NERSC account. Some helpful NERSC pages for Perlmutter users:

* [Perlmutter queue information](https://docs.nersc.gov/jobs/policy/#qos-limits-and-charge)
* [Timeline of major changes](https://docs.nersc.gov/systems/perlmutter/timeline/)
* [Current known issues](https://docs.nersc.gov/current/#perlmutter)

This section of the newsletter will be updated regularly with the latest Perlmutter status.

### Prepare Now for Transitioning to Perlmutter from Cori! <a name="pmprep"/></a>

With Cori scheduled to be retired on May 31, now is a good time to make sure that you are prepared to transition your workflows to Perlmutter. NERSC is here to help -- we have prepared a [Cori to Perlmutter migration webpage](https://docs.nersc.gov/systems/cori/migrate_to_perlmutter/) and provided several recent trainings that will be beneficial to current users looking to transition to Perlmutter, with more events in the works:

- September's [New User Training](https://www.nersc.gov/users/training/events/new-user-training-sept2022/) contained lots of useful information about Perlmutter and how to use it. Slides are available and professionally captioned videos are linked from the training webpage.
- The [GPUs for Science Day](https://www.nersc.gov/users/training/events/gpus-for-science-day-2022-october-25th/) (slides and videos with professional captions available) contained valuable resources for those migrating their applications to Perlmutter GPUs.
- The [Data Day](https://www.nersc.gov/users/training/events/data-day-2022-october-26-27/) event (slides and videos currently available) included content aimed at users who are interested in porting their data workflows to Perlmutter.
- The [Migrating from Cori to Perlmutter](https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-training-dec2022/) training, which took place on December 1, focused on building and running jobs on Perlmutter. The slides and videos with professional captions from this training have been published on the event webpage. A repeat training with minor updates was offered on [March 10](https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-training-march2023/).
- Attend [Cori to Perlmutter Transition Office Hours](#c2poh) this month to get one-on-one help from an expert on transitioning to the new Perlmutter machine.

### (NEW/UPDATED) Scratch Purging on Perlmutter Begins in June <a name="pmscratch"/></a>

The purpose of the scratch file systems attached to large NERSC supercomputing resources like Perlmutter and Cori is to provide temporary storage for user jobs to use while running. A scratch file system is intended less as a place to store files long-term and more as a temporary workspace for data. Accordingly, NERSC has scratch purge policies in place, under which NERSC reserves the right to remove old files from the scratch system to free up space.

Starting at a date to be determined in June, NERSC will begin applying the scratch purge policy to Perlmutter's scratch file system. The precise date the policy goes into effect will be announced at least two weeks beforehand in a standalone email from NERSC. At that point, any files older than 8 weeks may be deleted from the scratch file system.
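If you'd like a head start on spotting files that the policy would cover, the following is a minimal sketch (our illustration, not an official NERSC tool) of the idea: walk a directory tree, defaulting to the `$SCRATCH` environment variable, and list regular files whose modification time is more than 8 weeks old. It assumes a C++17 compiler, e.g. NERSC's `CC` compiler wrapper.

```c++
// purge_scan.cpp -- illustrative sketch (not an official NERSC tool): list
// regular files under a directory whose modification time is older than
// 8 weeks. Build with, e.g.: CC -std=c++17 -o purge_scan purge_scan.cpp
#include <chrono>
#include <cstdlib>
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

int main(int argc, char* argv[]) {
    // Default to $SCRATCH if no directory is given on the command line.
    const char* scratch = std::getenv("SCRATCH");
    const fs::path root = (argc > 1) ? argv[1] : (scratch ? scratch : ".");

    // Files last modified before this cutoff would be purge candidates.
    const auto cutoff =
        fs::file_time_type::clock::now() - std::chrono::hours(24 * 7 * 8);

    std::error_code walk_ec, ent_ec;  // report errors instead of throwing
    for (auto it = fs::recursive_directory_iterator(root, walk_ec);
         !walk_ec && it != fs::recursive_directory_iterator();
         it.increment(walk_ec)) {
        if (it->is_regular_file(ent_ec) && !ent_ec &&
            it->last_write_time(ent_ec) < cutoff && !ent_ec) {
            std::cout << it->path().string() << '\n';
        }
    }
    if (walk_ec) std::cerr << "walk error: " << walk_ec.message() << '\n';
    return 0;
}
```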
For more information on NERSC file systems and scratch purge policies, please see <https://docs.nersc.gov/filesystems/quotas/>.

([back to top](#top))

---

## Updates at NERSC <a name="section4"/></a> ##

### Cori Retirement Date Approaching: May 31st at Noon <a name="coriretireC"/></a>

Cori is scheduled to be **retired in just over one week, next Wednesday, May 31, 2023, at noon PDT.** NERSC system administrators have put in place a reservation that will prevent any jobs from running past 12:00 noon on May 31. Users will still be able to log into the Cori login nodes and access the Cori scratch file system for one week, until Wednesday, June 7, 2023. At that time, the system will be powered down and prepared for removal.

We encourage any users still primarily using Cori to migrate your workflows as soon as possible. This month, NERSC is holding an additional series of [virtual office hours](#c2poh) with a focus on transitioning from Cori to Perlmutter. The purpose of the office hours is to enable users to receive one-on-one assistance from NERSC staff on enabling their codes and workflows on Perlmutter. Typical questions include compiling code, leveraging GPUs, writing scripts, and more.

([back to top](#top))

---

## Calls for Participation <a name="section5"/></a> ##

### Your Feedback on LLVM Flang Compiler Sought! <a name="flangsurvey"/></a>

The Fortran Users of NERSC (FUN) group is gathering information from potential users of LLVM Flang, the new open-source Fortran compiler that is part of the LLVM Project. This information may be used to target new Fortran investigations that best fit the needs of NERSC users.

If you're interested in the LLVM Flang compiler, please take a few minutes to fill out this survey: <https://forms.gle/vz43AwQyLJ7T47Um7>

([back to top](#top))

---

## NERSC User Community <a name="section6"/></a> ##

### NERSC Users Slack Channel Guide Now Available <a name="slackguide"/></a>

Thanks to all who submitted to the NERSC Users Slack Channel Guide. A list of public channels for special interests is now available at <https://www.nersc.gov/users/NUG/nersc-users-slack/> (login required -- you may also join the NERSC Users Slack at this link). If you'd like your group to be added to the list, please contact <NERSC-Community-Managers@lbl.gov> with the name of your channel, at least one contact person, and confirmation that the channel is public.

### Submit a Science Highlight Today! <a name="scihigh"/></a>

Doing cool science at NERSC? NERSC is looking for cool scientific and code-development success stories to highlight to NERSC users, DOE Program Managers, and the broader scientific community in Science Highlights. If you're interested in having your work considered for a featured Science Highlight, please let us know via our [highlight form](https://docs.google.com/forms/d/e/1FAIpQLScP4bRCtcde43nqUx4Z_sz780G9HsXtpecQ_qIPKvGafDVVKQ/viewform).

([back to top](#top))

---

## Seminars <a name="section7"/></a> ##

### IDEAS-ECP Webinar on "The OpenSSF Best Practices Badge Program" June 14 <a name="ecpwebinar"/></a>

The next webinar in the [Best Practices for HPC Software Developers](http://ideas-productivity.org/events/hpc-best-practices-webinars/) series is entitled "The OpenSSF Best Practices Badge Program", and will take place Wednesday, June 14, at 10:00 am Pacific time.

This webinar, presented by Roscoe A. Bartlett (Sandia National Laboratories), will give an overview of the OpenSSF Best Practices Badge Program, which provides resources for the creation, maintenance, and sustainability of robust, high-quality, and secure open-source software. The webinar will also describe how these resources can be used to support advances in software quality and sustainability efforts in CS&E.

There is no cost to attend, but registration is required. Please register [at the event webpage](https://www.exascaleproject.org/event/openssf/).

([back to top](#top))

---

## Conferences & Workshops <a name="section8"/></a> ##

### Call for Submissions for US Research Software Engineer Association Conference <a name="usrse"/></a>

Submissions are now open for the first-ever US Research Software Engineer Association Conference (US-RSE'23), which will be held October 16-18, 2023, in Chicago, Illinois. The theme of the conference is "Software Enabled Discovery and Beyond." Topics of interest include but are not limited to:

- Discovery enabled by software
- Architectures, frameworks, libraries, and technology trends
- Research data management
- Support for scalability and data-driven methods
- Improving the reproducibility of research
- Usability, portals, workflows, and tools
- Sustainability, security, and stability
- Software engineering approaches supporting research
- Community engagement
- Diversity, Equity, and Inclusion for RSEs and in RSEng
- Training and workforce development
- Building an RSE profession

For more information, including how to submit, please see <https://us-rse.org/usrse23/>. Poster abstracts are due June 19.

### Call for Participation Open for RSE-eScience-2023 Workshop <a name="rseesci"/></a>

A workshop on Research Software Engineers (RSEs) with eScience (RSE-eScience-2023) is being held as part of [eScience 2023](https://www.escience-conference.org/2023/). RSEs combine professional software engineering expertise with an intimate understanding of research, and are uniquely placed in the eScience ecosystem to ensure the development of sustainable research environments and reproducible research outputs.
The theme of this workshop is sustainable RSE ecosystems, encompassing both the role of RSEs in sustainable eScience and making the RSE ecosystem itself more sustainable. Prospective participants are encouraged to submit talk abstracts of at most 300 words on topics related to sustainable RSE ecosystems within eScience. Topics of interest include (but are not limited to):

- Experiences as an RSE in eScience
- Struggles between RSEs and domain scientists -- how to find the common ground?
- Different roles in the development of research software
- How to make the eScience and RSE ecosystem more sustainable?
- What can the eScience community do to support their RSEs? How can RSEs develop and progress their careers within the eScience community?
- How to argue for funding to develop and sustain scientific software, and support the RSEs doing the work?
- Examples of RSEs enabling sustainability within the eScience community

For more information, please see <https://us-rse.org/rse-escience-2023/>. Abstract submissions are due June 30.

### Call for Papers: 6th Annual Parallel Applications Workshop, Alternatives To MPI+X (PAW-ATM) at SC23 <a name="pawatm"/></a>

The Parallel Applications Workshop, Alternatives to MPI+X (PAW-ATM) is seeking full-length papers and talk abstracts for the workshop to be held this November in conjunction with SC23.

PAW-ATM is a forum for discussing HPC applications written in alternatives to MPI+X. These alternatives include, but are not limited to, new languages (e.g., Chapel, Regent, XcalableMP), frameworks for large-scale data science (e.g., Arkouda, Dask, Spark), and extensions to existing languages (e.g., Charm++, COMPSs, Fortran, Legion, UPC++).

Topics of interest include, but are not limited to:

- Novel application development using high-level parallel programming languages and frameworks.
- Examples that demonstrate performance, compiler optimization, error checking, and reduced software complexity.
- Applications from artificial intelligence, data analytics, bioinformatics, and other novel areas.
- Performance evaluation of applications developed using alternatives to MPI+X and comparisons to standard programming models.
- Novel algorithms enabled by high-level parallel abstractions.
- Experience with the use of new compilers and runtime environments.
- Libraries using or supporting alternatives to MPI+X.
- Benefits of hardware abstraction and data locality on algorithm implementation.

Submissions close July 24, 2023. For more information and to submit a paper, please visit <https://go.lbl.gov/paw-atm>.

([back to top](#top))

---

## Training Events <a name="section9"/></a> ##

### Join NERSC for Cori to Perlmutter Office Hours in May! <a name="c2poh"/></a>

Buoyed by the success of our previous rounds of office hours, in which 200 users were helped, NERSC has scheduled additional Cori to Perlmutter office hours. Users are invited to drop into our virtual office hours (held on Zoom) to get help from NERSC experts on migrating their applications and workflows to Perlmutter from Cori.

User questions of any kind are welcomed at all sessions, which will be held from 10 am to noon (Pacific time) on the following days:

- Friday, May 26
- Tuesday, May 30 (day before Cori retirement)

For more information, including connection information (login required for Zoom link), please see <https://www.nersc.gov/users/training/events/migrating-from-cori-to-perlmutter-office-hours-may-2023/>.
### Training on Advanced SYCL Techniques & Best Practices, May 30 <a name="sycl"/></a>

NERSC will host a virtual training event on advanced SYCL techniques and best practices on May 30.

The SYCL programming model makes heterogeneous programming in C++ more accessible than ever. SYCL uses modern standard C++, and it's a programming model that lets developers support a wide variety of devices (CPUs, GPUs, FPGAs, and more) from a single code base. The growing popularity of this programming model means that developers are eager to understand how to use all the features of SYCL and how to achieve great performance for their code.

The tutorial assumes existing knowledge of and some experience with using SYCL to develop code for accelerators such as GPUs; video recordings of more introductory SYCL trainings that may help prepare you for this training are available on [this YouTube playlist](https://www.youtube.com/playlist?list=PL20S5EeApOStYsTQ7AFkiIpPzuOBWo0jI).

Concepts covered in this training include strategies for optimizing code, managing data flow, using different memory access patterns, understanding work-group sizes, using vectorization, the importance of ND-ranges, and making the most of the multiple devices available on your architecture.
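For a flavor of two of these concepts -- ND-ranges and explicit work-group sizes -- here is a minimal, self-contained sketch (our illustration, not taken from the training materials) of a vector addition written as a SYCL 2020 `nd_range` kernel. It assumes a SYCL 2020 compiler such as Intel's `icpx -fsycl`.

```c++
// vadd_ndrange.cpp -- illustrative SYCL 2020 sketch (not from the training):
// vector addition as an nd_range kernel with an explicit work-group size.
// Build with a SYCL compiler, e.g.: icpx -fsycl vadd_ndrange.cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;   // global problem size
    constexpr size_t wg = 128;   // work-group (local) size; must divide n here
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // default device selection (a GPU if one is available)
    {
        sycl::buffer<float> A(a.data(), n), B(b.data(), n), C(c.data(), n);
        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only, sycl::no_init);
            // An nd_range pairs the global range with a work-group size.
            h.parallel_for(sycl::nd_range<1>{n, wg},
                           [=](sycl::nd_item<1> it) {
                const size_t i = it.get_global_id(0);
                wc[i] = ra[i] + rb[i];
            });
        });
    }  // buffer destructors copy results back into the host vectors
    std::cout << "c[0] = " << c[0] << " (expect 3)\n";
}
```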
For more information and to register, please see <https://www.nersc.gov/users/training/events/advanced-sycl-techniques-and-best-practices-may2023/>.

### Introduction to NERSC Resources Training, June 8 <a name="intronersc"/></a>

NERSC is offering a training entitled "Introduction to NERSC Resources" on June 8. This training, offered through the 2023 Berkeley Lab Computing Sciences Summer Student program and open to NERSC users, is aimed at novice users of NERSC resources.

Topics covered include: systems overview, connecting to NERSC, software environment, file systems and data management/transfer, and available data analytics software and services. More details on how to compile applications and run jobs on NERSC systems will be presented, including hands-on exercises on Perlmutter. The class will also showcase various online resources that are available on NERSC web pages.

For more information and to register, please see <https://www.nersc.gov/users/training/events/introduction-to-nersc-resources-jun2023/>.

### OLCF AI for Science at Scale Introductory Training on June 15 <a name="ai4sciolcf"/></a>

OLCF is holding a series of training events on the topic of AI for Science at Scale, which are open to NERSC users. The first training in the series is scheduled for June 15 and will provide an introduction to AI/ML/DL principles used for science in an HPC environment. Participants will get the opportunity to apply techniques learned in the session to run hands-on examples on OLCF's Ascent system.

For more information and to register, please see <https://www.nersc.gov/users/training/events/olcf-ai-training-series-ai-for-science-at-scale-introduction-jun2023/>.

### Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts June 21! <a name="spinup"/></a>

Spin is a service platform at NERSC based on Docker container technology. It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. New large-memory nodes have been added to the platform, increasing its potential for memory-constrained applications.

To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin: <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p>.

Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! Applications for sessions that begin on Wednesday, June 21, [are now open](https://www.nersc.gov/users/training/spin).

SpinUp is hands-on and interactive, so space is limited. Participants will attend an instructional session and a hack-a-thon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques. Local and remote participants are welcome.

If you can't make these upcoming sessions, never fear! More sessions are being planned for later in the year. See a video of Spin in action on the [Spin documentation](https://docs.nersc.gov/services/spin/) page.

### Attention Novice Parallel Programmers: Sign up for June 22 "Crash Course in Supercomputing" <a name="crashcourse"/></a>

In collaboration with the Berkeley Lab Computing Sciences Summer Student Program, NERSC is again offering the "Crash Course in Supercomputing" on Thursday, June 22, open to NERSC users as well as Berkeley Lab summer students.

In this course, students will learn to write parallel programs that can be run on a supercomputer. We begin by discussing the concepts of parallelization before introducing MPI and OpenMP, the two leading parallel programming libraries. Hands-on exercises reinforce the concepts learned in the course.
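As a preview of the programming style the course builds toward, here is a minimal hybrid MPI+OpenMP "hello world" (our illustration, not necessarily the course's own exercise), in which MPI launches processes and OpenMP spawns threads within each one. The build and run lines are one plausible invocation on Perlmutter.

```c++
// hybrid_hello.cpp -- illustrative hybrid MPI+OpenMP example (not necessarily
// the course's own exercise). Each MPI rank spawns a team of OpenMP threads.
// Build/run sketch on Perlmutter: CC -fopenmp hybrid_hello.cpp -o hybrid_hello
//                                 srun -n 4 -c 8 ./hybrid_hello
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char** argv) {
    // Request thread support, since OpenMP threads coexist with MPI.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each rank opens a parallel region; every thread prints independently.
    #pragma omp parallel
    {
        std::printf("Hello from thread %d of %d on rank %d of %d\n",
                    omp_get_thread_num(), omp_get_num_threads(),
                    rank, nranks);
    }

    MPI_Finalize();
    return 0;
}
```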
Training accounts will be provided for students who have not yet set up a NERSC account.

For more information and to register, please see <https://www.nersc.gov/users/training/events/crash-course-in-supercomputing-jun2023/>.

([back to top](#top))

---

## NERSC News <a name="section10"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- **NEW** [HPC Systems Software Engineer](http://m.rfer.us/LBLSQh6ZH): Help architect, deploy, configure, and operate NERSC's large-scale, leading-edge HPC systems.
- [Computer Systems Engineer 3](http://m.rfer.us/LBL9_U6Ez): Monitor, administer, and optimize NERSC's storage resources.
- [Site Reliability Engineer](http://m.rfer.us/LBLO7X6Ai): Provide a variety of engineering support services to ensure that NERSC is accessible, reliable, secure, and available to its scientific users.
- [HPC Performance Engineer](http://m.rfer.us/LBLYLg68i): Enable advanced science at scale on NERSC's Perlmutter supercomputer.
- [Network Engineer](http://m.rfer.us/LBLNxI5jz): Engineer and manage the NERSC data-center network to support NERSC's world-class compute and storage systems.
- [HPC Storage Systems Developer](http://m.rfer.us/LBLdsq5XB): Use your systems programming skills to develop the High Performance Storage System (HPSS) and supporting software.
- [HPC Storage Infrastructure Engineer](http://m.rfer.us/LBLqP65X9): Join the team of engineers integrating NERSC's distributed parallel file systems with NERSC's computational and networking infrastructure, troubleshoot performance issues at scale, and develop innovative solutions to optimize operational and user productivity.
- [HPC Storage Systems Analyst](http://m.rfer.us/LBLgDg5VX): Join the team of engineers and programmers supporting HPSS and parallel center-wide systems.
- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.
