# NERSC Weekly Email, Week of September 25, 2023<a name="top"></a> #

## Contents ##

## [Summary of Upcoming Events and Key Dates](#section1) ##

- [Scheduled Outages](#outages)
- [Key Dates](#dates)

## [This Week's Events and Deadlines](#section2) ##

- [(NEW/UPDATED) NERSC Globus HPSS service upgrade to Globus Connect Service v5, September 26 (TOMORROW)](#globushpss)
- [Join Us Tomorrow through Thursday at the NERSC User Group Annual Meeting](#nugreg)
- [Questions about ERCAP? Attend ERCAP Office Hours!](#ercapoh)
- [New conda module and updated Python modules on Perlmutter](#modules)
- [Join OpenMP Offload Training Series, this Friday & next Friday](#ompoffload)
- [ERCAP Allocations Requests Due Monday!](#ercap)
- [NESAP for Workflows Proposals Due Monday!](#nesap4wf)

## [Perlmutter](#section3) ##

- [Perlmutter Machine Status](#perlmutter)
- [Use Perlmutter GPUs with 50% Charging Discount through September!](#pmgpudiscount)
- [Need Help with Perlmutter GPU Nodes? Virtual Office Hours Available Weekly through October 11](#pmgpuoh)

## [Updates at NERSC](#section4) ##

- [(NEW/UPDATED) Storage System Move Impacting Many Non-Compute Services on October 4](#isgmove)
- [NERSC Globus Sharing Service Upgrade to Globus Connect Service v5, October 5](#globusupgrade)
- [NEWT API Deprecated October 1, Queue and Command Functions Removed](#newt)

## [NERSC User Community](#section5) ##

- [Got a Tip or Trick to Share with Other Users? Post It in Slack or Add It to NERSC's Documentation!](#tipsntricks)
- [Submit a Science Highlight Today!](#scihigh)

## [Calls for Submissions](#section6) ##

- [Applications for the 2024 BSSw Fellowship Program Due Friday!](#bssw)

## [Training Events](#section7) ##

- [Register for Wednesday's Tutorials Held During NUG Annual Meeting; Meeting Non-Attendees Welcome](#nugtutorials)
- [Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts October 18!](#spinup)
- [Training on the RAJA C++ Portability Suite, October 10](#raja)
- [Join NERSC's GPUs for Science Day, October 12, in-person in Berkeley!](#gpu4sci)
- [Introduction to GPUs and HIP: HIP Training Series, Next Session Next Monday](#hipseries)
- [AI For Scientific Computing Bootcamp, October 18-20](#ai4scicomp)
- [OLCF AI for Science at Scale Training on October 12](#ai4sciolcf)
- [IDEAS-ECP Webinar on "Taking HACC into the Exascale Era: New Code Capabilities, and Challenges" October 11](#ecpwebinar)

## [NERSC News](#section8) ##

- [Come Work for NERSC!](#careers)
- [About this Email](#about)

([back to top](#top))

---

## Summary of Upcoming Events and Key Dates <a name="section1"/></a> ##

### Scheduled Outages <a name="outages"/></a>

(See <https://www.nersc.gov/live-status/motd/> for more info):

- **Perlmutter**
  - 09/28/23 06:00-10:00 PDT, Scheduled Maintenance: The system will be entirely unavailable.
- **Globus**
  - 09/26/23 09:30-17:00 PDT, Scheduled Maintenance: The NERSC HPSS Globus endpoint will be unavailable as we upgrade from GCSv4 to GCSv5. This includes all user-created shared Globus endpoints. During the outage all transfers that are active, including paused transfers, will be terminated by the Globus Transfer service. Users with impacted transfers will be notified that their jobs were canceled by Globus. NOTE: Due to this upgrade, `globus-url-copy` and `uberftp` will no longer be supported and will cease to function.
  - 10/05/23 09:30-17:00 PDT, Scheduled Maintenance: The NERSC SHARE Globus endpoint will be unavailable as we upgrade from GCSv4 to GCSv5.
    This includes all user-created shared Globus endpoints. During the outage all transfers that are active, including paused transfers, will be terminated by the Globus Transfer service. Users with impacted transfers will be notified that their jobs were canceled by Globus.
- **HPSS Archive (User)**
  - 10/04/23 09:00-13:00 PDT, Scheduled Maintenance: Some retrievals may be delayed during library maintenance.
- **Jupyter**
  - 10/04/23 07:00-19:00 PDT, Unavailable: Jupyter will be unavailable while an underlying storage system is relocated within the data center.
- **Spin**
  - 10/04/23 07:00-19:00 PDT, Unavailable: Spin will be unavailable while an underlying storage system is relocated within the data center.
- **Iris**
  - 10/04/23 07:00-19:00 PDT, Unavailable: Iris will be unavailable while an underlying storage system is relocated within the data center.
- **NoMachine**
  - 10/04/23 07:00-19:00 PDT, Unavailable: NoMachine will be unavailable while an underlying storage system is relocated within the data center.
- **ssh-proxy**
  - 10/04/23 07:00-19:00 PDT, Unavailable: SSH Proxy will be unavailable while an underlying storage system is relocated within the data center. Regular SSH authentication not using the SSH Proxy will be unaffected.
- **Authentication Services**
  - 10/04/23 07:00-19:00 PDT, Unavailable: Web-based federated authentication (login from linked external accounts at other DOE facilities) will be unavailable while an underlying storage system is relocated within the data center. Authentication with your NERSC password and one-time password will be unaffected.
- **Superfacility API**
  - 10/04/23 07:00-19:00 PDT, Unavailable: The API will be unavailable while an underlying storage system is relocated within the data center.
- **Science Databases**
  - 10/04/23 07:00-19:00 PDT, Unavailable: MongoDB (mongodb05-09) and MySQL (nerscdb04) services will be unavailable while an underlying storage system is relocated within the data center.

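
If your workflows still rely on `globus-url-copy` or `uberftp`, the Globus CLI is one possible replacement for scripted transfers, and it can also show whether you have transfers in flight that would be terminated during the Globus outages above. The sketch below is illustrative only: it assumes the Globus CLI is installed and that you have already run `globus login`, and the endpoint UUIDs and paths are placeholders.

```bash
# List transfer tasks that are still in flight and would be terminated
# if they are running during the outage window.
globus task list --filter-status ACTIVE

# Submit a transfer with the Globus CLI instead of globus-url-copy/uberftp.
# SRC_EP and DST_EP are placeholder endpoint UUIDs; find yours with
# "globus endpoint search <name>".
SRC_EP=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
DST_EP=ffffffff-1111-2222-3333-444444444444
globus transfer "$SRC_EP:/path/to/source/file" "$DST_EP:/path/to/dest/file" \
    --label "example transfer"
```

Running `globus task list` again after the maintenance window is one way to confirm which transfers need to be resubmitted.
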
### Key Dates <a name="dates"/></a>

       September 2023           October 2023           November 2023
    Su Mo Tu We Th Fr Sa    Su Mo Tu We Th Fr Sa    Su Mo Tu We Th Fr Sa
                    1  2     1  2  3  4  5  6  7              1  2  3  4
     3  4  5  6  7  8  9     8  9 10 11 12 13 14     5  6  7  8  9 10 11
    10 11 12 13 14 15 16    15 16 17 18 19 20 21    12 13 14 15 16 17 18
    17 18 19 20 21 22 23    22 23 24 25 26 27 28    19 20 21 22 23 24 25
    24 25 26 27 28 29 30    29 30 31                26 27 28 29 30

#### This Week

- **September 26, 2023**: [NERSC Globus HPSS Service Upgrade](#globushpss)
- **September 26-28, 2023**: [NUG Annual Meeting](#nugreg)
- **September 28, 2023**: [ERCAP Office Hours](#ercapoh)
- **September 29, 2023**:
  - [BSSw Fellowship Application Due Date](#bssw)
  - [OpenMP Offload Training: Basics of Offload](#ompoffload)
- **October 1, 2023**: [NEWT Deprecation](#newt)
- **October 2, 2023**:
  - [ERCAP Office Hours](#ercapoh)
  - [ERCAP Submissions Due](#ercap)
  - [NESAP for Workflows Proposals Due](#nesap4wf)

#### Next Week

- **October 4, 2023**: [Many Non-Compute Services Unavailable](#isgmove)
- **October 4-5, 2023**: Open Accelerated Computing Summit
- **October 5, 2023**:
  - [Perlmutter GPU Virtual Office Hours](#pmgpuoh)
  - [Globus Guest Collection Upgrade](#globusupgrade)
- **October 6, 2023**: [OpenMP Offload Training: Optimization & Data Movement](#ompoffload)

#### Future

- **October 10, 2023**: [RAJA Training](#raja)
- **October 11, 2023**:
  - [Perlmutter GPU Virtual Office Hours](#pmgpuoh)
  - [IDEAS-ECP Monthly Webinar](#ecpwebinar)
- **October 12, 2023**:
  - [GPUs for Science Day](#gpu4sci)
  - [OLCF AI for Science at Scale Training](#ai4sciolcf)
- **October 16-18, 2023**: Confab23 Conference
- **October 18, 2023**: [SpinUp Workshop](#spinup)
- **October 18-20, 2023**: [AI for Scientific Computing Bootcamp](#ai4scicomp)
- **November 23-24, 2023**: Thanksgiving Holiday (No Consulting or Account Support)

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### (NEW/UPDATED) NERSC Globus HPSS service upgrade to Globus Connect Service v5, September 26 (TOMORROW) <a name="globushpss"/></a>

NERSC will be upgrading the Globus endpoint that handles HPSS data transfers into and out of NERSC (NERSC HPSS) to Globus Connect Service v5 on Tuesday, September 26 (tomorrow). This migration should be transparent to users, as the UUID for the endpoint will be migrated over automatically. However, it will require an outage while the migration is taking place.

**During the outage all transfers that are active, including paused transfers, will be terminated by the Globus Transfer service.** Users with impacted transfers will be notified that their jobs were canceled by Globus.

### Join Us Tomorrow through Thursday at the NERSC User Group Annual Meeting <a name="nugreg"/></a>

NERSC and the NUG Executive Committee have organized an outstanding annual meeting, which will be held tomorrow through Thursday, September 26-28, in Berkeley, California. In-person registration is closed, but it is still possible to register for online attendance.

The event includes remarks from DOE allocation managers on their allocation priorities and perspectives; tutorials on the Superfacility API and Quantum Computing (which require additional pre-registration) and on writing an effective ERCAP proposal; highlights from the past year and plans for the future (including NERSC-10, the next NERSC supercomputer); an invited talk on parallelizing simulation for neutrino detectors; and contributed presentations from NERSC users.
For more information and to register, please see the [NUG Meeting Website](https://sites.google.com/lbl.gov/nug-2023/home).

### Questions about ERCAP? Attend ERCAP Office Hours! <a name="ercapoh"/></a>

Are you working on your ERCAP proposal and have a question? Do you want to get started but don't know where to begin? If so, consider talking to an expert at ERCAP Office Hours, which will be held on the following dates, from 9 am to noon and 1-4 pm (Pacific time):

- This Thursday, September 28
- Monday, October 2 (ERCAP due date)

### New conda module and updated Python modules on Perlmutter <a name="modules"/></a>

We’ve added a standalone conda module to Perlmutter:

    module load conda

We’re also working on updating the default Python module to provide the latest version of Python. You can use the new module today with:

    module load python/3.11

We encourage you to try it out now. It will become the new default during Wednesday's Perlmutter maintenance. If you notice issues or have questions, please [submit a ticket](https://help.nersc.gov).

### Join OpenMP Offload Training Series, this Friday & next Friday <a name="ompoffload"/></a>

OpenMP Offload is a highly recommended programming model supported by multiple compilers on Perlmutter GPUs. The OpenMP Offload training series, collaboratively offered by OLCF and NERSC, is also part of the [Performance Portability training series](https://www.olcf.ornl.gov/performance-portability-training-series/).

[Part 1](https://www.nersc.gov/openmp-offload-2023-training-part1-basics-of-offload/) on September 29 will provide a general overview of the OpenMP programming model and cover the basics of using OpenMP directives to offload work to GPUs. Details of using various compilers for OpenMP Offload on Perlmutter and Frontier will be presented. The hands-on sessions will be performed on OLCF Frontier and NERSC Perlmutter.

[Part 2](https://www.nersc.gov/openmp-offload-training-part2-optimization-data-movement-oct2023/) on October 6 will cover optimization strategies and show how efficient data movement and a better understanding of the hierarchy of parallelism available can lead to improved performance. We will also cover best practices for OpenMP Offload. The hands-on sessions will be performed on OLCF Frontier and NERSC Perlmutter.

For more information and to register for each part, please refer to the event web pages linked above.

### ERCAP Allocations Requests Due Monday! <a name="ercap"/></a>

The [Call for Proposals](https://www.nersc.gov/users/accounts/allocations/2024-call-for-proposals-to-use-nersc-resources/) for the 2024 Energy Research Computing Allocations Process (ERCAP) has been announced. Requests are being accepted **through this coming Monday, October 2, 2023.**

The majority of NERSC resources and compute time are allocated through the ERCAP process. Proposals are reviewed and awarded by Office of Science allocation managers and implemented by NERSC. While NERSC accepts proposals at any time during the year, applicants are encouraged to submit proposals by the above deadline in order to receive full consideration for Allocation Year 2024 (AY2024).

All current projects (including Exploratory, Education, and Director's Reserve, but excluding ALCC) must be renewed for 2024 if you wish to continue using NERSC. New projects for AY2024 should be submitted at this time as well.

In 2024, NERSC will allocate compute time based on the capacity of Perlmutter GPU and Perlmutter CPU nodes only.
You will need to request time on each resource individually; hours are not transferable between the two architectures.

**Join us at the [NERSC User Group Annual Meeting](https://sites.google.com/lbl.gov/nug-2023) for a session on writing an effective ERCAP proposal, tomorrow, Tuesday, September 26, at 1:30 pm (Pacific time).**

### NESAP for Workflows Proposals Due Monday! <a name="nesap4wf"/></a>

NERSC is now accepting applications from NERSC projects for the NERSC Science Acceleration Program (NESAP) for Workflows. Chosen science teams will work with NERSC for one year to prepare for and better utilize advanced workflow capabilities, such as hardware acceleration, reconfigurable storage, advanced scheduling, and integration with edge services. These project goals are in alignment with the DOE's Integrated Research Infrastructure (IRI) initiative.

Workflows include, but are not limited to, the following areas:

- High-performance simulation and modeling workflows
- High-performance AI (HPAI) workflows
- Cross-facility workflows
- Hybrid HPC-HPAI-HPDA (high-performance data analysis) workflows
- Scientific data lifecycle workflows
- External event-triggered and API-driven workflows

Accepted teams will be partnered with resources at NERSC and NERSC's technology vendors. For more information and to apply, please see the [Call for Proposals](https://www.nersc.gov/research-and-development/nesap/workflows). Proposals are due **Monday, October 2, 2023**.

([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is available to all users with an active NERSC account. Some helpful NERSC pages for Perlmutter users:

* [Perlmutter queue information](https://docs.nersc.gov/jobs/policy/#qos-limits-and-charge)
* [Timeline of major changes](https://docs.nersc.gov/systems/perlmutter/timeline/)
* [Current known issues](https://docs.nersc.gov/current/#perlmutter)

This section of the newsletter will be updated regularly with the latest Perlmutter status.

### Use Perlmutter GPUs with 50% Charging Discount through September! <a name="pmgpudiscount"/></a>

Now is a great time to run jobs at NERSC and avoid the end-of-the-year crunch, when queues are extremely long and job turnaround is slow. Using your time now benefits the entire NERSC community and spreads demand more evenly throughout the year, so to encourage usage now, we are discounting all jobs run on the Perlmutter GPU nodes by 50% through the end of September. Any job (or portion of a job) that runs before midnight (Pacific time) at the start of October 1 will be charged only half the usual rate; e.g., a 3-hour job on 7 nodes, which would normally incur a charge of 21 GPU node-hours, will be charged only 10.5 GPU node-hours.

### Need Help with Perlmutter GPU Nodes? Virtual Office Hours Available Weekly through October 11 <a name="pmgpuoh"/></a>

Are there roadblocks preventing you from getting started on Perlmutter’s GPU nodes? Are you interested in using the GPU nodes but concerned about using your allocation inefficiently? Did you get poor performance before and need some help making sure your job script is set up correctly? Do you want to try out a new GPU-enabled code but aren’t sure where to start?

We are offering additional help to users through [Perlmutter GPU Virtual Office Hours](https://www.nersc.gov/perlmutter-gpu-office-hours-sept/), which will be offered from 9-11 am (Pacific time) on the following dates: October 5 and October 11.
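
If you just need a starting point before office hours, the sketch below shows the general shape of a Perlmutter GPU batch script. It is illustrative only: the project name, time limit, and executable are placeholders, and current queue names, node counts, and charging rules are described in the NERSC documentation.

```bash
#!/bin/bash
#SBATCH -A mXXXX_g            # placeholder project; GPU allocations typically use a _g suffix
#SBATCH -C gpu                # request GPU nodes
#SBATCH -q regular            # queue/QOS; see the queue policy page for options
#SBATCH -N 1                  # number of nodes
#SBATCH --gpus-per-node=4     # Perlmutter GPU nodes have 4 NVIDIA A100s
#SBATCH -t 00:30:00           # walltime (hh:mm:ss)

# ./my_gpu_app is a placeholder for your GPU-enabled executable
srun ./my_gpu_app
```

A script like this would be submitted with `sbatch` and monitored with `squeue --me`.
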
Drop in anytime during the period with your questions on all things Perlmutter GPU: compiling, job scripts, job profiling, and more!

([back to top](#top))

---

## Updates at NERSC <a name="section4"/></a> ##

### (NEW/UPDATED) Storage System Move Impacting Many Non-Compute Services on October 4 <a name="isgmove"/></a>

On October 4, a storage system upon which many NERSC auxiliary services depend will be relocated within the NERSC machine room. An extensive migration of equipment to a better location has been ongoing for several months; this final stage is the only one for which we do not have a mitigation, because it involves the relocation of a file system. Due to this move, many NERSC services will be disrupted from 7 am to 7 pm (Pacific time) next Wednesday.

Services that will be completely unavailable include the following:

- **Jupyter:** `jupyter.nersc.gov` will be unavailable. Users will not be able to log in or run jobs. Any sessions still running at 7 am next Wednesday will be terminated.
- **Spin:** Spin workflows and services will be unavailable.
- **Iris:** Iris will be unavailable. User jobs will continue to accrue charges, and balances will update when Iris returns to service.
- **NoMachine:** Users will not be able to use NoMachine (NX) for the duration of the outage.
- **Science Databases:** MongoDB (mongodb05-09) and MySQL (nerscdb04) services will be unavailable.
- **Superfacility API:** The API will not be available or accessible.
- **SSH Proxy:** The [SSH proxy](https://docs.nersc.gov/connect/mfa/#sshproxy), which enables users to create a limited-time key that can be used to access NERSC via SSH, will be unavailable.

Services that will still be available as usual or with a minor workaround include the following:

- **Perlmutter:** No maintenance or disruptions are planned for Perlmutter on that day. Users will be able to log in using the standard ssh protocol, entering their password and one-time password when prompted.
- **Data Transfer Nodes:** Similar to Perlmutter, users will be able to log in using the standard ssh protocol with their password and one-time password.
- **HPSS Archive (User):** The HPSS archive has an unrelated maintenance on the same day, from 9 am to 1 pm (Pacific time), during which some file retrievals may be delayed. Otherwise, the HPSS archive can be accessed as usual.
- **NERSC Help Portal:** The NERSC help portal will be accessible, but to log in you must use your NERSC password and one-time password, not a federated account (e.g., for Berkeley Lab or other national lab staff who have federated their NERSC and Lab identities).
- **NERSC Web Services:** Other than Jupyter, Spin, and Iris, any other web property for which a login is required will be accessible, using only your NERSC password and one-time password, not a federated account.

### NERSC Globus Sharing Service Upgrade to Globus Connect Service v5, October 5 <a name="globusupgrade"/></a>

NERSC is upgrading its Globus service for the endpoint that handles our Globus Guest Collections (also known as Globus Sharing) to Globus Connect Service v5. This will give users access to many new features, such as HTTPS download and superior load balancing for transfers across clusters. The UUID for the endpoint and all Guest Collection information will be automatically migrated over, so after the maintenance you will not need to change anything to access your Globus Guest Collection. This upgrade will require an outage while the migration is taking place.
**During the outage all transfers that are active, including paused transfers, will be terminated by the Globus Transfer service.** Users with impacted transfers will be notified that their jobs were canceled by Globus.

The Globus Guest Collection endpoint migration is planned for October 5, 2023, from 9:30 to 17:00 PDT. We will migrate the Globus Collaboration account at a later date.

### NEWT API Deprecated October 1, Queue and Command Functions Removed <a name="newt"/></a>

NERSC's legacy web API, NEWT, will be deprecated beginning October 1. User scripts calling REST endpoints to NERSC resources should now use the Superfacility API instead, which should serve as a full replacement for all functionality previously handled by NEWT. For information and documentation about the Superfacility API, please point your browser to [https://api.nersc.gov](https://api.nersc.gov).

The timing of this deprecation relates to the replacement of our older data transfer nodes (DTNs), which will cause NEWT to lose some functionality. The replacement nodes will not serve calls to the `/newt/queue` or `/newt/command` endpoints. If this change presents a concern for you, please file a ticket at [help.nersc.gov](https://help.nersc.gov) as soon as possible and mention the NEWT deprecation.

The [NEWT API](https://ieeexplore.ieee.org/document/5676125) was created by Shreyas Cholia, David Skinner, and Joshua Boverhof. It was deployed in 2010, giving users their first taste of web API access to all of NERSC's major compute and data systems. One of the first web APIs for scientific computing, NEWT spurred the growth of a new generation of web-based tools and laid the groundwork for its next-generation replacement, the Superfacility API.
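
If you have scripts that polled NEWT over HTTP, a call against the Superfacility API has the same general shape. The sketch below is a rough illustration only: the path and version prefix shown are assumptions, so confirm the current routes in the documentation at [https://api.nersc.gov](https://api.nersc.gov), and note that most endpoints beyond system status require an OAuth2 client and access token.

```bash
# Query overall system status from the Superfacility API. The exact path
# (including the /api/v1.2 version prefix) is an assumption; confirm it in
# the documentation at https://api.nersc.gov before scripting against it.
# Endpoints beyond status require an OAuth2 access token.
curl -s https://api.nersc.gov/api/v1.2/status | python3 -m json.tool
```
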
([back to top](#top))

---

## NERSC User Community <a name="section5"/></a> ##

### Got a Tip or Trick to Share with Other Users? Post It in Slack or Add It to NERSC's Documentation! <a name="tipsntricks"/></a>

Do you have a handy tip or trick that you think other NERSC users might benefit from? Something that makes your use of NERSC resources more efficient, or saves you from needing to remember some obscure command? Share it with your fellow NERSC users in one of the following ways:

- A new `#tips-and-tricks` channel on the [NERSC Users Slack](https://www.nersc.gov/users/NUG/nersc-users-slack/) (login required; you may also join the NERSC Users Slack at this link) has been started and provides a daily tip or trick. Feel free to share yours there!
- Add it to the NERSC documentation: NERSC's technical documentation pages are in a [Gitlab repository](https://gitlab.com/NERSC/nersc.gitlab.io/), and we welcome merge requests and issues.
- Speak up during the "Today-I-Learned" portion of the [NUG Monthly Meeting](https://www.nersc.gov/users/NUG/teleconferences/).

### Submit a Science Highlight Today! <a name="scihigh"/></a>

Doing cool science at NERSC? NERSC is looking for cool science and code-development success stories to highlight to NERSC users, DOE program managers, and the broader scientific community in Science Highlights. If you're interested in having your work considered for a featured Science Highlight, please let us know via our [highlight form](https://docs.google.com/forms/d/e/1FAIpQLScP4bRCtcde43nqUx4Z_sz780G9HsXtpecQ_qIPKvGafDVVKQ/viewform).

([back to top](#top))

---

## Calls for Submissions <a name="section6"/></a> ##

### Applications for the 2024 BSSw Fellowship Program Due Friday! <a name="bssw"/></a>

The Better Scientific Software (BSSw) Fellowship Program provides recognition and funding for leaders and advocates of high-quality scientific software who foster practices, processes, and tools to improve scientific software productivity and sustainability. Each 2024 BSSw Fellow will receive up to $25,000 for an activity that promotes better scientific software. Activities can include organizing a workshop, preparing a tutorial, or creating content to engage the scientific software community, including broadening participation or promoting diversity, equity, and inclusion.

**Application deadline:** Applications for the 2024 BSSw Fellowship Program are being accepted through **this Friday, September 29, 2023**. Those interested in applying were encouraged to participate in an informational fellowship webinar and Q&A session on Tuesday, September 12, 2023, 2:00-3:00 pm EDT. Please subscribe to the BSSw mailing list (<https://bssw.io/pages/receive-our-email-digest>) for teleconference details and other updates on the program. Details of the program can be found at <https://bssw.io/fellowship> and in a new [blog post](https://bssw.io/blog_posts/applications-open-for-the-2024-bssw-fellowship-program).

**Requirements:** Applicants at all career stages are encouraged to apply, ranging from students through early-career, mid-career, and senior professionals, especially those from underrepresented groups, including people who are Black or African American, Hispanic/Latinx, American Indian, Alaska Native, Native Hawaiian, Pacific Islanders, women, persons with disabilities, and first-generation scholars.

The BSSw Fellowship is sponsored by the U.S. Department of Energy and the National Science Foundation. Please share this announcement with interested colleagues.

([back to top](#top))

---

## Training Events <a name="section7"/></a> ##

### Register for Wednesday's Tutorials Held During NUG Annual Meeting; Meeting Non-Attendees Welcome <a name="nugtutorials"/></a>

As part of the NERSC User Group Annual Meeting, NERSC is holding two tutorials of interest to the general community. Anyone is welcome to attend the tutorials, even if they are not attending the three-day meeting, but **all attendees must register for their chosen tutorial(s)**.

The [Superfacility API](https://www.nersc.gov/sfapi-nug-2023/) training, on Wednesday from 9 am to noon (Pacific time), will introduce users to a new way to interact with NERSC systems programmatically via a REST interface. The API can be used to enable complex scientific workflows to monitor and run jobs on Perlmutter, build interactive apps, and more!

The [Quantum Computing Training](https://www.nersc.gov/xanadu-nug-2023/), on Wednesday from 1:30-5 pm (Pacific time), will introduce the PennyLane open-source quantum computing software from Xanadu.ai.

### Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts October 18! <a name="spinup"/></a>

Spin is a service platform at NERSC based on Docker container technology. It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. New large-memory nodes have been added to the platform, increasing its potential for new memory-constrained applications.

To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin: <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p>.
Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! Applications for sessions that begin Wednesday, October 18, [are now open](https://www.nersc.gov/users/training/spin). SpinUp is hands-on and interactive, so space is limited. Participants will attend an instructional session and a hack-a-thon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques. Local and remote participants are welcome.

If you can't make these upcoming sessions, never fear! More sessions are being planned for later in the year.

See a video of Spin in action at the [Spin documentation](https://docs.nersc.gov/services/spin/) page.

### Training on the RAJA C++ Portability Suite, October 10 <a name="raja"/></a>

As part of our Performance Portability training series, NERSC and OLCF are hosting a training session on the RAJA C++ Portability Suite on October 10. RAJA is a C++ library offering software abstractions that enable architecture and programming-model portability for HPC application codes. RAJA offers portable, parallel loop execution by providing building blocks that extend the generally accepted parallel-for idiom.

This is a one-part session that will allow participants to learn from and interact directly with RAJA team members. The session will give a general overview of RAJA and cover the basics of using RAJA abstractions to offload work to GPUs. Throughout the session, a variety of quiz-like puzzles will be used to engage the audience and reinforce concepts.

For more information and to register, please see <https://www.nersc.gov/performance-portability-series-raja-oct2023/>.

### Join NERSC's GPUs for Science Day, October 12, in-person in Berkeley! <a name="gpu4sci"/></a>

GPUs have been instrumental in ground-breaking innovations, from scientific simulations to generative AI. This year, NERSC is proud to host the annual GPUs for Science event **in person**. Our goal is to celebrate recent GPU-enabled scientific achievements and inspire future roadmaps.

The day will start with an introduction to three DOE compute facilities (NERSC, ALCF, and OLCF), followed by a series of talks on GPU-accelerated scientific applications and emerging software programming models. The day will wrap up with a panel of leading industry experts from NVIDIA, AMD, and Intel discussing their vision for upcoming GPU ecosystems. With increasing diversity in GPU hardware and software, users interested in performance and portability across DOE supercomputers are highly encouraged to join.

The event is **open to everyone** interested in learning about the exciting science in action, and registration is free. For additional details, please visit <https://www.nersc.gov/gpus-for-science-day-2023>. Please **register by September 28, 2023**, by completing the [registration form](https://docs.google.com/forms/d/e/1FAIpQLSfAxN2XN6IcodcjMAwgz2eRYH9sShx5aVX3Eo0n05wkbSCtsw/viewform).

### Introduction to GPUs and HIP: HIP Training Series, Next Session Next Monday <a name="hipseries"/></a>

HIP® is a parallel computing platform and programming model that extends C++ to allow developers to program GPUs with a familiar programming language and simple APIs. AMD will present a multi-part HIP training series intended to help new and existing GPU programmers understand the main concepts of the HIP programming model. Each part will include a 1-hour presentation and example exercises.
The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture on OLCF Frontier and NERSC Perlmutter.

**Part 1** of the HIP training series was held on Monday, August 14, on the topic of Introduction to HIP and GPU. This session introduced the basics of programming GPUs and the syntax and API of HIP to transfer data to and from GPUs, write GPU kernels, and manage GPU thread groups. (See the [session webpage](https://www.nersc.gov/intro-gpus-and-hip-part-1-of-hip-training-series-aug-14-2023/) for slides, exercises, and a recording of the training.)

**[Part 2](https://www.nersc.gov/porting-applications-to-hip-part2-hip-training-series-aug2023/)** was held on August 28, on the topic of Porting Applications to GPU. Porting applications from CUDA to HIP can make an application portable across both Nvidia and AMD GPU hardware. This talk reviewed the AMD porting tools and how to use them. Portability for other GPU programming languages was also briefly discussed.

**[Part 3](https://www.nersc.gov/amd-memory-hierarchy-part3-hip-training-series-sep2023/)** was held on Monday, September 18, on the topic of the AMD Memory Hierarchy. This talk explored how an understanding of GPU memory systems, and in particular the AMD GPU memory system, can be used to improve application performance. This understanding is crucial to designing code that performs well on AMD GPUs.

**[Part 4](https://www.nersc.gov/gpu-profiling-performance-timelines-rocprof-omnitrace-part4-hip-series-oct2023/)** will be held on Monday, October 2, on the topic of GPU Profiling (Performance Timelines: Rocprof and Omnitrace), two tools for collecting application performance timeline data on AMD GPUs.

**[Part 5](https://www.nersc.gov/gpu-profiling-performance-profile-omniperf-part5-hip-series-oct2023/)** will be held on Monday, October 16, on the topic of GPU Profiling (Performance Profile: Omniperf), a tool for collecting application performance profiles on AMD GPUs.

For more information and to register, please see the event webpages linked above.

### AI For Scientific Computing Bootcamp, October 18-20 <a name="ai4scicomp"/></a>

NERSC, in collaboration with the OpenACC organization and NVIDIA, is hosting a virtual, three-day AI for Scientific Computing Bootcamp from Wednesday, October 18, through Friday, October 20, 2023. The bootcamp will provide a step-by-step overview of the fundamentals of deep neural networks and walk attendees through the hands-on experience of building and improving deep learning models for applications related to scientific computing and physical systems defined by differential equations.

For more information and to register, please see <https://www.nersc.gov/ai-for-scientific-computing-oct-2023>.

### OLCF AI for Science at Scale Training on October 12 <a name="ai4sciolcf"/></a>

OLCF is holding a series of training events on the topic of AI for Science at Scale, which are open to NERSC users. The second training in the series, scheduled for October 12, will focus on how to train a model on multiple GPUs, covering model-parallelism techniques and frameworks such as DeepSpeed, FSDP, and Megatron.

For more information and to register, please see <https://www.nersc.gov/olcf-ai-training-series-ai-for-science-at-scale-part-2>.
### IDEAS-ECP Webinar on "Taking HACC into the Exascale Era: New Code Capabilities, and Challenges" October 11 <a name="ecpwebinar"/></a>

The next webinar in the [Best Practices for HPC Software Developers](http://ideas-productivity.org/events/hpc-best-practices-webinars/) series is entitled "Taking HACC into the Exascale Era: New Code Capabilities, and Challenges" and will take place Wednesday, October 11, at 10:00 am Pacific time.

This webinar, presented by Esteban Rangel (Argonne National Laboratory), will discuss lessons learned by the HACC (Hardware/Hybrid Accelerated Cosmology Code) development team in contending with novel programming models while preparing HACC for multiple exascale systems, simultaneously adding new capabilities and focusing on performance.

There is no cost to attend, but registration is required. Please register [at the event webpage](https://www.exascaleproject.org/event/hacc/).

([back to top](#top))

---

## NERSC News <a name="section8"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- [Data Department Head](http://phxc1b.rfer.us/LBLyns7OB): Provide vision and guidance for NERSC's Data Department, which includes the Data & AI Services, Data Science Engagement, and Storage Systems Groups.
- [Data Science Workflows Architect](http://phxc1b.rfer.us/LBLl4072c): Work closely with application teams to help optimize their workflows on NERSC systems.
- [NERSC Programming Environments and Models Group Lead](http://phxc1b.rfer.us/LBLnjA6ze): Lead the team that designs, deploys, and maintains the environments and software runtimes that enable current and future science on NERSC systems.
- [Cyber Security Group Lead](http://phxc1b.rfer.us/LBL7FQ6xy): Lead the team of security engineers responsible for the security architecture and infrastructure of NERSC.
- [HPC Systems Software Engineer](http://m.rfer.us/LBLSQh6ZH): Help architect, deploy, configure, and operate NERSC's large-scale, leading-edge HPC systems.
- [HPC Storage Systems Developer](http://m.rfer.us/LBLdsq5XB): Use your systems programming skills to develop the High Performance Storage System (HPSS) and supporting software.
- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov