
Email Announcement Archive

[Users] NERSC Weekly Email, Week of September 18, 2023

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2023-09-18 15:34:00

# NERSC Weekly Email, Week of September 18, 2023<a name="top"></a> #

## Contents ##

## [Summary of Upcoming Events and Key Dates](#section1) ##

- [Scheduled Outages](#outages)
- [Key Dates](#dates)

## [This Week's Events and Deadlines](#section2) ##

- [NERSC Globus service upgrade to Globus Connect Service v5, September 13 & 19](#globusupgrade)
- [Supercomputing Spotlights: Ulrike Meier Yang, this Wednesday!](#scspotlights)
- [(NEW/UPDATED) MySQL Database Maintenance this Wednesday](#mysql)
- [New conda module and updated Python modules on Perlmutter](#modules)

## [Perlmutter](#section3) ##

- [Perlmutter Machine Status](#perlmutter)
- [Use Perlmutter GPUs with 50% Charging Discount through September!](#pmgpudiscount)
- [Need Help with Perlmutter GPU Nodes? Virtual Office Hours Available Weekly through October 11](#pmgpuoh)

## [Updates at NERSC](#section4) ##

- [(NEW/UPDATED) NEWT API Deprecated October 1, Queue and Command Functions Removed](#newt)
- [Register for the NERSC User Group Annual Meeting, September 26-28](#nugreg)
- [ERCAP Allocations Requests Now Open!](#ercap)
- [Questions about ERCAP? Attend ERCAP Office Hours!](#ercapoh)
- [(NEW/UPDATED) NESAP for Workflows Call for Proposals Now Open!](#nesap4wf)

## [NERSC User Community](#section5) ##

- [Got a Tip or Trick to Share with Other Users? Post It in Slack or Add It to NERSC's Documentation!](#tipsntricks)
- [Submit a Science Highlight Today!](#scihigh)

## [Conferences & Workshops](#section6) ##

- [Registration for Confab23 Now Open!](#confab)

## [Calls for Submissions](#section7) ##

- [Applications Open for the 2024 BSSw Fellowship Program](#bssw)

## [Training Events](#section8) ##

- [Join OpenMP Offload Training Series, September 29 & October 6](#ompoffload)
- [Register for Tutorials Held During NUG Annual Meeting; Meeting Non-Attendees Welcome](#nugtutorials)
- [Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts October 18!](#spinup)
- [Training on the RAJA C++ Portability Suite, October 10](#raja)
- [Join NERSC's GPUs for Science Day, October 12, in-person in Berkeley!](#gpu4sci)
- [Introduction to GPUs and HIP: HIP Training Series, Next Session September 18](#hipseries)
- [AI For Scientific Computing Bootcamp, October 18-20](#ai4scicomp)
- [OLCF AI for Science at Scale Training on October 12](#ai4sciolcf)
- [IDEAS-ECP Webinar on "Taking HACC into the Exascale Era: New Code Capabilities, and Challenges" October 11](#ecpwebinar)

## [NERSC News](#section9) ##

- [Come Work for NERSC!](#careers)
- [About this Email](#about)

([back to top](#top))

---

## Summary of Upcoming Events and Key Dates <a name="section1"/></a> ##

### Scheduled Outages <a name="outages"/></a>

(See <https://www.nersc.gov/live-status/motd/> for more info):

- **Perlmutter**
    - 09/20/23 06:00-10:00 PDT, Scheduled Maintenance - The system will be entirely unavailable.
    - 09/28/23 06:00-10:00 PDT, Scheduled Maintenance - The system will be entirely unavailable.
- **Globus**
    - 09/19/23 09:30-17:00 PDT, Scheduled Maintenance - The NERSC SHARE Globus endpoint will be unavailable as we upgrade from GCSv4 to GCSv5. This includes all user-created shared Globus endpoints. During the outage, all active transfers, including paused transfers, will be terminated by the Globus Transfer service. Users with impacted transfers will be notified that their jobs were cancelled by Globus.
- **HPSS Archive (User)**
    - 10/04/23 09:00-13:00 PDT, Scheduled Maintenance - Some retrievals may be delayed during library maintenance.
- **MyNERSC**
    - 09/20/23 07:30-11:30 PDT, Scheduled Maintenance - A backing database for the MyNERSC and MOTD web services may be intermittently unavailable throughout the window as it undergoes a migration.
- **Spin**
    - 09/20/23 08:00-11:00 PDT, System Degraded - Workloads in Spin will be unavailable briefly (1-2 min) at least once within the window.

### Key Dates <a name="dates"/></a>

       September 2023            October 2023            November 2023
    Su Mo Tu We Th Fr Sa    Su Mo Tu We Th Fr Sa    Su Mo Tu We Th Fr Sa
                    1  2     1  2  3  4  5  6  7              1  2  3  4
     3  4  5  6  7  8  9     8  9 10 11 12 13 14     5  6  7  8  9 10 11
    10 11 12 13 14 15 16    15 16 17 18 19 20 21    12 13 14 15 16 17 18
    17 18 19 20 21 22 23    22 23 24 25 26 27 28    19 20 21 22 23 24 25
    24 25 26 27 28 29 30    29 30 31                26 27 28 29 30

#### This Week

- **September 19, 2023**: [Globus Guest Collection Endpoint Upgrade](#globusupgrade)
- **September 20, 2023**:
    - [Supercomputing Spotlights Seminar](#scspotlights)
    - [MySQL Database Maintenance](#mysql)
    - [Python Default Module Update](#modules)

#### Next Week

- **September 25, 2023**: [Perlmutter GPU Virtual Office Hours](#pmgpuoh)
- **September 26-28, 2023**: [NUG Annual Meeting](#nugreg)
- **September 28, 2023**: [ERCAP Office Hours](#ercapoh)
- **September 29, 2023**:
    - [BSSw Fellowship Application Due Date](#bssw)
    - [OpenMP Offload Training: Basics of Offload](#ompoffload)

#### Future

- **October 1, 2023**: [NEWT Deprecation](#newt)
- **October 2, 2023**:
    - [ERCAP Office Hours](#ercapoh)
    - [ERCAP Submissions Due](#ercap)
    - [NESAP for Workflows Proposals Due](#nesap4wf)
- **October 4-5, 2023**: Open Accelerated Computing Summit
- **October 5, 2023**: [Perlmutter GPU Virtual Office Hours](#pmgpuoh)
- **October 6, 2023**: [OpenMP Offload Training: Optimization & Data Movement](#ompoffload)
- **October 10, 2023**: [RAJA Training](#raja)
- **October 11, 2023**:
    - [Perlmutter GPU Virtual Office Hours](#pmgpuoh)
    - [IDEAS-ECP Monthly Webinar](#ecpwebinar)
- **October 12, 2023**:
    - [GPUs for Science Day](#gpu4sci)
    - [OLCF AI for Science at Scale Training](#ai4sciolcf)
- **October 16-18, 2023**: [Confab23 Conference](#confab)
- **October 18, 2023**: [SpinUp Workshop](#spinup)
- **October 18-20, 2023**: [AI for Scientific Computing Bootcamp](#ai4scicomp)
- **November 23-24, 2023**: Thanksgiving Holiday (No Consulting or Account Support)

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### NERSC Globus service upgrade to Globus Connect Service v5, September 13 & 19 <a name="globusupgrade"/></a>

NERSC is upgrading to Globus Connect Service v5 the endpoint that handles the bulk of data transfers into and out of NERSC (NERSC DTN), as well as the endpoint used for serving Globus Guest Collections (also known as Globus Sharing). The upgrade gives users access to many new features, such as HTTPS downloads and better load balancing for transfers across clusters.

The migration should be transparent to users, as the endpoint UUIDs and all Guest Collection information will be migrated over automatically. **However, it requires an outage while the migration is taking place.** During the outage, all active transfers, including paused transfers, will be terminated by the Globus Transfer service. Users with impacted transfers will be notified that their jobs were canceled by Globus.

The NERSC DTN migration took place September 13, 2023, and the **Globus Guest Collection** endpoint migration takes place **tomorrow**, September 19, 2023, from 9:30 to 17:00 PDT. We will migrate the Globus Collaboration account endpoints at a later date.
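
If one of your transfers is terminated during the migration window, it will need to be resubmitted once the endpoint is back. The minimal sketch below shows one way to do that, assuming a recent `globus-sdk` (v3) Python package and an already-authenticated `TransferClient` (the OAuth2 login flow is omitted); the collection UUIDs and paths are placeholders, not real NERSC values.

```python
import globus_sdk

# Placeholder UUIDs -- substitute the real source/destination collection IDs
# (e.g., the migrated NERSC DTN collection) from the Globus web app.
SRC_COLLECTION = "SOURCE-COLLECTION-UUID"
DST_COLLECTION = "DESTINATION-COLLECTION-UUID"


def resubmit(tc: globus_sdk.TransferClient) -> str:
    """Resubmit a recursive directory transfer and return its task ID."""
    tdata = globus_sdk.TransferData(
        source_endpoint=SRC_COLLECTION,
        destination_endpoint=DST_COLLECTION,
        label="resubmitted after GCSv5 migration",
        sync_level="checksum",  # only re-send files whose checksums differ
    )
    tdata.add_item("/path/on/source/", "/path/on/destination/", recursive=True)
    task = tc.submit_transfer(tdata)
    return task["task_id"]
```

Using `sync_level="checksum"` means files that already arrived before the outage are verified and skipped rather than re-transferred.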

### Supercomputing Spotlights: Ulrike Meier Yang, this Wednesday! <a name="scspotlights"/></a>

**Supercomputing Spotlights** is a new webinar series featuring short presentations that highlight the impact and successes of high-performance computing (HPC) throughout our world. Presentations, emphasizing achievements and opportunities in HPC, are intended for the broad international community, especially students and newcomers to the field.

Join us for the next presentation, this Wednesday, September 20, 7:00-7:40 am (Pacific time), consisting of a 30-minute talk plus 10 minutes for questions. Ulrike Meier Yang (Lawrence Livermore National Laboratory) will present "xSDK: an Ecosystem of Interoperable, Independently Developed Math Libraries."

**Abstract:** Emerging extreme-scale architectures provide developers of application codes—including multiphysics modeling and the coupling of simulations and data analytics—unprecedented resources for larger simulations achieving more accurate solutions than ever before. Achieving high performance on these new heterogeneous architectures requires substantial expertise and knowledge. Meeting these challenges in a timely fashion and making the best use of these capabilities requires a variety of mathematical libraries that are developed by diverse, independent teams throughout the HPC community. It is not sufficient for these libraries individually to deliver high performance; they also need to work well when built and used in combination within applications. The extreme-scale scientific software development kit (xSDK) provides infrastructure for and interoperability of a collection of more than twenty complementary numerical libraries to support rapid and efficient development of high-quality applications. This presentation will summarize the elements that are needed to make the xSDK an effective ecosystem of interoperable math libraries that can support large-scale application codes. We will also discuss efforts to provide performance portability and sustainability, including xSDK testing strategies.

**Bio:** Ulrike Meier Yang leads the Mathematical Algorithms and Computing group in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory (LLNL). She leads the xSDK4ECP (Extreme-scale Scientific Software Development Kit) project for the U.S. Department of Energy's Exascale Computing Project. She is a member of the Scalable Linear Solvers and hypre projects, and she is the Linear Solvers Topical Area Lead in the SciDAC FASTMath Institute. Her research interests are numerical algorithms, particularly iterative linear system solvers and algebraic multigrid methods, high-performance computing, parallel algorithms, performance evaluation, and scientific software design. She serves on the SIAM Board of Trustees and the editorial board of the SIAM Journal on Matrix Analysis and Applications.

Prior to joining LLNL in 1998, she was a staff member in the Center for Supercomputing Research and Development at the University of Illinois at Urbana-Champaign (1985-1995) and in the Central Institute of Applied Mathematics at the Research Centre Jülich, Germany (1983-1985). She earned her Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign in 1995 and a Diploma in mathematics at the Ruhr-University Bochum, Germany, in 1983.

Participation is free, but [registration](https://siam.zoom.us/webinar/register/WN_XhIpF5qdShi9mpU5CkPktw) is required.

### (NEW/UPDATED) MySQL Database Maintenance this Wednesday <a name="mysql"/></a>

There will be maintenance on the servers that provide MySQL database services this Wednesday, September 20. From 7:30 to 11:30 am (Pacific time) on Wednesday, MySQL user databases will be intermittently inaccessible.

### New conda module and updated Python modules on Perlmutter <a name="modules"/></a>

We've added a standalone conda module to Perlmutter:

    module load conda

We're also working on updating the default Python module to provide the latest version of Python. You can use the new module today with:

    module load python/3.11

We encourage you to try it out now; it will become the new default during Wednesday's Perlmutter maintenance. If you notice issues or have questions, please [submit a ticket](https://help.nersc.gov).

([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is available to all users with an active NERSC account. Some helpful NERSC pages for Perlmutter users:

* [Perlmutter queue information](https://docs.nersc.gov/jobs/policy/#qos-limits-and-charge)
* [Timeline of major changes](https://docs.nersc.gov/systems/perlmutter/timeline/)
* [Current known issues](https://docs.nersc.gov/current/#perlmutter)

This section of the newsletter will be updated regularly with the latest Perlmutter status.

### Use Perlmutter GPUs with 50% Charging Discount through September! <a name="pmgpudiscount"/></a>

Now is a great time to run jobs at NERSC and avoid the end-of-the-year crunch, when queues are extremely long and job turnaround is slow. Using your time now benefits the entire NERSC community and spreads demand more evenly throughout the year, so to encourage usage now, we are discounting all jobs run on the Perlmutter GPU nodes by 50% through the end of September. Any job (or portion of a job) that runs before midnight (Pacific time) at the very start of October 1 will be charged only half the usual rate; for example, a 3-hour job on 7 nodes, which would normally incur a charge of 21 GPU node-hours, would be charged 10.5 GPU node-hours.

### Need Help with Perlmutter GPU Nodes? Virtual Office Hours Available Weekly through October 11 <a name="pmgpuoh"/></a>

Are there roadblocks preventing you from getting started on Perlmutter's GPU nodes? Are you interested in using the GPU nodes but concerned about using your allocation inefficiently? Did you get poor performance before and need help making sure your script is set up correctly? Do you want to try out a new GPU-enabled code but aren't sure where to start?

We are offering additional help to users through [Perlmutter GPU Virtual Office Hours](https://www.nersc.gov/perlmutter-gpu-office-hours-sept/), which will be held from 9-11 am (Pacific time) on the following dates: September 25, October 5, and October 11. Drop in anytime during the period with your questions on all things Perlmutter GPU: compiling, job scripts, job profiling, and more!

([back to top](#top))

---

## Updates at NERSC <a name="section4"/></a> ##

### (NEW/UPDATED) NEWT API Deprecated October 1, Queue and Command Functions Removed <a name="newt"/></a>

NERSC's legacy web API, NEWT, will be deprecated beginning October 1. User scripts that call REST endpoints on NERSC resources should now use the Superfacility API instead, which serves as a full replacement for all functionality previously handled by NEWT.
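
As a rough illustration of what moving from NEWT to the Superfacility API can look like, here is a minimal Python sketch that queries system status with the `requests` library. The endpoint path shown is our assumption based on the API's public status route, not a definitive reference; treat the documentation at api.nersc.gov as authoritative, and note that most other routes require OAuth2 client credentials.

```python
import requests

# Assumed public status route of the Superfacility API; check
# https://api.nersc.gov for the authoritative paths and response schema.
STATUS_URL = "https://api.nersc.gov/api/v1.2/status/perlmutter"

resp = requests.get(STATUS_URL, timeout=30)
resp.raise_for_status()

# The exact response fields may differ; print whatever the API returns.
print(resp.json())
```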

For more information and documentation about the Superfacility API, please point your browser to [https://api.nersc.gov](https://api.nersc.gov).

The timing of this deprecation is tied to the replacement of our older data transfer nodes (DTNs), which will cause NEWT to lose some functionality: the replacement nodes will not serve calls to the `/newt/queue` or `/newt/command` endpoints. If this change presents a concern for you, please file a ticket at [help.nersc.gov](https://help.nersc.gov) as soon as possible and mention the NEWT deprecation.

The [NEWT API](https://ieeexplore.ieee.org/document/5676125) was created by Shreyas Cholia, David Skinner, and Joshua Boverhof. Deployed in 2010, it gave users their first taste of web API access to all of NERSC's major compute and data systems. One of the first web APIs for scientific computing, NEWT spurred the growth of a new generation of web-based tools and laid the groundwork for its next-generation replacement, the Superfacility API.

### Register for the NERSC User Group Annual Meeting, September 26-28 <a name="nugreg"/></a>

NERSC and the NUG Executive Committee have organized an outstanding annual meeting, which will be held next week, September 26-28, in Berkeley, California. In-person registration is closed, but it is still possible to register for online attendance.

The event includes remarks from DOE allocation managers on their allocation priorities and perspectives; tutorials on the Superfacility API and quantum computing (which require additional pre-registration) and on writing an effective ERCAP proposal; highlights from the past year and plans for the future (including NERSC-10, the next NERSC supercomputer); an invited talk on parallelizing simulation for neutrino detectors; and contributed presentations from NERSC users.

For more information and to register, please see the [NUG Meeting Website](https://sites.google.com/lbl.gov/nug-2023/home).

### ERCAP Allocations Requests Now Open! <a name="ercap"/></a>

The [Call for Proposals](https://www.nersc.gov/users/accounts/allocations/2024-call-for-proposals-to-use-nersc-resources/) for the 2024 Energy Research Computing Allocations Process (ERCAP) has been announced. Requests are being accepted **through Monday, October 2, 2023.**

The majority of NERSC resources and compute time are allocated through the ERCAP process. Proposals are reviewed and awarded by Office of Science allocation managers and implemented by NERSC. While NERSC accepts proposals at any time during the year, applicants are encouraged to submit proposals by the above deadline in order to receive full consideration for Allocation Year 2024 (AY2024).

All current projects (including Exploratory, Education, and Director's Reserve, but excluding ALCC) must be renewed for 2024 if you wish to continue using NERSC. New projects for AY2024 should be submitted at this time as well.

In 2024, NERSC will allocate compute time based on the capacity of Perlmutter GPU and Perlmutter CPU nodes only. You will need to request time on each resource individually; hours are not transferable between the two architectures.

**Join us at the [NERSC User Group Annual Meeting](https://sites.google.com/lbl.gov/nug-2023) for a session on writing an effective ERCAP proposal, on Tuesday, September 26 at 1:30 pm (Pacific time).**

### Questions about ERCAP? Attend ERCAP Office Hours! <a name="ercapoh"/></a>

Are you working on your ERCAP proposal and have a question? Do you want to get started but don't know where to begin?

If so, consider talking to an expert at ERCAP Office Hours, which will be held on the following dates, from 9 am to noon and 1-4 pm (Pacific time):

- Thursday, September 28
- Monday, October 2 (ERCAP due date)

### (NEW/UPDATED) NESAP for Workflows Call for Proposals Now Open! <a name="nesap4wf"/></a>

NERSC is now accepting applications from NERSC projects for the NERSC Science Acceleration Program (NESAP) for Workflows. Chosen science teams will work with NERSC for one year to prepare for and better utilize advanced workflow capabilities, such as hardware acceleration, reconfigurable storage, advanced scheduling, and integration with edge services. These project goals are in alignment with the DOE's Integrated Research Infrastructure (IRI) initiative.

Workflows include, but are not limited to, the following areas:

- High-performance simulation and modeling workflows
- High-performance AI (HPAI) workflows
- Cross-facility workflows
- Hybrid HPC-HPAI-HPDA (high-performance data analysis) workflows
- Scientific data lifecycle workflows
- External event-triggered and API-driven workflows

Accepted teams will be partnered with resources at NERSC and NERSC's technology vendors. For more information and to apply, please see the [Call for Proposals](https://www.nersc.gov/research-and-development/nesap/workflows). Proposals are due October 2, 2023.

([back to top](#top))

---

## NERSC User Community <a name="section5"/></a> ##

### Got a Tip or Trick to Share with Other Users? Post It in Slack or Add It to NERSC's Documentation! <a name="tipsntricks"/></a>

Do you have a handy tip or trick that other NERSC users might benefit from? Something that makes your use of NERSC resources more efficient, or saves you from having to remember some obscure command? Share it with your fellow NERSC users in one of the following ways:

- Post it in the new `#tips-and-tricks` channel on the [NERSC Users Slack](https://www.nersc.gov/users/NUG/nersc-users-slack/) (login required; you may also join the NERSC Users Slack at this link), which features a daily tip or trick.
- Add it to the NERSC documentation -- NERSC's technical documentation pages are in a [Gitlab repository](https://gitlab.com/NERSC/nersc.gitlab.io/), and we welcome merge requests and issues.
- Speak up during the "Today-I-Learned" portion of the [NUG Monthly Meeting](https://www.nersc.gov/users/NUG/teleconferences/).

### Submit a Science Highlight Today! <a name="scihigh"/></a>

Doing cool science at NERSC? NERSC is looking for cool science and code-development success stories to highlight to NERSC users, DOE Program Managers, and the broader scientific community in Science Highlights. If you're interested in having your work featured as a Science Highlight, please let us know via our [highlight form](https://docs.google.com/forms/d/e/1FAIpQLScP4bRCtcde43nqUx4Z_sz780G9HsXtpecQ_qIPKvGafDVVKQ/viewform).

([back to top](#top))

---

## Conferences & Workshops <a name="section6"/></a> ##

### Registration for Confab23 Now Open! <a name="confab"/></a>

Registration is now open for Confab23, ESnet's second annual gathering of scientists, network engineers from national labs and universities, and the research networking professionals who partner with us to move data across the world. We hope to see you in Gaithersburg, MD, October 16-18!

Want more information? Please see [go.lbl.gov/confab23](https://go.lbl.gov/confab23)!

([back to top](#top))

---

## Calls for Submissions <a name="section7"/></a> ##

### Applications Open for the 2024 BSSw Fellowship Program <a name="bssw"/></a>

The Better Scientific Software (BSSw) Fellowship Program provides recognition and funding for leaders and advocates of high-quality scientific software who foster practices, processes, and tools to improve scientific software productivity and sustainability. Each 2024 BSSw Fellow will receive up to $25,000 for an activity that promotes better scientific software. Activities can include organizing a workshop, preparing a tutorial, or creating content to engage the scientific software community, including broadening participation or promoting diversity, equity, and inclusion.

**Application deadline:** Applications for the 2024 BSSw Fellowship Program are being accepted through **September 29, 2023**. An informational fellowship webinar and Q&A session was held on Tuesday, September 12, 2023, 2:00-3:00 pm EDT. Please subscribe to the BSSw mailing list (<https://bssw.io/pages/receive-our-email-digest>) for teleconference details and other updates on the program. Details of the program can be found at <https://bssw.io/fellowship> and in a new [blog post](https://bssw.io/blog_posts/applications-open-for-the-2024-bssw-fellowship-program).

**Requirements:** We encourage applicants at all career stages, from students through early-career, mid-career, and senior professionals, especially those from underrepresented groups, including people who are Black or African American, Hispanic/Latinx, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander, as well as women, persons with disabilities, and first-generation scholars. The BSSw Fellowship is sponsored by the U.S. Department of Energy and the National Science Foundation.

Please share this announcement with interested colleagues.

([back to top](#top))

---

## Training Events <a name="section8"/></a> ##

### Join OpenMP Offload Training Series, September 29 & October 6 <a name="ompoffload"/></a>

The OpenMP API is a scalable model that gives parallel programmers a simple and flexible interface for developing portable parallel applications in C/C++ and Fortran. The four-part OpenMP Offload training series, offered jointly by NERSC and OLCF, will enable application teams and developers to accelerate their code with GPUs and to exploit the latest OpenMP functionality for programming multi-core platforms like Perlmutter and Frontier.

In [Part 1](https://www.nersc.gov/openmp-offload-2023-training-part1-basics-of-offload/), on September 29, we will give a general overview of the OpenMP programming model and cover the basics of using OpenMP directives to offload work to GPUs. The hands-on sessions will be performed on OLCF Frontier and NERSC Perlmutter.

[Part 2](https://www.nersc.gov/openmp-offload-training-part2-optimization-data-movement-oct2023/), on October 6, will cover optimization strategies and show how efficient data movement and a better understanding of the available hierarchy of parallelism can lead to improved performance, along with best practices for OpenMP Offload. The hands-on sessions will be performed on OLCF Frontier and NERSC Perlmutter.

### Register for Tutorials Held During NUG Annual Meeting; Meeting Non-Attendees Welcome <a name="nugtutorials"/></a>

As part of the NERSC User Group Annual Meeting, NERSC is holding two tutorials of interest to the general community. Anyone is welcome to attend the tutorials, even if they are not attending the three-day meeting, but **all attendees must register for their chosen tutorial(s)**.

The [Superfacility API](https://www.nersc.gov/sfapi-nug-2023/) training, on September 27 from 9 am to noon (Pacific time), will introduce users to a new way of interacting with NERSC systems programmatically via a REST interface. The API can be used to enable complex scientific workflows to monitor and run jobs on Perlmutter, build interactive apps, and more!

The [Quantum Computing Training](https://www.nersc.gov/xanadu-nug-2023/), on September 27 from 1:30-5 pm (Pacific time), will introduce the PennyLane open-source quantum computing software from Xanadu.ai; a small taste of what PennyLane code looks like is sketched below.
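
For those who have not used PennyLane before, this minimal sketch (our illustration, not material from the training) builds a two-qubit Bell-state circuit on PennyLane's built-in simulator; the tutorial itself will go much further.

```python
import pennylane as qml

# Two-qubit state-vector simulator that ships with PennyLane.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def bell_state():
    qml.Hadamard(wires=0)    # put qubit 0 into superposition
    qml.CNOT(wires=[0, 1])   # entangle qubit 1 with qubit 0
    return qml.probs(wires=[0, 1])

print(bell_state())  # expect probabilities of roughly [0.5, 0, 0, 0.5]
```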

### Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts October 18! <a name="spinup"/></a>

Spin is a service platform at NERSC based on Docker container technology. It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. New large-memory nodes have been added to the platform, increasing its potential for memory-constrained applications.

To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin: <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p>.

Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! Applications for sessions that begin Wednesday, October 18, [are now open](https://www.nersc.gov/users/training/spin). SpinUp is hands-on and interactive, so space is limited. Participants will attend an instructional session and a hack-a-thon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques. Local and remote participants are welcome.

If you can't make these upcoming sessions, never fear! More sessions are being planned for later in the year. See a video of Spin in action on the [Spin documentation](https://docs.nersc.gov/services/spin/) page.

### Training on the RAJA C++ Portability Suite, October 10 <a name="raja"/></a>

As part of our Performance Portability training series, NERSC and OLCF are hosting a training session on the RAJA C++ Portability Suite on October 10. RAJA is a C++ library offering software abstractions that enable architecture and programming-model portability for HPC application codes. RAJA offers portable, parallel loop execution by providing building blocks that extend the generally accepted parallel-for idiom.

This one-part session will allow participants to learn from and interact directly with RAJA team members. The session will give a general overview of RAJA and cover the basics of using RAJA abstractions to offload work to GPUs. Throughout the session, a variety of quiz-like puzzles will be used to engage the audience and reinforce concepts.

For more information and to register, please see <https://www.nersc.gov/performance-portability-series-raja-oct2023/>.

### Join NERSC's GPUs for Science Day, October 12, in-person in Berkeley! <a name="gpu4sci"/></a>

GPUs have been instrumental in ground-breaking innovations, from scientific simulations to generative AI. This year, NERSC is proud to host the annual GPUs for Science event **in person**. Our goal is to celebrate recent GPU-enabled scientific achievements and inspire future roadmaps.

The day will start with an introduction to three DOE compute facilities (NERSC, ALCF, and OLCF), followed by a series of talks on GPU-accelerated scientific applications and emerging software programming models. The day will wrap up with a panel of leading industry experts from NVIDIA, AMD, and Intel discussing their vision for upcoming GPU ecosystems. With increasing diversity in GPU hardware and software, users interested in performance and portability across DOE supercomputers are highly encouraged to join.

The event is **open to everyone** interested in learning about the exciting science in action, and registration is free. For additional details, please visit <https://www.nersc.gov/gpus-for-science-day-2023>. Please **register by September 28, 2023**, by completing the [registration form](https://docs.google.com/forms/d/e/1FAIpQLSfAxN2XN6IcodcjMAwgz2eRYH9sShx5aVX3Eo0n05wkbSCtsw/viewform).

### Introduction to GPUs and HIP: HIP Training Series, Next Session September 18 <a name="hipseries"/></a>

HIP® is a parallel computing platform and programming model that extends C++ to allow developers to program GPUs with a familiar programming language and simple APIs. AMD is presenting a multi-part HIP training series intended to help new and existing GPU programmers understand the main concepts of the HIP programming model. Each part includes a 1-hour presentation and example exercises. The exercises reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture, on OLCF Frontier and NERSC Perlmutter.

**Part 1** of the HIP training series was held on Monday, August 14, on the topic of Introduction to HIP and GPU. This session introduced the basics of programming GPUs and the syntax and API of HIP for transferring data to and from GPUs, writing GPU kernels, and managing GPU thread groups. (See the [session webpage](https://www.nersc.gov/intro-gpus-and-hip-part-1-of-hip-training-series-aug-14-2023/) for slides, exercises, and a recording of the training.)

**[Part 2](https://www.nersc.gov/porting-applications-to-hip-part2-hip-training-series-aug2023/)** was held on August 28, on the topic of Porting Applications to GPU. Porting applications from CUDA to HIP can make an application portable across both NVIDIA and AMD GPU hardware. This talk reviewed the AMD porting tools and how to use them; portability for other GPU programming languages was also briefly discussed.

**[Part 3](https://www.nersc.gov/amd-memory-hierarchy-part3-hip-training-series-sep2023/)** was held Monday, September 18, on the topic of the AMD Memory Hierarchy. This talk explores how an understanding of GPU memory systems, and the AMD GPU memory system in particular, can be used to improve application performance; this understanding is crucial to designing code that performs well on AMD GPUs.

**[Part 4](https://www.nersc.gov/gpu-profiling-performance-timelines-rocprof-omnitrace-part4-hip-series-oct2023/)** will be held on Monday, October 2, on the topic of GPU Profiling (Performance Timelines: Rocprof and Omnitrace), two tools for collecting application performance-timeline data on AMD GPUs.

**[Part 5](https://www.nersc.gov/gpu-profiling-performance-profile-omniperf-part5-hip-series-oct2023/)** will be held on Monday, October 16, on the topic of GPU Profiling (Performance Profile: Omniperf), a tool for getting application performance profiles on AMD GPUs.

For more information and to register, please see the event webpages linked above.

### AI For Scientific Computing Bootcamp, October 18-20 <a name="ai4scicomp"/></a>

NERSC, in collaboration with the OpenACC organization and NVIDIA, is hosting a virtual, three-day AI for Scientific Computing Bootcamp from Wednesday, October 18, through Friday, October 20, 2023. This bootcamp will provide a step-by-step overview of the fundamentals of deep neural networks and walk attendees through the hands-on experience of building and improving deep learning models for applications related to scientific computing and physical systems defined by differential equations.

For more information and to register, please see <https://www.nersc.gov/ai-for-scientific-computing-oct-2023>.

### OLCF AI for Science at Scale Training on October 12 <a name="ai4sciolcf"/></a>

OLCF is holding a series of training events on the topic of AI for Science at Scale, which are open to NERSC users. The second training in the series is scheduled for October 12 and will focus on how to train a model on multiple GPUs, as well as on model-parallelism techniques and frameworks such as DeepSpeed, FSDP, and Megatron.

For more information and to register, please see <https://www.nersc.gov/olcf-ai-training-series-ai-for-science-at-scale-part-2>.
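
As a point of reference for the multi-GPU training topics above, the sketch below shows plain PyTorch DistributedDataParallel (data parallelism only). It is our illustration, not material from either event, and it does not cover the model-parallelism frameworks the OLCF session will discuss; it assumes PyTorch with CUDA support and a launcher such as `torchrun` that sets the usual distributed environment variables.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients sync across GPUs
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(10):                          # toy training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g., launch with: torchrun --nproc_per_node=4 this_script.py
```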

### IDEAS-ECP Webinar on "Taking HACC into the Exascale Era: New Code Capabilities, and Challenges," October 11 <a name="ecpwebinar"/></a>

The next webinar in the [Best Practices for HPC Software Developers](http://ideas-productivity.org/events/hpc-best-practices-webinars/) series is entitled "Taking HACC into the Exascale Era: New Code Capabilities, and Challenges" and will take place Wednesday, October 11, at 10:00 am Pacific time. This webinar, presented by Esteban Rangel (Argonne National Laboratory), will discuss lessons learned by the HACC (Hardware/Hybrid Accelerated Cosmology Code) development team in contending with novel programming models while preparing HACC for multiple exascale systems, simultaneously adding new capabilities and focusing on performance.

There is no cost to attend, but registration is required. Please register [at the event webpage](https://www.exascaleproject.org/event/hacc/).

([back to top](#top))

---

## NERSC News <a name="section9"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- [Data Department Head](http://phxc1b.rfer.us/LBLyns7OB): Provide vision and guidance for NERSC's Data Department, which includes the Data & AI Services, Data Science Engagement, and Storage Systems Groups.
- [Data Science Workflows Architect](http://phxc1b.rfer.us/LBLl4072c): Work closely with application teams to help optimize their workflows on NERSC systems.
- [NERSC Programming Environments and Models Group Lead](http://phxc1b.rfer.us/LBLnjA6ze): Lead the team that designs, deploys, and maintains the environments and software runtimes that enable current and future science on NERSC systems.
- [Cyber Security Group Lead](http://phxc1b.rfer.us/LBL7FQ6xy): Lead the team of security engineers responsible for the security architecture and infrastructure of NERSC.
- [HPC Systems Software Engineer](http://m.rfer.us/LBLSQh6ZH): Help architect, deploy, configure, and operate NERSC's large-scale, leading-edge HPC systems.
- [HPC Storage Systems Developer](http://m.rfer.us/LBLdsq5XB): Use your systems programming skills to develop the High Performance Storage System (HPSS) and supporting software.
- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
