
Email Announcement Archive

[Users] NERSC Weekly Email, Week of October 16, 2023

Author: Kevin Gott <kngott_at_lbl.gov>
Date: 2023-10-16 14:56:37

# NERSC Weekly Email, Week of October 16, 2023<a name="top"></a> #

## Contents ##

## [Summary of Upcoming Events and Key Dates](#section1) ##

- [(NEW/UPDATED) Scheduled Outages](#outages)
- [(NEW/UPDATED) Key Dates](#dates)

## [This Week's Events and Deadlines](#section2) ##

- [(NEW/UPDATED) Allocation Reduction Takes Effect Wed, Oct 18th](#allocreduce)
- [Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts October 18!](#spinup)
- [AI For Scientific Computing Bootcamp, October 18-20](#ai4scicomp)
- [(NEW/UPDATED) Upcoming Changes to Globus Authentication](#globus)
- [(NEW/UPDATED) Introduction to GPUs and HIP: HIP Training Series](#hipseries)

## [Perlmutter](#section3) ##

- [Perlmutter Machine Status](#perlmutter)
- [Matplotlib and NERSC Python Kernel Update: October 31st](#matplotlib)

## [Events at NERSC](#section4) ##

- [Save the Date: NERSC Quantum for Science Day on November 2](#quantum4sci)

## [NERSC User Community](#section5) ##

- [Got a Tip or Trick to Share with Other Users? Post It in Slack or Add It to NERSC's Documentation!](#tipsntricks)
- [Submit a Science Highlight Today!](#scihigh)

## [Calls for Submissions](#section6) ##

- [PASC24 Call for Submissions Open Through December 1](#pasc)

## [Webinars](#section7) ##

- [(NEW/UPDATED) IDEAS ECP Webinar, Nov 8 -- A cast of thousands](#ideasecp)

## [NERSC News](#section8) ##

- [(NEW/UPDATED) Come Work for NERSC!](#careers)
- [About this Email](#about)

([back to top](#top))

---

## Summary of Upcoming Events and Key Dates <a name="section1"/></a> ##

### (NEW/UPDATED) Scheduled Outages <a name="outages"/></a>

(See <https://www.nersc.gov/live-status/motd/> for more info):

- **Perlmutter**
  - 10/18/2023 14:00-22:00 PDT, New Hardware Integration
    - Login nodes and scratch will be available. Jobs will be submittable, but no jobs will start. There is a chance access may be intermittently interrupted during the integration.
- **HPSS Archive (User)**
  - 10/18/23 9:00-12:00 PDT, Scheduled Maintenance
    - Some retrievals may be delayed during library maintenance.

### (NEW/UPDATED) Key Dates <a name="dates"/></a>

        October 2023            November 2023            December 2023
    Su Mo Tu We Th Fr Sa     Su Mo Tu We Th Fr Sa     Su Mo Tu We Th Fr Sa
     1  2  3  4  5  6  7               1  2  3  4                     1  2
     8  9 10 11 12 13 14      5  6  7  8  9 10 11      3  4  5  6  7  8  9
    15 16 17 18 19 20 21     12 13 14 15 16 17 18     10 11 12 13 14 15 16
    22 23 24 25 26 27 28     19 20 21 22 23 24 25     17 18 19 20 21 22 23
    29 30 31                 26 27 28 29 30           24 25 26 27 28 29 30
                                                      31

#### This Week

- **October 16, 2023**: [HIP Training Series, Part 5](#hipseries)
- **October 16-18, 2023**: Confab23 Conference, Gaithersburg MD
- **October 18, 2023**: [Allocation Reduction Occurs](#allocreduce)
- **October 18, 2023**: [SpinUp Workshop](#spinup)
- **October 18-20, 2023**: [AI for Scientific Computing Bootcamp](#ai4scicomp)
- **October 19, 2023**: [Globus Identity Provider Update](#globus)

#### Next Week

#### Future

- **October 31, 2023**: [Matplotlib and NERSC Python Kernel Update](#matplotlib)
- **November 2, 2023**: [Quantum for Science Day](#quantum4sci)
- **November 8, 2023**: [IDEAS ECP Webinar: A cast of thousands](#ideasecp)
- **November 23-24, 2023**: Thanksgiving Holiday (No Consulting or Account Support)
- **December 1, 2023**: [PASC Submission Deadline](#pasc)

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### (NEW/UPDATED) Allocation Reduction Takes Effect Wed, Oct 18th <a name="allocreduce"/></a>

Thrice annually, NERSC performs allocation reductions on DOE Mission Science projects. The allocation reduction process takes unused compute time and allows DOE allocation managers to redistribute it to other projects. The third allocation reduction for 2023 will be performed on **Wednesday, October 18**. On this date, projects whose charges fall under the following limits will have their remaining allocations reduced as specified:

- Less than 10% charged: remaining allocation reduced by 90%
- Greater than 10% but less than 20% charged: remaining allocation reduced by 75%
- Greater than 20% but less than 40% charged: remaining allocation reduced by 25%

This reduction scheme applies separately to CPU and GPU allocations. (For example, a project that has used 21% of its CPU allocation and 50% of its GPU allocation will have its CPU allocation reduced by 25% and its GPU allocation left unchanged.)

For this round of reductions, Project Principal Investigators, PI Proxies, or Project Resource Managers can request exemptions to this policy, but must make the request at least one week before the reductions are scheduled to take effect (Wed, October 11th). Please make a request by creating a ticket at [https://help.nersc.gov](https://help.nersc.gov) and include justification for the exemption.

For details, please see the allocation reduction website: [https://www.nersc.gov/users/accounts/allocations/allocation-reductions/](https://www.nersc.gov/users/accounts/allocations/allocation-reductions/)

Please direct any questions to NERSC allocations experts via a ticket at [https://help.nersc.gov](https://help.nersc.gov).

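To make the schedule concrete, here is a small illustrative sketch of the arithmetic (not an official NERSC tool; the function name and the handling of exact boundary values such as precisely 10% charged are our assumptions; see the policy page above for the authoritative rules):

```cpp
#include <iostream>

// Illustrative only -- the fraction of the allocation already charged
// determines how much of the *remaining* allocation is reduced.
double reduction_of_remainder(double fraction_charged) {
    if (fraction_charged < 0.10) return 0.90;  // <10% charged: lose 90% of what remains
    if (fraction_charged < 0.20) return 0.75;  // 10-20% charged: lose 75%
    if (fraction_charged < 0.40) return 0.25;  // 20-40% charged: lose 25%
    return 0.0;                                // 40% or more charged: no reduction
}

int main() {
    // The example from the announcement: 21% of CPU hours and 50% of GPU
    // hours used. CPU and GPU allocations are evaluated independently.
    std::cout << "CPU remainder reduced by "
              << 100 * reduction_of_remainder(0.21) << "%\n";  // 25%
    std::cout << "GPU remainder reduced by "
              << 100 * reduction_of_remainder(0.50) << "%\n";  // 0%
    return 0;
}
```
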
### Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts October 18! <a name="spinup"/></a>

Spin is a service platform at NERSC based on Docker container technology. It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. New large-memory nodes have been added to the platform, expanding its capacity for memory-constrained applications.

To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin: <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p>.

Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! Applications for sessions that begin Wednesday, October 18, [are now open](https://www.nersc.gov/users/training/spin). SpinUp is hands-on and interactive, so space is limited. Participants will attend an instructional session and a hack-a-thon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques. Local and remote participants are welcome.

If you can't make these upcoming sessions, never fear! More sessions are being planned for later in the year. See a video of Spin in action on the [Spin documentation](https://docs.nersc.gov/services/spin/) page.

### AI For Scientific Computing Bootcamp, October 18-20 <a name="ai4scicomp"/></a>

NERSC, in collaboration with the OpenACC organization and NVIDIA, is hosting a virtual, three-day AI for Scientific Computing Bootcamp from Wednesday, October 18, through Friday, October 20, 2023. This bootcamp will provide a step-by-step overview of the fundamentals of deep neural networks and walk attendees through the hands-on experience of building and improving deep learning models for applications related to scientific computing and physical systems defined by differential equations.

For more information and to register, please see <https://www.nersc.gov/ai-for-scientific-computing-oct-2023>. Please apply by Wednesday if you wish to attend the bootcamp!

### (NEW/UPDATED) Upcoming Changes to Globus Authentication <a name="globus"/></a>

On **Thursday, October 19, 2023 at 9am** the Globus services at NERSC will switch identity providers to the same NERSC identity provider used by Jupyter and many other NERSC services. This will not affect existing transfers or existing sessions, but users who need to reactivate their Globus sessions will see a new login page.

### (NEW/UPDATED) Introduction to GPUs and HIP: HIP Training Series <a name="hipseries"/></a>

The final currently planned session of the HIP training series was completed today. For slides, exercises, and a recording of any of the trainings, see each part's unique website, linked below.

HIP® is a parallel computing platform and programming model that extends C++ to allow developers to program GPUs with a familiar programming language and simple APIs. AMD presented this multi-part HIP training series to help new and existing GPU programmers understand the main concepts of the HIP programming model. Each part included a 1-hour presentation and example exercises. The exercises were meant to reinforce the material from the presentation and could be completed during a 1-hour hands-on session following each lecture, on OLCF Frontier and NERSC Perlmutter.

**[Part 1](https://www.nersc.gov/intro-gpus-and-hip-part-1-of-hip-training-series-aug-14-2023/)** was held on Monday, August 14, with the topic of Introduction to HIP and GPU. This session introduced the basics of programming GPUs, and the syntax and API of HIP to transfer data to and from GPUs, write GPU kernels, and manage GPU thread groups (a minimal example appears at the end of this section).

**[Part 2](https://www.nersc.gov/porting-applications-to-hip-part2-hip-training-series-aug2023/)** was held on August 28, with the topic of Porting Applications to GPU. Porting applications from CUDA to HIP can make an application portable across both Nvidia and AMD GPU hardware. This talk reviewed the AMD porting tools and how to use them, and briefly discussed portability for other GPU programming languages.

**[Part 3](https://www.nersc.gov/amd-memory-hierarchy-part3-hip-training-series-sep2023/)** was held on Monday, September 18, with the topic of AMD Memory Hierarchy. This talk explored how an understanding of GPU memory systems, and in particular the AMD GPU memory system, can be used to improve application performance; this understanding is crucial to designing code that performs well on AMD GPUs.

**[Part 4](https://www.nersc.gov/gpu-profiling-performance-timelines-rocprof-omnitrace-part4-hip-series-oct2023/)** was held on Monday, October 2, with the topic of GPU Profiling (Performance Timelines: Rocprof and Omnitrace), two tools for collecting application performance timeline data on AMD GPUs.

**[Part 5](https://www.nersc.gov/gpu-profiling-performance-profile-omniperf-part5-hip-series-oct2023/)** was held on Monday, October 16, with the topic of GPU Profiling (Performance Profile: Omniperf), a tool for getting application performance profiles on AMD GPUs.

For more information, please see the event webpages linked above.

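As a taste of the Part 1 material, here is a minimal HIP example (our own illustrative sketch, not taken from the training; error checking is omitted for brevity) that allocates GPU memory, copies data to and from the device, and launches a simple kernel across thread blocks:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Each GPU thread computes one element, deriving its global index from its
// block and thread coordinates: the thread-group model covered in Part 1.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    // Allocate device memory and copy the inputs from host to GPU.
    float *dx, *dy;
    hipMalloc((void**)&dx, n * sizeof(float));
    hipMalloc((void**)&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch one thread per element, grouped into blocks of 256 threads.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    // Copy the result back to the host and spot-check it.
    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("y[0] = %.1f (expected 4.0)\n", hy[0]);

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

A typical build is `hipcc saxpy.cpp -o saxpy`; see the Part 1 exercises linked above for the recommended build steps on Frontier and Perlmutter.
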
([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is available to all users with an active NERSC account. Some helpful NERSC pages for Perlmutter users:

- [Perlmutter queue information](https://docs.nersc.gov/jobs/policy/#qos-limits-and-charge)
- [Timeline of major changes](https://docs.nersc.gov/systems/perlmutter/timeline/)
- [Current known issues](https://docs.nersc.gov/current/#perlmutter)

This section of the newsletter will be updated regularly with the latest Perlmutter status.

### Matplotlib and NERSC Python Kernel Update: October 31st <a name="matplotlib"/></a>

Users of ipympl, the Matplotlib Jupyter integration package, should be aware that on Tuesday, October 31, NERSC will update its JupyterLab deployment to use ipympl version 0.9.3 (up from 0.8.6). On this date, the "NERSC Python" kernel will also be changed to match the new default Python module (python/3.11).

If you have a custom kernel with ipympl 0.8.6 installed and want it to continue working with JupyterLab after this date, you will want to upgrade it to use ipympl version 0.9.3 and Matplotlib version 3.4.0 or later. If you upgrade or create new kernels and want to test them with the new JupyterLab deployment ahead of time, you may do so on our test hub. Open a ticket at <https://help.nersc.gov> for access to the test hub.

([back to top](#top))

---

## Events at NERSC <a name="section4"/></a> ##

### Save the Date: NERSC Quantum for Science Day on November 2 <a name="quantum4sci"/></a>

NERSC will hold a Quantum for Science Day event on November 2, 2023. This hybrid, single-day conference will highlight work that some of our quantum users have done on Perlmutter, and provide tutorials on quantum software and hardware from our industry collaborators. Additionally, there will be a panel discussion with a range of national lab scientists and engineers, who will discuss their plans for integrating quantum into their computational workloads.

([back to top](#top))

---

## NERSC User Community <a name="section5"/></a> ##

### Got a Tip or Trick to Share with Other Users? Post It in Slack or Add It to NERSC's Documentation! <a name="tipsntricks"/></a>

Do you have a handy tip or trick that other NERSC users might benefit from? Something that makes your use of NERSC resources more efficient, or saves you from needing to remember some obscure command? Share it with your fellow NERSC users in one of the following ways:

- Post it in the new `#tips-and-tricks` channel on the [NERSC Users Slack](https://www.nersc.gov/users/NUG/nersc-users-slack/) (login required; you may also join the NERSC Users Slack at this link), which provides a daily tip or trick.
- Add it to the NERSC documentation: NERSC's technical documentation pages are in a [Gitlab repository](https://gitlab.com/NERSC/nersc.gitlab.io/), and we welcome merge requests and issues.
- Speak up during the "Today-I-Learned" portion of the [NUG Monthly Meeting](https://www.nersc.gov/users/NUG/teleconferences/).

### Submit a Science Highlight Today! <a name="scihigh"/></a>

Doing cool science at NERSC? NERSC is looking for cool science and code-development success stories to highlight to NERSC users, DOE Program Managers, and the broader scientific community in Science Highlights.

If you're interested in having your work featured as a Science Highlight, please let us know via our [highlight form](https://docs.google.com/forms/d/e/1FAIpQLScP4bRCtcde43nqUx4Z_sz780G9HsXtpecQ_qIPKvGafDVVKQ/viewform).

([back to top](#top))

---

## Calls for Submissions <a name="section6"/></a> ##

### PASC24 Call for Submissions Open Through December 1 <a name="pasc"/></a>

The Platform for Advanced Scientific Computing (PASC) invites research paper submissions for PASC24, co-sponsored by the Association for Computing Machinery (ACM) and SIGHPC, which will be held at ETH Zurich, HCI Campus Honggerberg, Switzerland, from June 3-5, 2024.

The PASC Conference series is an international platform for the exchange of competences in scientific computing and computational science, with a strong focus on methods, tools, algorithms, application challenges, and novel techniques and usage of high performance computing.

The 2024 technical program is centered around seven scientific domains: Chemistry and Materials; Climate, Weather, and Earth Sciences; Computational Methods and Applied Mathematics; Applied Social Sciences and Humanities; Engineering; Life Sciences; and Physics. PASC24 solicits high-quality contributions of original research related to scientific computing in all of these domains. Papers that emphasize the PASC24 theme, "Synthesizing Applications Through Learning and Computing," are particularly welcome.

The final deadline for submissions is December 1, 2023. For more information on PASC24, including submissions, please see <https://pasc24.pasc-conference.org>.

([back to top](#top))

---

## Webinars <a name="section7"/></a> ##

### (NEW/UPDATED) IDEAS ECP Webinar, Nov 8 -- A cast of thousands <a name="ideasecp"/></a>

The next webinar in the [Best Practices for HPC Software Developers series](https://ideas-productivity.org/events/hpc-best-practices-webinars/) will take place on November 8th at 10:00 am (Pacific time): "A cast of thousands: How the IDEAS Productivity project has advanced software productivity and sustainability," by David Bernholdt (Oak Ridge National Laboratory).

The webinar will describe strategies that the IDEAS Productivity Project has used to help software teams across the Exascale Computing Project "up their game" with respect to their software practices. The webinar will wrap up with lessons learned by the IDEAS team and briefly consider possible futures for the DOE scientific software community.

Go to the [event page](https://www.exascaleproject.org/event/ideas-ecp/) for details and to register for the online event.

([back to top](#top))

---

## NERSC News <a name="section8"/></a> ##

### (NEW/UPDATED) Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- [Data Science Workflows Architect](http://phxc1b.rfer.us/LBL3eO7sI): Support scientists at experimental facilities using supercomputing resources at NERSC.
- [Data Science Workflows Architect](http://phxc1b.rfer.us/LBLl4072c): Work closely with application teams to help optimize their workflows on NERSC systems.
- [HPC Systems Software Engineer](http://m.rfer.us/LBLSQh6ZH): Help architect, deploy, configure, and operate NERSC's large-scale, leading-edge HPC systems.

- [NESAP for Simulations Postdoctoral Fellow](http://m.rfer.us/LBLRUa4lS): Collaborate with computational and domain scientists to enable extreme-scale scientific simulations on NERSC's Perlmutter supercomputer.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. To remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
