
Email Announcement Archive

[Users] NERSC Weekly Email, Week of February 26, 2024

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2024-02-26 15:36:40

# NERSC Weekly Email, Week of February 26, 2024<a name="top"></a> #

## Contents ##

## [Summary of Upcoming Events and Key Dates](#section1) ##

- [Scheduled Outages](#outages)
- [Key Dates](#dates)

## [This Week's Events and Deadlines](#section2) ##

- [Apply for Argonne Training Program on Extreme-Scale Computing by Wednesday!](#atpesc)
- [Join Us for the NUG Community Call this Thursday!](#nug)
- [Brief Disruptions from Science Gateways Maintenance on Thursday](#scigatemaint)
- [Apply for Quantum Information Science Research Allocations by Friday!](#qis)

## [Perlmutter](#section3) ##

- [Perlmutter Machine Status](#perlmutter)
- [Perlmutter Scratch Purge Now Enforced](#scratchpurge)
- [Podman-HPC Available Again on Perlmutter](#podmanreturns)

## [NERSC@50](#section4) ##

- [Save the Date: Celebrate 50 Years of NERSC with Us October 22-24](#nug50)
- [Read about NERSC History](#nerscfirsts)

## [NERSC Updates](#section5) ##

- [Attention Students: NERSC Summer Internships Available!](#nerscinterns)
- [Announcing 2024 NESAP Pathfinding Projects!](#nesappath)
- [Attention VASP Users: VASP 6 for GPUs Available at NERSC!](#vasp)

## [NERSC User Community](#section6) ##

- [Need Help? Check out NERSC Documentation, Send in a Ticket or Consult Your Peers!](#gettinghelp)
- [Submit a Science Highlight Today!](#scihigh)

## [Calls for Proposals & Nominations](#section7) ##

- [Nominations for George Michael Memorial HPC Fellowship Now Open!](#gmichael)

## [Upcoming Training Events](#section8) ##

- [Join NERSC for a Training on the Forge Toolset for Debugging & Profiling, March 13](#forgetrain)
- [Performance Portability Series: AMReX Tutorial, March 14](#amrex)
- [Learn Parallel Programming in Fortran, March 26-27](#ppfortran)
- [(NEW/UPDATED) Debugging Challenging Memory and GPU Problems with TotalView, May 13, 2024](#tvtraining)

## [NERSC News](#section9) ##

- [Come Work for NERSC!](#careers)
- [About this Email](#about)

([back to top](#top))

---

## Summary of Upcoming Events and Key Dates <a name="section1"/></a> ##

### Scheduled Outages <a name="outages"/></a>

(See <https://www.nersc.gov/live-status/motd/> for more info):

- **Perlmutter**
    - 03/20/24 06:00-16:00 PST, Scheduled Maintenance - Perlmutter will be unavailable during the listed times due to scheduled maintenance.
    - 04/17/24 06:00-16:00 PST, Scheduled Maintenance - Perlmutter will be unavailable during the listed times due to scheduled maintenance.
    - 05/15/24 06:00-16:00 PST, Scheduled Maintenance - Perlmutter will be unavailable during the listed times due to scheduled maintenance.
- **HPSS Archive (User)**
    - 03/13/24 09:00-13:00 PST, Scheduled Maintenance - System down for quarterly maintenance.
- **HPSS Regent (Backup)**
    - 03/06/24 09:00-13:00 PST, Scheduled Maintenance - System down for quarterly maintenance.
- **Science Gateway Services**
    - 02/29/24 07:00-09:00 PST, Scheduled Maintenance - During this maintenance window there may be brief periods of downtime to portal.nersc.gov and other Science Gateway Services.
### Key Dates <a name="dates"/></a>

```
      February 2024            March 2024             April 2024
Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa
             1  2  3                   1  2       1  2  3  4  5  6
 4  5  6  7  8  9 10    3  4  5  6  7  8  9    7  8  9 10 11 12 13
11 12 13 14 15 16 17   10 11 12 13 14 15 16   14 15 16 17 18 19 20
18 19 20 21 22 23 24   17 18 19 20 21 22 23   21 22 23 24 25 26 27
25 26 27 28 29         24 25 26 27 28 29 30   28 29 30
                       31
```

#### This Week

- **February 28, 2024**: [Applications Due for ATPESC](#atpesc)
- **February 29, 2024**:
    - [Science Gateways Maintenance](#scigatemaint)
    - [NERSC User Group Monthly Community Call](#nug)
- **March 1, 2024**: [QIS@Perlmutter Proposals Due](#qis)

#### Next Week

#### Future

- **March 13, 2024**: [Linaro Forge Debugger/Profiler Training](#forgetrain)
- **March 14, 2024**: [AMReX Tutorial](#amrex)
- **March 26-27, 2024**: [Introduction to Parallel Programming in Fortran](#ppfortran)
- **May 1, 2024**: [George Michael Memorial HPC Fellowship Nominations Due](#gmichael)
- **May 13, 2024**: [TotalView Debugger Training](#tvtraining)
- **October 22-24, 2024**: [NERSC 50th Anniversary Celebration & NUG Meeting](#nug50)

([back to top](#top))

---

## This Week's Events and Deadlines <a name="section2"/></a> ##

### Apply for Argonne Training Program on Extreme-Scale Computing by Wednesday! <a name="atpesc"/></a>

Are you a doctoral student, postdoc, or computational scientist looking for advanced training on the key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future? If so, consider applying for the Argonne Training Program on Extreme-Scale Computing (ATPESC).

The core of the two-week program focuses on programming methodologies that are effective across a variety of supercomputers and applicable to exascale systems. Additional topics to be covered include computer architectures, mathematical models and numerical algorithms, approaches to building community codes for HPC systems, and methodologies and tools relevant for Big Data applications.

This year's program will be held July 28-August 9 in the Chicago area. There is no cost to attend; domestic airfare, meals, and lodging are provided.

For more information and to apply, please see <https://extremecomputingtraining.anl.gov/>. **The application deadline is this Wednesday, February 28, 2024**.

### Join Us for the NUG Community Call this Thursday! <a name="nug"/></a>

The monthly NUG Community Call is a regular opportunity for NERSC users to show off what they've done, for NERSC to get feedback from our users, and for users to exchange ideas. Our next call is **this Thursday, February 29, 2024, at 11 am** (Pacific Time/UTC-8).

Join us for an interesting discussion, help NERSC do some Tree Testing of our documentation, and stay for a presentation by one of NERSC's early-career award winners! For more information, including the agenda and connection info, please see the [call webpage](https://www.nersc.gov/users/NUG/teleconferences/nug-community-call-february-29-2024/).

### Brief Disruptions from Science Gateways Maintenance on Thursday <a name="scigatemaint"/></a>

This Thursday, February 29, between 7 and 9 am (Pacific time), the science gateways will undergo maintenance, which will affect portal.nersc.gov. During the maintenance window there may be brief periods of downtime.
### Apply for Quantum Information Science Research Allocations by Friday! <a name="qis"/></a>

NERSC is soliciting new project proposals to conduct research in the area of quantum information science using the Perlmutter supercomputer. Projects in all areas of quantum information science are encouraged to apply. Successful proposals can expect to be awarded from 1,000 to 20,000 GPU node hours. This is an open call and is not limited to current NERSC users.

For full consideration, please apply by March 1, following the instructions in the [call for proposals](https://www.nersc.gov/research-and-development/quantum-information-science/qisperlmutter/quantum-information-science-perlmutter-2024/).

([back to top](#top))

---

## Perlmutter <a name="section3"/></a> ##

### Perlmutter Machine Status <a name="perlmutter"/></a>

Perlmutter is available to all users with an active NERSC account. Some helpful NERSC pages for Perlmutter users:

* [Perlmutter queue information](https://docs.nersc.gov/jobs/policy/#qos-limits-and-charge)
* [Timeline of major changes](https://docs.nersc.gov/systems/perlmutter/timeline/)
* [Current known issues](https://docs.nersc.gov/current/#perlmutter)

This section of the newsletter will be updated regularly with the latest Perlmutter status.

### Perlmutter Scratch Purge Now Enforced <a name="scratchpurge"/></a>

NERSC began enforcing its Perlmutter [scratch purge policy](https://docs.nersc.gov/filesystems/perlmutter-scratch/#file-system-purging) on February 1. As part of the policy, **NERSC reserves the right to delete any files on the scratch file system that have not been accessed for 8 weeks or longer.**

File purging is a tool NERSC uses to clean up unwanted files and improve the performance of the file system. **Please remember that this process benefits everyone by keeping a performant scratch file system available for users, and do not attempt to circumvent the purge process.** Performance decreases as a file system becomes very full. Our plan is to use the purge process to prevent the file system from filling up above 70%.

NERSC invites you to delete unnecessary files and move important data to longer-term storage (such as CFS, HPSS, or externally). NERSC recommends using [Globus](https://docs.nersc.gov/services/globus/) for migrating data, and following best practices when using [HPSS](https://docs.nersc.gov/filesystems/archive/). Assistance in migrating files from scratch to another location is available via a [ticket](https://help.nersc.gov), and [quota increases](https://docs.nersc.gov/filesystems/quotas/#increases) on HPSS and the Community File System (CFS) are also available, should you need more space on non-purged storage resources.
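Because the purge is based on access time, it can help to check which of your scratch files are approaching the eight-week threshold before the purge reaches them. Below is a minimal sketch (Python standard library only, not an official NERSC tool) that walks an assumed project directory under `$SCRATCH` and reports files whose last access time is older than eight weeks; the directory name and reporting format are illustrative assumptions, and the script only reports, it does not delete anything.

```python
import os
import time
from pathlib import Path

# Hypothetical project directory under your Perlmutter $SCRATCH; adjust as needed.
ROOT = Path(os.environ.get("SCRATCH", ".")) / "my_project"
WEEKS = 8  # matches the purge policy's access-time threshold

cutoff = time.time() - WEEKS * 7 * 24 * 3600

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = Path(dirpath) / name
        try:
            atime = path.stat().st_atime  # last access time
        except OSError:
            continue  # file disappeared or is unreadable; skip it
        if atime < cutoff:
            age_days = (time.time() - atime) / 86400
            print(f"{path}  (last accessed {age_days:.0f} days ago)")
```

Anything flagged this way that you still need can then be moved to CFS or HPSS, for example with Globus as recommended above.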
### Podman-HPC Available Again on Perlmutter <a name="podmanreturns"/></a>

In December, NERSC identified an issue that adversely impacted the performance of Podman (as well as some other container environments) on Perlmutter, so Podman was disabled. The rolling reboot of Perlmutter last month included a fix for the issue, and Podman has been re-enabled. We appreciate your patience as we resolved the issue.
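If you would like to confirm that Podman-HPC works for you again, a quick smoke test is to pull a small public image and run a trivial command in it. The sketch below is a hypothetical example that simply shells out to the `podman-hpc` command line from Python on a login node; it assumes the standard `pull` and `run` subcommands and uses the public `alpine` image purely for illustration. NERSC's container documentation remains the authoritative reference.

```python
import subprocess

def podman_hpc(*args: str) -> str:
    """Run a podman-hpc subcommand and return its stdout (raises on failure)."""
    result = subprocess.run(
        ["podman-hpc", *args], check=True, capture_output=True, text=True
    )
    return result.stdout

# Pull a small public image and run a trivial command inside it.
# The image and command here are illustrative; any image you already use works.
print(podman_hpc("pull", "docker.io/library/alpine:latest"))
print(podman_hpc("run", "--rm", "docker.io/library/alpine:latest",
                 "cat", "/etc/os-release"))
```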
([back to top](#top))

---

## NERSC@50 <a name="section4"/></a> ##

### Save the Date: Celebrate 50 Years of NERSC with Us October 22-24 <a name="nug50"/></a>

In honor of NERSC's fiftieth anniversary, we are planning an exciting program of anniversary-related events culminating with the annual NERSC User Group meeting, to be held October 22-24, 2024. Please join us to enjoy blasts from the past as well as fun in the future during this one-of-a-kind event!

### Read about NERSC History <a name="nerscfirsts"/></a>

NERSC history experts are publishing articles and timelines in celebration of NERSC's golden anniversary. The most recent addition to the list is an article in the "In Their Own Words" series, on NERSC veteran [Jackie Scoggins](https://www.nersc.gov/news-publications/nersc-news/nersc50/page-9904/). For more on the NERSC 50th anniversary, please see the [NERSC 50th Anniversary](https://www.nersc.gov/news-publications/nersc-news/nersc50/) page.

([back to top](#top))

---

## NERSC Updates <a name="section5"/></a> ##

### Attention Students: NERSC Summer Internships Available! <a name="nerscinterns"/></a>

Are you an undergraduate or graduate student who will be enrolled as a student in the fall? Are you interested in working with NERSC staff on interesting technical projects? NERSC is looking for motivated students to join us for the summer in a paid internship role. Qualifications vary depending on the project, and pay is based on years of education completed.

We have created a [list of summer internship projects on our website](https://www.nersc.gov/about/work-at-nersc/internships/nersc-summer-internship-projects/). Projects are still being added to the list, so please check back for further additions.

### Announcing 2024 NESAP Pathfinding Projects! <a name="nesappath"/></a>

NERSC is pleased to announce the start of the [2024 NESAP Pathfinding Projects](https://www.nersc.gov/research-and-development/nesap/). Fifty-four teams applied to the call for proposals in October 2023, and the proposals were reviewed by a team from NERSC and across the DOE leadership computing facilities. From these, 23 teams were selected as "pathfinding teams" for 2024. These teams will have access to a total of 25 NERSC staff, 10 postdocs, and 200K GPU node hours on the Perlmutter supercomputer.

NESAP pathfinding projects are renewed on an annual basis. At the end of 2024, we will solicit proposals for NESAP N10 Strategic Partners; these five-year projects are intended to evaluate the N10 system using early science workflows. If you're interested in applying to future NESAP programs, please watch this space in Q4 2024, or contact Johannes Blaschke (<jpblaschke@lbl.gov>) or Neil Mehta (<NeilMehta@lbl.gov>) to be added to the mailing list.

### Attention VASP Users: VASP 6 for GPUs Available at NERSC! <a name="vasp"/></a>

Are you a user of the VASP application for performing ab initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set? Did you know that you can use NERSC-installed binaries that are optimized for Perlmutter?

Anyone with a VASP license can gain access to NERSC's VASP binaries by filling out the [VASP License Confirmation Request form](https://nersc.servicenowservices.com/sp?id=sc_cat_item&sys_id=d2935b561b032c106c44ebdbac4bcbb6&sysparm_category=e15706fc0a0a0aa7007fc21e1ab70c2f) (login required). Holders of a VASP 6 license can access a GPU-accelerated build of version 6, in addition to the CPU-only version 5.4 and a CPU-only build of version 6. For more information about VASP, please see the [VASP documentation](https://docs.nersc.gov/applications/vasp/) on NERSC's documentation website.

([back to top](#top))

---

## NERSC User Community <a name="section6"/></a> ##

### Need Help? Check out NERSC Documentation, Send in a Ticket or Consult Your Peers! <a name="gettinghelp"/></a>

Are you confused about setting up your MFA token? Is there something not quite right with your job script that causes the job submission filter to reject it? Are you struggling to understand the performance of your code on the GPU nodes?

There are many ways that you can get help with issues at NERSC:

- First, we recommend the NERSC [documentation](https://docs.nersc.gov) (<https://docs.nersc.gov/>). Most of the time, the answers for simpler issues, such as setting up your MFA token using Google Authenticator, can be found there. (The answers to more complex issues can usually be found in the documentation too!)
- For more complicated issues, or issues that leave you unable to work, submitting a [ticket](https://help.nersc.gov) is a good way to get help fast. NERSC's consulting team will respond within four business hours (8 am - 5 pm, Monday through Friday, except holidays). To submit a ticket, log in to <https://help.nersc.gov> (or, if the issue prevents you from logging in, send an email to <accounts@nersc.gov>).
- For queries that might require some back-and-forth, NERSC provides an [appointment service](https://docs.nersc.gov/getting-started/#appointments-with-nersc-user-support-staff). Sign up for an appointment on a variety of topics, including "NERSC 101", Containers at NERSC, NERSC File Systems, GPU Basics, and GPUs in Python.
- The **NERSC Users Group Slack**, while not an official channel for help, is a place where NERSC users often answer each other's questions, such as whether anyone else is seeing something strange, or how to get better job throughput. You can join the NUG Slack by following [this link](https://www.nersc.gov/users/NUG/nersc-users-slack/) (login required).
- Sometimes, a **colleague** can figure out the issue faster than NERSC, because they already understand your workflow. It's possible that they know what flag you need to add to your Makefile for better performance, or how to set up your job submission script just so.

### Submit a Science Highlight Today! <a name="scihigh"/></a>

Doing cool science at NERSC? NERSC is looking for scientific and code-development success stories to highlight to NERSC users, DOE Program Managers, and the broader scientific community in Science Highlights. If you're interested in having your work featured as a Science Highlight, please let us know via our [highlight form](https://docs.google.com/forms/d/e/1FAIpQLScP4bRCtcde43nqUx4Z_sz780G9HsXtpecQ_qIPKvGafDVVKQ/viewform).

([back to top](#top))

---

## Calls for Proposals & Nominations <a name="section7"/></a> ##

### Nominations for George Michael Memorial HPC Fellowship Now Open! <a name="gmichael"/></a>

The George Michael Memorial HPC Fellowship award committee is seeking nominations for this fellowship, which honors exceptional PhD students throughout the world whose research focus is on high-performance computing applications, networking, storage, or large-scale data analysis using the most powerful computers currently available. The Fellowship includes a $5000 honorarium, recognition on the ACM, IEEE CS, and ACM SIGHPC websites, and paid travel expenses to attend SC24, where recipients will be honored at the SC Conference Awards Ceremony.

Candidates must be enrolled in a full-time PhD program at an accredited college or university and must meet the minimum scholastic requirements at their institution. They are expected to have completed at least one year of study and to have at least one year remaining between the application deadline and their expected graduation.
Applications from women, minorities, international students, and all who contribute to diversity are encouraged. Nominations are in the form of self-nominations, submitted online. For more information and to nominate, please see the [fellowship webpage](https://awards.acm.org/hpc-fellows/nominations). Nominations are due May 1, 2024.

([back to top](#top))

---

## Upcoming Training Events <a name="section8"/></a> ##

### Join NERSC for a Training on the Forge Toolset for Debugging & Profiling, March 13 <a name="forgetrain"/></a>

Linaro Forge combines DDT for parallel high-performance application debugging, MAP for performance profiling and optimization advice, and Performance Reports for summarizing and characterizing both scalar and MPI application performance. The goal of this training is to teach useful features of these tools and demonstrate how to use them for debugging and profiling tasks on Perlmutter.

For more information, please see the [event page](https://www.nersc.gov/users/training/events/2024/forge-training-for-debugging-and-profiling-march-13-2024/).

### Performance Portability Series: AMReX Tutorial, March 14 <a name="amrex"/></a>

The AMReX framework supports the development of Block-Structured Adaptive Mesh Refinement (AMR) algorithms for solving systems of partial differential equations (PDEs) that require structured mesh and/or particle discretizations. The March 14 hands-on AMReX training session, part of the OLCF/NERSC/ALCF [Performance Portability training series](https://www.nersc.gov/users/training/events/2024/performance-portability-series-2023-2024/), will give an overview of AMReX and its applications, focusing on features for solving multiphysics problems. In particular, the training will describe how AMReX can be used to develop simulation codes that work on both CPU and GPU systems.

For more information and to register, please see <https://www.nersc.gov/performance-portability-series-amrex-mar2024/>.

### Learn Parallel Programming in Fortran, March 26-27 <a name="ppfortran"/></a>

If you are a Fortran user, we encourage you to attend "Introduction to Parallel Programming in Fortran," a two-day, virtual, hands-on training held March 26-27 from 9:00 am to 1:30 pm (Pacific time) each day and hosted by the Fortran Users of NERSC (FUN) Special Interest Group.

Fortran is used in many large scientific applications running on high-performance computers. This training will present several methods for taking advantage of multi-node and multi-core hardware to perform calculations in parallel using the Fortran programming language and some common libraries and extensions. We'll look at examples and work through exercises on Perlmutter.

For more information and to register, please see <https://www.nersc.gov/fun-training-march-2024-introduction-to-parallel-programming-in-fortran/>.

### (NEW/UPDATED) Debugging Challenging Memory and GPU Problems with TotalView, May 13, 2024 <a name="tvtraining"/></a>

NERSC is hosting a training event on effectively using TotalView to debug challenging memory and NVIDIA GPU problems. TotalView from Perforce Software is a parallel debugger for complex C, C++, and Fortran applications. Using live demonstrations running on Perlmutter, you'll learn how to leverage TotalView's powerful memory debugging technology to find memory errors in parallel codes, and to debug CUDA, OpenMP, and OpenACC code running on NVIDIA GPUs.
For more information and to register, please see the [event page](https://www.nersc.gov/users/training/events/2024/debugging-challenging-memory-and-gpu-problems-with-totalview-may-13-2024/).

([back to top](#top))

---

## NERSC News <a name="section9"/></a> ##

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- [Network Group Lead](http://phxc1b.rfer.us/LBLlcr8u1): Lead a team of network engineers responsible for the NERSC network architecture and infrastructure.
- [NERSC HPC Department Head](http://phxc1b.rfer.us/LBLqbK819): Lead and provide vision and strategic direction for the NERSC High-Performance Computing department.

(**Note:** You can browse all our job openings on the [NERSC Careers](https://lbl.referrals.selectminds.com/page/nersc-careers-85) page, and all Berkeley Lab jobs at <https://jobs.lbl.gov>.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
