Email Announcement Archive

[Users] A few upcoming NERSC users training opportunities in June 2024

Author: Helen He <nersc-training_at_lbl.gov>
Date: 2024-05-23 10:09:25

Dear NERSC users,

Below are some upcoming training events happening in June (or with applications due) that you are encouraged to attend: GPU Hackathon, Spin for Science Gateways, OpenMP tasking, New User Training, Julia, SYCL, and an HPC crash course.

1) August NERSC/OLCF/NVIDIA Hackathon, Aug 13, 20-22 (Application due May 24, TOMORROW)

NERSC, in conjunction with NVIDIA and OLCF, will be hosting an Open Hackathon from August 20th-22nd, with an opening day on August 13th, as part of the annual Open Hackathon Series. The Hackathon will be hosted as a hybrid event. Hackathons pair teams of developers with mentors to either prepare their application(s) to run on GPUs or optimize application(s) that already run on GPUs. Teams should consist of at least three developers who are intimately familiar with (some part of) their application; they will work alongside two mentors with GPU programming expertise. If you want or need to get your code running or optimized on a GPU-accelerated system, these hackathons offer a unique opportunity to set aside four days, surround yourself with experts in the field, and push toward your development goals. During the event, teams will have access to compute resources provided by NERSC and OLCF.

*Please note that the deadline to submit a proposal is 11:59 PM Pacific, May 24, 2024, TOMORROW.* So apply now <https://www.openhackathons.org/s/siteevent/a0C5e000008dWi2EAE/se000287>! For more information, the event agenda, or to submit a short proposal form, please visit the Open Hackathon event page <https://www.openhackathons.org/s/siteevent/a0C5e000008dWi2EAE/se000287> or NERSC's event page <https://sites.google.com/lbl.gov/august-2024-gpu-hackathon/home>.

2) Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Starts June 5

Spin is a service platform at NERSC based on Docker container technology.
It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. New large-memory nodes have been added to the platform, increasing its potential for memory-constrained applications.

To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p> or see a video of Spin in action on the Spin documentation page <https://docs.nersc.gov/services/spin/>. Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! *Applications for sessions that begin Wednesday, June 5 are now open <https://www.nersc.gov/spinup-workshop-jun2024/>.* SpinUp is hands-on and interactive, so space is limited. Participants will attend an instructional session and a hackathon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques.

3) OpenMP Training Series, May - October; Session 2 on June 10

The OpenMP API is the de facto standard for writing parallel applications for shared-memory computers and is supported by multiple scientific compilers on CPU and GPU architectures. MPI+OpenMP for CPUs and OpenMP device offload for GPUs are the recommended portable programming models on Perlmutter.

Calling all programmers! Whether you're new to parallel programming, new to OpenMP or OpenMP device offload, or want a refresher on the basics or to explore advanced features, our monthly OpenMP training series is for you. *The series runs from May to October 2024; you're welcome to attend all sessions or just the ones that interest you most, and to review slides and videos of past sessions on your own.* Join the training series led by OpenMP experts Michael Klemm (AMD OpenMP ARB) and Christian Terboven (RWTH Aachen University)!
A wide range of topics will be covered, including OpenMP basics, parallel worksharing, tasking, memory management and affinity, vectorization, correctness, performance, GPU offloading, and MPI/OpenMP hybrid programming. Each training session will consist of presentations followed by homework assignments using Perlmutter; homework solutions will be reviewed at the start of the next session. Session 1, an introduction to OpenMP, took place on May 6, and the slides <https://www.nersc.gov/users/training/events/2024/openmp-training-series-may-oct-2024/#toc-anchor-5> and video <https://tinyurl.com/2jpmzdua> are available. Session 2, on tasking, will be held on June 10. For detailed session topics and to register, please visit https://www.nersc.gov/openmp-training-series-may-oct-2024/.

4) NERSC New User Training, June 12-13

NERSC is hosting a two half-day virtual training event for new and existing users on efficiently using NERSC resources and Perlmutter. The goal is to provide users new to NERSC with the basics of our computational systems; accounts and allocations; the programming environment, running jobs, tools, and best practices; and the data ecosystem. It also allows existing users to learn more about best practices for using Perlmutter. The event will be held on Wednesday and Thursday, June 12-13, 2024, online only, using Zoom. For more information, please visit the event page <https://www.nersc.gov/new-user-trainingjune2024> for the draft agenda and to register.

5) Julia for HPC and Intro to Julia for Science Training, June 2024

Julia aims to fill a gap in the high-performance plus high-productivity space: it is a dynamic language built on top of LLVM, with lightweight interoperability with C and Fortran code and a unified ecosystem for data science and reproducibility. The Julia training, presented by OLCF, NERSC, and ORNL Neutron Sciences, is part of the Performance Portability training series.
- Session 1: Tuesday, June 18, 10:00 am - 1:00 pm (Pacific time), Julia for HPC
- Session 2: Friday, June 21, 10:00 am - 1:00 pm (Pacific time), Introduction to Julia for Science

Odo, a Frontier-like system with AMD GPUs at OLCF, and Perlmutter, with NVIDIA GPUs, will be used for the hands-on portions of this training. NERSC users who are also interested in working on AMD GPUs (for example, for performance portability across GPU vendors) can apply for a training project with access to Odo. The application deadline for Odo access is June 7; please see the application details under the "Compute Resources for the Event" section for Session 1 on the OLCF Julia training event page <https://www.olcf.ornl.gov/calendar/julia-for-hpc-and-intro-to-julia-for-science/>. For more information and to register, please visit the training event page at https://www.nersc.gov/julia-training-jun2024/.

6) Portable SYCL Code Using oneMKL on AMD, Intel, and Nvidia GPUs, June 20

The Portable SYCL Code Using oneMKL on AMD, Intel, and Nvidia GPUs training, presented by Codeplay and Intel on June 20, is part of the Performance Portability training series <https://www.nersc.gov/users/training/events/2024/performance-portability-series-2023-2024/>. With supercomputers such as Aurora, Polaris, Perlmutter, and Frontier deployed across the DOE national laboratories, there is now a range of GPU-based architectures from different vendors for researchers to use. The good news is that it is possible to write code using a single programming model and language that is deployable across all these supercomputers and other systems. Register now for a webinar that will show you how to achieve portable code using the oneMKL library. oneMKL is based on the oneAPI specification and can be used to target multi-vendor and multi-architecture accelerators from a single code base. It is also now governed by the Unified Acceleration Foundation (UXL), an open governance body that is part of the Linux Foundation.
We will also discuss the GROMACS and NWChem projects, which are benefitting from using the oneMKL library to target Intel, Nvidia, and AMD GPUs, with discrete Fourier transforms as an example. The oneMKL interface library makes this possible with minimal overhead, using SYCL backend interoperability. For more information and to register, please visit https://www.nersc.gov/portable-sycl-training-jun2024/.

7) Crash Course in Supercomputing, June 28

This hybrid training, held as part of the 2024 Berkeley Lab Computational Sciences Summer Student Program <https://cs.lbl.gov/careers/summer-student-and-faculty-program/2024-csa-summer-program/summer-program/>, is also open to NERSC, OLCF, and ALCF users. It is geared toward novice parallel programmers with prior programming experience in a compiled language. In this course, students will learn to write parallel programs that can be run on a supercomputer. We begin by discussing the concepts of parallelization before introducing MPI and OpenMP, the two leading parallel programming libraries. Finally, students will put together all the concepts from the class by writing, compiling, and running a parallel code on one of the NERSC supercomputers. Training accounts will be provided for students who have not yet set up a NERSC account. For more information and to register, please visit https://www.nersc.gov/hpc-crash-course-jun2024/.

Best Regards,
Helen He
NERSC Training

_______________________________________________
Users mailing list
Users@nersc.gov