Argonne’s HPC training program wraps up fourth year; lecture videos now available online

While only 65 participants are able to attend the intensive Argonne Training Program on Extreme-Scale Computing (ATPESC) each year, the entire high-performance computing (HPC) community can now tap into the program’s broad curriculum via YouTube. In an effort to extend the reach of ATPESC beyond the classroom, program organizers have captured and posted videos of more than 80 lectures from some of the world’s foremost experts in extreme-scale computing.

With its fourth year now in the books, the rigorous training program brought 65 graduate students, postdocs, and researchers together at the Pheasant Run Resort in St. Charles, Illinois, in August for two weeks of instruction on HPC codes, software, and architecture. Participants had full days of technical lectures, dinner talks, and hands-on exercises led by top researchers in the field, with optional collaborative sessions sometimes lasting until 10 p.m.

David Eriksson, a PhD candidate in applied mathematics at Cornell University, says his experience at the program helped prepare him for the next generation of supercomputing. “I was impressed by the number of things we were able to cover in two weeks,” he says. “It’s a valuable experience to hear leaders in the HPC field speak about what they know best and what’s coming up next. Even though I work with high-performance computing, there are still so many things I haven’t seen or used.”

Xin “Cindy” Wang, a postdoc at the U.S. Army Research Laboratory’s Computational and Information Sciences Directorate, picked up several pointers that will help in her efforts to develop materials design algorithms for large-scale systems. “I’m a mechanical engineer, so I didn’t have much background in computer science or computer architectures,” Wang says. “I came here to get an introductory view of what HPC is about and to learn the tools I need to program for HPC systems.”

In her first year as ATPESC program director, Marta García, a computational scientist at the Argonne Leadership Computing Facility (ALCF), continued the program’s mission to help grow the HPC user community by filling a gap in the training most computational scientists receive early in their careers. (The ALCF is a U.S. Department of Energy Office of Science User Facility.)

“Supercomputers are extremely powerful research tools for a wide range of science domains, but using them efficiently requires a unique skill set,” García says. “With ATPESC, we aim to touch on all of the key skills and approaches a researcher needs to take advantage of the world’s most powerful computing systems.”

ATPESC achieves this goal with a broad curriculum organized around seven core program tracks: hardware architectures; programming models and languages; numerical algorithms and FASTMath; software engineering/community codes; visualization and data analysis; toolkits and frameworks; and data-intensive computing and I/O.

In addition, participants were provided access to hundreds of thousands of cores of computing power on some of today’s most powerful supercomputing resources, including the ALCF’s Mira and Vesta systems, the Oak Ridge Leadership Computing Facility’s Titan system, and the National Energy Research Scientific Computing Center’s Cori and Edison systems.

The motivated and engaged group of participants received instruction from an impressive roster of lecturers. In his third year as an ATPESC lecturer, Sean Couch, a computational physicist and a professor at Michigan State University, presented a talk on how community codes impact astrophysics research.

“There’s an enormous lack of rigorous and professional training at this scale for students and postdocs, so we’re often learning this on the job, figuring these things out on the fly while we’re trying to do our science,” he says. “To have a formal program where you can get trained to do these things and do them right is critical and special.”

Another important aspect of ATPESC is the opportunity to get involved in the HPC community. Program participants from fields as diverse as physics, chemistry, materials science, computational fluid dynamics, climate modeling, and biology found common ground in their shared interest in using supercomputers to advance their research.

Some of the participants also came from industry. Joshua Strodtbeck of Convergent Science in Madison, Wisconsin, works on a computational fluid dynamics tool to simulate engine-related processes. He says the program introduced him to the vast array of available tools and libraries that can help accelerate efforts to scale and optimize code for extreme-scale systems. “I came here to learn from the best about the tools and methods we can use to get high-performance, highly scalable HPC code, what kinds of traps and problems and bottlenecks you’ll run into on your way there, and how to work around them. And I feel like I’ve learned that,” Strodtbeck says.

ATPESC is funded by the Exascale Computing Project, which is supported by the DOE Office of Science’s Advanced Scientific Computing Research Program.

For more information on ATPESC, visit http://extremecomputingtraining.anl.gov.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state, and municipal agencies to help them solve their specific problems, advance America’s scientific leadership, and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.