Valladolid, March 2021

Conference topics:

  • Parallel Computing
  • Distributed and Network-Based Computing
  • Big Data
  • Programming Models and Tools
  • Concurrent Algorithms
  • Advanced Algorithms and Applications

Keynote talks

 

Tools and Techniques for Driving Performance, Portability, and Productivity

John Pennycook, Intel Corporation.

John Pennycook is a Software Enabling and Optimization Architect at Intel Corporation. His research is focused on improving application performance portability and programmer productivity. He received a Ph.D. in computer science from the University of Warwick in 2013.

Abstract:

The growing diversity of hardware solutions is increasing the complexity of software development. Configuring and building an application for multiple architecture types (e.g., CPUs, GPUs, FPGAs), from multiple hardware vendors, using different combinations of compilers and libraries is already a formidable undertaking -- ensuring that the application meets performance targets without sacrificing portability, while keeping development and maintenance costs manageable, poses an even greater challenge. How can developers manage these conflicting goals, and successfully balance the trade-offs between performance, portability, and productivity?

This presentation outlines a methodology for quantifying, understanding, and visualizing these "three Ps" using objective metrics and associated tooling. We explore how this methodology can inform the design of applications, libraries, and programming languages, drawing on experience from real-life case studies. By tracking the average performance efficiency achieved across the platforms of interest (i.e., "performance portability") alongside the amount of code specialized for each platform (i.e., "code divergence", an approximation of maintenance cost and programmer productivity), we can gain deeper insight into the current state of software and how to improve it.
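To make these two metrics concrete, the C++ sketch below (our illustration, not Intel's tooling; all names and conventions are hypothetical) computes performance portability as the harmonic mean of per-platform performance efficiencies and approximates code divergence as the mean pairwise Jaccard distance between the sets of source files used on each platform. It assumes efficiencies are reported as fractions in (0, 1] of some per-platform baseline.

    // Minimal sketch of the "three Ps" metrics discussed above
    // (illustrative names; not taken from any official tool).
    #include <cstdio>
    #include <set>
    #include <string>
    #include <vector>

    // Performance portability: harmonic mean of per-platform efficiencies,
    // defined as 0 if the application fails to run on any platform.
    double perf_portability(const std::vector<double>& eff) {
        double sum = 0.0;
        for (double e : eff) {
            if (e <= 0.0) return 0.0;  // unsupported platform => PP = 0
            sum += 1.0 / e;
        }
        return eff.size() / sum;
    }

    // Jaccard distance between the code used on two platforms:
    // 0 = identical source, 1 = completely disjoint.
    double jaccard_distance(const std::set<std::string>& a,
                            const std::set<std::string>& b) {
        std::size_t common = 0;
        for (const auto& item : a) common += b.count(item);
        std::size_t uni = a.size() + b.size() - common;
        return uni == 0 ? 0.0 : 1.0 - double(common) / double(uni);
    }

    // Code divergence: mean pairwise distance over all platform pairs.
    double code_divergence(const std::vector<std::set<std::string>>& code) {
        double total = 0.0;
        std::size_t pairs = 0;
        for (std::size_t i = 0; i < code.size(); ++i)
            for (std::size_t j = i + 1; j < code.size(); ++j) {
                total += jaccard_distance(code[i], code[j]);
                ++pairs;
            }
        return pairs == 0 ? 0.0 : total / pairs;
    }

    int main() {
        // Hypothetical app: 80% efficient on a CPU, 60% on a GPU.
        std::printf("PP = %.3f\n", perf_portability({0.8, 0.6}));
        // Two platform variants sharing one of their two source files.
        std::printf("CD = %.3f\n",
                    code_divergence({{"kernel.cpp", "cpu.cpp"},
                                     {"kernel.cpp", "gpu.cu"}}));
    }

Note how the harmonic mean penalizes the weaker platform: a single poorly supported platform drags performance portability toward zero, matching the intuition that an application is only as portable as its worst port.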

 

Accelerating HPC codes on FPGAs: Will 2022 be the breakthrough year?

Nicholas Brown, The University of Edinburgh

Dr Nick Brown is a Research Fellow at EPCC, the University of Edinburgh, with interests in HPC application development, novel heterogeneous architectures, data science, programming language design, and compilers. He is involved in running the UK's FPGA testbed system, which aims to encourage HPC developers to experiment with FPGAs for their scientific and engineering workloads. Nick is a course organizer on EPCC's MSc courses in HPC and data science, as well as supervising MSc and PhD students.

Abstract:

Scientists and engineers increasingly demand the ability to model larger, more complex simulations at reduced time to solution. This drives much of the continued development of High Performance Computing (HPC), and an important aspect of it is exploring the role that hardware technologies hitherto less commonly used in HPC might play in the future. Field Programmable Gate Arrays (FPGAs) are one such technology: because the electronics are tailored to the code, bypassing the general-purpose micro-architecture of CPUs and GPUs, one can organise aspects such as the logic and cache memory to entirely suit what is being executed. This can provide increased flexibility to developers and enable them to address bottlenecks present in their code on other architectures. FPGAs have not yet gained popularity in HPC, but in the past couple of years vendors have made massive investments in FPGA hardware and software ecosystems, making them a much more attractive choice than ever before and worth reconsidering.

In this talk I will use real-world HPC applications and kernels to describe the role we see for FPGAs complementing other hardware technologies in future supercomputers. Using these examples, I will explore the challenges and opportunities faced by software developers when targeting their HPC codes at FPGAs, and the key dataflow algorithmic structures that must be exploited in order to gain good performance.
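As a flavour of the kind of dataflow structure meant here, the C++ sketch below (our illustration, not taken from the talk) restructures a 3-point stencil so that data streams through a small shift register and one result is produced per input element. In an HLS flow, each such loop would typically carry a pipeline directive (e.g. #pragma HLS PIPELINE II=1 in Vitis HLS) so that one iteration issues per clock cycle.

    #include <cstdio>
    #include <vector>

    // 3-point moving-average stencil written in streaming/dataflow style:
    // no random access into a large array, just a tiny window of state.
    std::vector<float> stencil3(const std::vector<float>& in) {
        std::vector<float> out(in.size(), 0.0f);
        // Shift register holding the last three stream elements; on an
        // FPGA this becomes a handful of registers, not cache traffic.
        float window[3] = {0.0f, 0.0f, 0.0f};
        for (std::size_t i = 0; i < in.size(); ++i) {
            window[0] = window[1];
            window[1] = window[2];
            window[2] = in[i];  // one new element streams in per step
            if (i >= 2)         // window full: emit one result per step
                out[i - 1] = (window[0] + window[1] + window[2]) / 3.0f;
        }
        return out;
    }

    int main() {
        for (float v : stencil3({1, 2, 3, 4, 5}))
            std::printf("%.2f ", v);  // prints: 0.00 2.00 3.00 4.00 0.00
        std::printf("\n");
    }

The point of the restructuring is that each datum is touched exactly once as it flows past, which is what lets the FPGA tool chain build a deep, fully occupied pipeline.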

 

Congestion in High-Performance Interconnection Networks of the Exascale era: Impact and Solutions

Jesús Escudero-Sahuquillo, Pedro J. García, Universidad de Castilla-La Mancha.

Jesús Escudero-Sahuquillo is an Associate Professor at the Computing Systems Department (DSI) of the Universidad de Castilla-La Mancha (UCLM), Spain. In 2011, he received the PhD degree in Computer Science from UCLM. His research is focused on high-performance interconnection networks for HPC and Datacenter systems, network topologies, routing algorithms, congestion management, and simulation tools. He has published over 45 peer-reviewed papers in international journals and conferences. He has participated in research projects funded by Spanish and European institutions, and in R&D agreements with different companies. He has served as a program committee member, guest editor, and reviewer for several conferences, such as HiPINEB, ICPP, CCGrid, and HotI, and journals, such as TPDS, IEEE Micro, and JPDC.

Pedro J. García, PhD in Computer Science, is currently an Associate Professor at the Universidad de Castilla-La Mancha (UCLM), Spain. His research focuses mainly on high-performance interconnection networks for HPC and Datacenter systems, especially congestion management schemes and routing algorithms. He has published more than 70 refereed papers in ranked journals and conferences and has supervised 5 doctoral theses. He has coordinated 5 research projects funded by public bodies (from the EU, Spain, and Castilla-La Mancha) and 6 R&D agreements between UCLM and different companies, and has participated in 40 other research projects. He has organized several international conferences and workshops, and has also been a guest editor of several journals.


Abstract:

Congestion is a common phenomenon in the interconnection networks of HPC systems and Datacenters, appearing mainly under high traffic loads and/or under traffic patterns that lead to oversubscribed links or destinations (hot spots). In these scenarios, several network ports are likely to become clogged, thus hindering traffic flow and eventually degrading network performance. This problem is especially worrying in the systems of the Exascale era, where a huge number of computing and storage nodes (on the order of tens or hundreds of thousands) must be interconnected through a network that must perform optimally to support the demanding communication requirements of the system, even under congestion scenarios. This keynote offers an overview of the dynamics of congestion in current interconnection networks and the main negative effects derived from congestion situations. It then analyzes the main approaches followed by solutions proposed to avoid, reduce, or eliminate network congestion and/or its negative effects, as well as the suitability of such solutions for modern HPC systems and Datacenters.
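To illustrate how quickly a hot spot turns into congestion, the toy discrete-time model below (our illustration, not from the keynote) injects packets from several sources toward a single output port that can drain only one packet per cycle. The shared queue grows without bound; in a real lossless network this occupancy would back-pressure upstream switches and delay unrelated traffic queued behind it (head-of-line blocking), forming a congestion tree.

    #include <cstdio>

    int main() {
        const int sources = 4;     // flows all targeting the same port
        const int drain_rate = 1;  // packets the port can serve per cycle
        int queue = 0;             // occupancy of the shared queue
        for (int cycle = 1; cycle <= 5; ++cycle) {
            queue += sources;                  // arrivals this cycle
            queue -= (queue < drain_rate) ? queue : drain_rate;  // service
            std::printf("cycle %d: queue = %d packets\n", cycle, queue);
        }
        // With 4:1 oversubscription the queue grows by 3 packets every
        // cycle. Congestion-management schemes try to detect this growth
        // early (e.g. via queue-occupancy thresholds) and then throttle
        // or isolate the offending flows before the backlog spreads.
    }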