Project

HP-DLF

High Performance Deep Learning Framework

  • Duration:
  • Application fields
    Other

The goal of HP-DLF is to give researchers and developers in the deep-learning domain easy access to current and future high-performance computing systems. To this end, a new software framework is being developed that automates the highly complex parallel training of large neural networks on heterogeneous computing clusters. The focus is on scalability and energy efficiency, as well as high portability and user transparency. The aim is to scale the training of networks designed in existing frameworks, without additional user effort, across a three-digit number of compute nodes.
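HP-DLF's own interfaces are not shown on this page. Purely as an illustration of the pattern such a framework automates, here is a minimal pure-Python sketch of synchronous data-parallel training: each worker computes a gradient on its data shard, the gradients are averaged (the role of an MPI/NCCL all-reduce on a real cluster), and every worker applies the identical update. The model, data, and learning rate are invented for this example.

```python
def local_gradient(w, shard):
    """Gradient of mean squared error for a 1-D linear model y = w * x."""
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

def all_reduce_mean(grads):
    """Average gradients across workers (stand-in for a cluster all-reduce)."""
    return sum(grads) / len(grads)

def train(shards, w=0.0, lr=0.1, steps=50):
    """Synchronous data-parallel SGD over simulated workers."""
    for _ in range(steps):
        # On a real cluster these gradients are computed in parallel,
        # one per compute node, each on its own shard of the data.
        grads = [local_gradient(w, s) for s in shards]
        w -= lr * all_reduce_mean(grads)
    return w

# Data generated from y = 3 * x, split round-robin over 4 simulated workers.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]]
shards = [data[i::4] for i in range(4)]
w = train(shards)
print(round(w, 3))  # converges to the true weight 3.0
```

Because every worker applies the same averaged gradient, the replicas stay in lockstep; the hard part a framework must automate is doing this efficiently for large models on heterogeneous hardware.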

Partners

  • Fraunhofer-Institut für Techno- und Wirtschaftsmathematik (ITWM, consortium lead)
  • Technische Universität Dresden, Zentrum für Informationsdienste und Hochleistungsrechnen (ZIH)
  • Universität Heidelberg, Visual Learning Lab (VLL)
  • Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI)

Sponsors

BMBF - Federal Ministry of Education and Research

Publications about the project

Manuela Schuler; Richard Membarth; Philipp Slusallek

In: David Kaeli (Ed.). ACM Transactions on Architecture and Code Optimization (TACO), Vol. 20, No. 1, Pages 17:1-17:25, ACM, 12/2022.

Puya Amiri; Arsène Pérard-Gayot; Richard Membarth; Philipp Slusallek; Roland Leißa; Sebastian Hack

In: Proceedings of the 2021 International Conference on Field Programmable Technology (ICFPT). International Conference on Field Programmable Technology (FPT-2021), December 6-10, Auckland, New Zealand, Pages 1-9, IEEE, 12/2021.

Rafael Ravedutti Lucio Machado; Jonas Schmitt; Sebastian Eibl; Jan Eitzinger; Roland Leißa; Sebastian Hack; Arsène Pérard-Gayot; Richard Membarth; Harald Köstler

In: Journal of Computational Science (JOCS), Vol. 54, No. 101425, Pages 1-11, Elsevier, 7/2021.
