
Publication

TrustDDL: A Privacy-Preserving Byzantine-Robust Distributed Deep Learning Framework

René Klaus Nikiel; Meghdad Mirabi; Carsten Binnig
In: 2024 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W 2024). Workshop on Dependable and Secure Machine Learning (DSML-2024), located at DSN 2024, June 24, 2024, Brisbane, Queensland, Australia. ISBN 979-8-3503-9572-3, IEEE, 2024.

Abstract

This paper introduces TrustDDL, a distributed deep learning framework designed to address privacy and Byzantine robustness concerns across both the training and inference phases of deep learning models. The framework combines additive secret-sharing-based protocols, a commitment phase, and redundant computation to identify Byzantine parties and shield the system from their detrimental effects during both model training and inference. It ensures uninterrupted protocol execution, guaranteeing reliable output delivery in both phases. Our security analysis confirms the effectiveness of the proposed framework against both honest-but-curious and malicious adversaries for learning and inference tasks. Furthermore, we evaluate the proposed framework against existing open-source distributed machine learning frameworks, underscoring its practicality for developing and deploying distributed deep learning systems.
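As background, the following is a minimal sketch of the additive secret-sharing primitive the framework builds on: a secret is split into random shares that sum to the secret modulo a prime, so no single party's share reveals anything. The field modulus, party count, and function names here are illustrative assumptions for exposition, not the paper's actual protocol.

    import random

    PRIME = 2**61 - 1  # illustrative prime field modulus

    def share(secret: int, n_parties: int) -> list[int]:
        """Split a secret into n additive shares summing to it mod PRIME."""
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares: list[int]) -> int:
        """Recover the secret by summing all shares mod PRIME."""
        return sum(shares) % PRIME

    # Each party holds one share; any strict subset of shares is
    # uniformly random and leaks nothing about the secret.
    secret = 42
    shares = share(secret, n_parties=3)
    assert reconstruct(shares) == secret

    # Additive shares are linearly homomorphic: parties can add their
    # shares locally to obtain a sharing of the sum, with no communication.
    other = share(100, n_parties=3)
    summed = [(a + b) % PRIME for a, b in zip(shares, other)]
    assert reconstruct(summed) == (secret + 100) % PRIME

This linear homomorphism is what makes such schemes attractive for distributed deep learning, where most of the computation consists of large linear operations.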
