
Publication

MACSQ: Massively Accelerated DeepQ Learning on GPUs using on-the-fly State Construction

Marcel Köster; Julian Groß; Antonio Krüger
In: Parallel and Distributed Computing, Applications and Technologies. 22nd International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT-2021), December 17-19, Guangzhou, China. Springer, 2022.

Abstract

The current trend of using artificial neural networks to solve computationally intensive problems is omnipresent. In this scope, DeepQ learning is a common choice for agent-based problems. DeepQ combines the concept of Q-Learning with (deep) neural networks to learn different Q-values/matrices based on environmental conditions. Unfortunately, DeepQ learning requires hundreds of thousands of iterations/Q-samples that must be generated and learned for large-scale problems. Gathering datasets for such challenging tasks is extremely time-consuming and requires large data-storage containers. Consequently, a common solution is the automatic generation of input samples for agent-based DeepQ networks. However, the usual workflow creates the samples separately from the training process, either in a (set of) pre-processing step(s) or interleaved with training. This requires the input Q-samples to be materialized in order to be fed into the training step of the attached neural network. In this paper, we propose a new GPU-focused method for on-the-fly generation of training samples tightly coupled with the training process itself. This allows us to skip the materialization of all samples (e.g., avoid dumping them to disk), as they are (re)constructed when needed. Our method significantly outperforms usual workflows that generate the input samples on the CPU in terms of runtime performance and memory/storage consumption.
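To make the core idea concrete, here is a minimal CUDA sketch of on-the-fly state construction; it is not the authors' implementation. It assumes a deterministic toy environment in which each training state can be reconstructed from a compact descriptor (a sample index plus a seed), so no sample buffer is ever materialized in global memory or on disk. The names `construct_state` and `qlearn_on_the_fly`, the integer hash, and the placeholder Q computation are all hypothetical stand-ins for the paper's actual environment and network.

```cuda
// Minimal sketch (not the authors' implementation): each thread reconstructs
// its training state on the fly from a compact descriptor instead of loading
// a materialized sample from a pre-generated dataset.
#include <cstdio>
#include <cuda_runtime.h>

#define STATE_DIM 4          // hypothetical state dimensionality
#define NUM_SAMPLES 1024     // hypothetical batch size

// Hypothetical deterministic environment: derives the state vector for a
// given sample index when needed, so no sample buffer is ever stored.
__device__ void construct_state(unsigned idx, unsigned seed, float *state) {
    unsigned h = idx * 2654435761u ^ seed;           // cheap integer hash
    for (int d = 0; d < STATE_DIM; ++d) {
        h = h * 1664525u + 1013904223u;              // LCG step per dimension
        state[d] = (h >> 8) * (1.0f / 16777216.0f);  // map to [0, 1)
    }
}

// Each thread builds its state in local memory and immediately computes a
// placeholder Q-target, mimicking tight coupling with the training step.
__global__ void qlearn_on_the_fly(unsigned seed, float *q_targets, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float state[STATE_DIM];
    construct_state(i, seed, state);
    float q = 0.0f;
    for (int d = 0; d < STATE_DIM; ++d)
        q += state[d];                 // stand-in for a network forward pass
    q_targets[i] = q / STATE_DIM;
}

int main() {
    float *d_q;
    cudaMalloc(&d_q, NUM_SAMPLES * sizeof(float));
    qlearn_on_the_fly<<<(NUM_SAMPLES + 255) / 256, 256>>>(42u, d_q, NUM_SAMPLES);
    cudaDeviceSynchronize();

    float h_q[NUM_SAMPLES];
    cudaMemcpy(h_q, d_q, sizeof(h_q), cudaMemcpyDeviceToHost);
    printf("q_target[0] = %f\n", h_q[0]);
    cudaFree(d_q);
    return 0;
}
```

The design point this illustrates is the memory trade-off claimed in the abstract: because states are a pure function of their descriptor, they can be (re)constructed in registers per thread at each training iteration, which removes the storage cost of materializing hundreds of thousands of Q-samples up front.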

Projects