Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability
Free access | ArXiv | 2016
Balancing the Communication Load of Asynchronously Parallelized Machine Learning Algorithms
Free access | ArXiv | 2015
Asynchronous parallel stochastic gradient descent: A numeric core for scalable distributed machine learning algorithms
Free access | Fraunhofer Publica | 2015
Balancing the Communication Load of Asynchronously Parallelized Machine Learning Algorithms
Free access | Fraunhofer Publica | 2015
Using GPI-2 for distributed memory parallelization of the Caffe toolbox to speed up deep neural network training
Free access | Fraunhofer Publica | 2017