Beyond data and model parallelism for deep neural networks
The Morning Paper
JUNE 11, 2019
Beyond data and model parallelism for deep neural networks, Jia et al., SysML'19

The goal here is to reduce the training times of DNNs by finding efficient parallel execution strategies. Even including its search time, FlexFlow is able to increase training throughput by up to 3.3x.

Expanding the search space