Benchmarking Contemporary Deep Learning Hardware and Frameworks: A Survey of Qualitative Metrics
Wei Dai and Daniel Berleant
arXiv e-Print archive, 2019
Keywords:
cs.DC, cs.LG, cs.PF
First published: 2019/07/05
Abstract: This paper surveys benchmarking principles, machine learning devices
including GPUs, FPGAs, and ASICs, and deep learning software frameworks. It
also reviews these technologies with respect to benchmarking from the
perspectives of a 6-metric approach to frameworks and an 11-metric approach to
hardware platforms. Because MLPerf is a benchmark organization working with
industry and academia, and offering deep learning benchmarks that evaluate
training and inference on deep learning hardware devices, the survey also
mentions MLPerf benchmark results, benchmark metrics, datasets, deep learning
frameworks and algorithms. We summarize seven benchmarking principles,
differential characteristics of mainstream AI devices, and qualitative
comparison of deep learning hardware and frameworks.
Benchmarking Deep Learning Hardware and Frameworks: Qualitative Metrics
Previous papers on benchmarking deep neural networks offer knowledge of deep learning hardware devices and software frameworks. This paper introduces benchmarking principles, surveys machine learning devices including GPUs, FPGAs, and ASICs, and reviews deep learning software frameworks. It also qualitatively compares these technologies with respect to benchmarking, from the perspectives of our 7-metric approach to deep learning frameworks and our 12-metric approach to machine learning hardware platforms.
After reading this paper, the audience will understand seven benchmarking principles, know the differential characteristics of mainstream artificial intelligence devices, be able to qualitatively compare deep learning hardware through the 12-metric approach for benchmarking neural network hardware, and see benchmarking results for 16 deep learning frameworks via our 7-metric set for benchmarking frameworks.