July 31, 2018
The MLPerf benchmark suite aims to provide a consistent way to measure the performance of machine learning hardware platforms, software frameworks, and cloud services. The effort is supported by AMD, Baidu, Cerebras, Google, Intel, SambaNova, and Wave Computing, and by researchers from Harvard University, Stanford University, the University of California, the University of Minnesota, and the University of Toronto.

The initial version includes seven training benchmarks, with datasets and models drawn from major application areas: image classification, object detection, speech-to-text, translation, recommendation, sentiment analysis, and reinforcement learning. The first result submission deadline is July 31st.

Because the field is changing rapidly, we expect to follow a “launch and iterate” approach. We welcome community involvement in methodology, benchmark selection, benchmarking inference, and ensuring that results on different systems are comparable.