Call for Papers:

FastPath: Workshop on Performance Analysis of Machine Learning Systems

FastPath: International Workshop on Performance Analysis of Machine Learning Systems
in conjunction with ISPASS 2019
Madison, Wisconsin, USA
March 24, 2019

Submission: February 8, 2019
Notification: February 22, 2019
Final Materials / Workshop: March 24, 2019

FastPath 2019 brings together researchers and practitioners involved in cross-stack hardware/software performance analysis, modeling, and evaluation for efficient machine learning systems. Machine learning demands tremendous amounts of computing. Today's machine learning systems are diverse, spanning cellphones, high-performance computing systems, database systems, self-driving cars, robotics, and in-home appliances. Many machine learning systems rely on customized hardware and/or software. The types and components of such systems vary, but a partial list includes traditional CPUs assisted by accelerators (ASICs, FPGAs, GPUs), memory accelerators, I/O accelerators, hybrid systems, converged infrastructure, and IT appliances. Designing efficient machine learning systems poses several challenges.

These include distributed training on big data, hyper-parameter tuning for models, emerging accelerators, fast I/O for random inputs, approximate computing for training and inference, programming models for diverse machine-learning workloads, high-bandwidth interconnects, efficient mapping of processing logic onto hardware, and cross-stack performance optimization. Emerging infrastructure supporting big data analytics, cognitive computing, large-scale machine learning, mobile computing, and the internet of things exemplifies system designs optimized for machine learning at large.

FastPath seeks to facilitate the exchange of ideas on performance optimization of machine learning/AI systems and seeks papers on a wide range of topics including, but not limited to:
– Workload characterization, performance modeling and profiling of machine learning applications
– GPUs, FPGAs, ASIC accelerators
– Memory, I/O, storage, network accelerators
– Hardware/software co-design
– Efficient machine learning algorithms
– Approximate computing in machine learning
– Power/energy efficiency and learning acceleration
– Software, libraries, and runtimes for machine learning systems
– Workload scheduling and orchestration
– Machine learning in cloud systems
– Large-scale machine learning systems
– Emerging intelligent/cognitive systems
– Converged/integrated infrastructure
– Machine learning systems for specific domains, e.g., financial, biological, education, commerce, healthcare

Prospective authors must submit a 2-4 page extended abstract electronically at:

Authors of selected abstracts will be invited to give a 30-min presentation at the workshop.

Sarah Bird, Facebook
Mark D. Hill, University of Wisconsin-Madison
Tushar Krishna, Georgia Tech
Peter Mattson, Google
Vijay Janapa Reddi, Harvard
Michael Stern, IBM / Weather Company

General Chair:
Erik Altman

Program Committee Chairs:
Zehra Sura
Parijat Dube

Program Committee:
Christophe Dubach, University of Edinburgh
Lieven Eeckhout, Ghent University
David Gregg, Trinity College Dublin
Omer Khan, University of Connecticut
Andrew Mundy, ARM Research
Michael Papamichael, Microsoft
Ana Varbanescu, TU Delft