Call for Participation:

Tutorial on Accelerating Big Data Processing and Associated Deep Learning on Modern Datacenters

Early Registration Deadline
May 24, 2019

Tutorial on Accelerating Big Data Processing and Associated Deep Learning on Modern Datacenters
in conjunction with ISCA 2019
Phoenix, Arizona, USA
June 23, 2019

The convergence of HPC, Big Data, and Deep Learning is becoming the next game-changing business opportunity. Apache Hadoop, Spark, gRPC/TensorFlow, Kafka, and Memcached are becoming standard building blocks for Big Data processing and mining. Modern HPC bare-metal systems and data center platforms have been fueled by advances in multi-/many-core architectures, RDMA-enabled networking, NVRAMs, and NVMe-SSDs during the last decade. However, Big Data and Deep Learning middleware (such as Hadoop, Spark, Kafka, and gRPC) have not fully embraced these technologies. Recent studies have shown that the default designs of these components cannot efficiently leverage the features of modern HPC clusters, such as Remote Direct Memory Access (RDMA)-enabled high-performance interconnects, high-throughput parallel storage systems (e.g., Lustre), and Non-Volatile Memory (NVM).

In this tutorial, we will provide an in-depth overview of the architecture of Hadoop, Spark, Kafka, gRPC/TensorFlow, and Memcached. We will examine the challenges in re-designing the networking and I/O components of these middleware with modern interconnects, protocols (such as InfiniBand and RoCE), and storage architectures. Using the publicly available software packages from the High-Performance Big Data (HiBD) project, we will provide case studies of the new designs for several Hadoop/Spark/Kafka/gRPC/TensorFlow/Memcached components and their associated benefits. Through these, we will also examine the interplay between high-performance interconnects, storage (HDD, NVM, and SSD), and multi-core platforms (e.g., Xeon x86, OpenPOWER) to achieve the best solutions for these components and applications on modern HPC clusters and data center systems. We will also present in-depth case studies with modern Deep Learning tools (e.g., Caffe, TensorFlow, DL4J, BigDL) running over RDMA-enabled Hadoop, Spark, and gRPC.