
Call for Participation: ASPLOS 2013 Tutorial — Using Queuing Theory to Model Data Center Systems

Start:
March 17, 2013 1:30 pm
End:
March 17, 2013 5:00 pm
Venue:
ASPLOS, Houston, TX

Submitted by Thomas Wenisch
http://www.eecs.umich.edu/BigHouse/

Using Queuing Theory to Model Data Center Systems
Tutorial with ASPLOS 2013
3/17 1:30-5:00pm

David Meisner, Facebook, meisner@fb.com
Mor Harchol-Balter, Carnegie Mellon University, harchol@cs.cmu.edu
Thomas Wenisch, University of Michigan, twenisch@umich.edu

Recently, there has been explosive growth in Internet services, greatly
increasing the importance of data center systems. Applications served from
"the cloud" are driving data center growth and quickly overtaking traditional
workstations. Although there are many analytic and simulation tools for
evaluating components of desktop and server architectures in detail, scalable
modeling tools for data-center-scale systems are noticeably missing.

We believe that stochastic methods and queueing theory together provide an
avenue to answer important questions about data center systems. In the first
half of this tutorial, we present a crash-course (or perhaps, a refresher for
some) on the essential elements of queueing theory with particular applications
to modeling data center systems. We also illustrate how queueing theory can be
used to solve problems related to the design and analysis of computer systems.
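As a flavor of the kind of analysis such methods enable, the classic M/M/1 queue has a closed-form mean response time, 1/(μ − λ). The sketch below is not taken from the tutorial material; the request rates are hypothetical, chosen only to show how response time balloons as a server approaches saturation:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean time a request spends in an M/M/1 queue (waiting + service).

    arrival_rate (lambda) and service_rate (mu) are in requests/second;
    the queue is only stable when lambda < mu.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical server that can complete 1000 requests/s:
t_50 = mm1_response_time(500.0, 1000.0)   # 50% load -> 2 ms mean response
t_90 = mm1_response_time(900.0, 1000.0)   # 90% load -> 10 ms mean response
```

The 5x jump in mean response time between 50% and 90% utilization is exactly the kind of nonlinearity that makes queueing models valuable for reasoning about data center provisioning.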

In the second part of the tutorial, we describe BigHouse, a simulation
infrastructure that combines queueing theory and stochastic methods to model
data center systems. Instead of simulating servers using detailed
microarchitectural models, BigHouse raises the level of abstraction using the
tools of queueing theory, enabling simulation at 1000-server scale in less
than an hour. We include brief background on data center power modeling, a
description of the statistical methods used by BigHouse, parallelization
techniques, a tour of the simulator code, and a case study of using BigHouse
to model data center power capping.
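To make the "raised level of abstraction" concrete: a server can be simulated as a stochastic queue, drawing random interarrival and service times rather than executing instructions. The toy discrete-event loop below illustrates that idea only; it is not BigHouse's API (BigHouse's statistical machinery and multi-server models are far richer), and the rates are arbitrary:

```python
import random

def simulate_single_server(arrival_rate, service_rate, num_jobs, seed=0):
    """Toy discrete-event simulation of one exponential-server queue.

    Draws exponential interarrival and service times, tracks when the
    server frees up, and returns the mean response time over num_jobs.
    Illustrative only -- not the BigHouse simulator itself.
    """
    rng = random.Random(seed)
    now = 0.0             # arrival clock
    server_free_at = 0.0  # when the server can start the next job
    total_response = 0.0
    for _ in range(num_jobs):
        now += rng.expovariate(arrival_rate)       # next job arrives
        start = max(now, server_free_at)           # wait if server is busy
        finish = start + rng.expovariate(service_rate)
        server_free_at = finish
        total_response += finish - now             # response = finish - arrival
    return total_response / num_jobs
```

With many jobs, the simulated mean response time converges toward the analytic M/M/1 value 1/(μ − λ), which is the kind of cross-check a queueing-based simulator admits and a microarchitectural one does not.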