Downtown LA became a little more vibrant in the first week of June, as computer architects from all over the world descended in numbers on the brand-new InterContinental Hotel, with its funky elevator design, for ISCA’18. The enthusiasm was palpable: apart from an exciting program, ISCA this year hosted the highly-anticipated 2017 Turing Award Lecture by John Hennessy and David Patterson, the first Turing Award Laureates in computer architecture since Chuck Thacker in 2009.
In typical ISCA fashion, the weekend preceding the conference was dedicated to workshops and tutorials, with an unusually diverse selection this year ranging from the inescapable architectures for machine learning and hardware acceleration to open-source hardware, security, big data and datacenter design, all the way to recent advances in quantum computing.
Special mention should go to the extremely timely and highly-attended Bias Busting Workshop, spearheaded by Google and CMU, sponsored by Google, and hosted for the first time at ISCA. The workshop highlighted the importance of diversity in the computer architecture community, showed how implicit and explicit biases hinder it, and provided guidelines for those interested in bringing similar Bias Busting programs to their institutions. A groundbreaking event, and hopefully the first of many! In a similar spirit, ISCA again hosted – after ASPLOS earlier in the year – the CARES program, with several high-profile members of the community serving as contact persons for reporting cases of discrimination and harassment at the conference. Many thanks to Joel Emer, Margaret Martonosi, and Lieven Eeckhout for their service, and for raising awareness of an important issue in the field!
An ISCA of Firsts
The main conference opened on Monday morning with welcome remarks by Murali Annavaram and Timothy Pinkston of USC, who bore the weight of the organization, and Babak Falsafi, who headed the paper reviewing process as Program Chair. It was an ISCA of records. The final registration tally was nearly 800 attendees, a significant boost from ISCA 2016’s previous record of 717 registrations. A similar picture was reflected in paper submissions, with 378 papers submitted in total, a 17% increase from last year, and 64 papers accepted, solidifying the Turing Laureates’ message later in the day that a renaissance in computer architecture is (should be?) coming.
Kim Hazelwood opened the technical part of the conference, delivering an exciting first keynote on applied machine learning at the scale of Facebook. Hazelwood went over the much-loved laws of computer architecture, with Moore’s and Amdahl’s Laws at the forefront, and added a couple of newer ones to the collection, including LeCun’s “law” on the all-too-familiar exponential growth of deep learning. Highlighting that deep learning runs on very diverse systems, from mobile devices to large-scale clouds, she underlined that computer architects often fall into the trap of optimizing only for the compute side of machine learning, ignoring the implications for storage and networking. While the exact breakdown of opportunity between the three depends on the system and workload at hand, Hazelwood emphasized the importance of targeting end-to-end solutions, including tools, compilers, and software design, when optimizing for machine learning.
Hazelwood further discussed Facebook’s unique approach to building cloud servers, using specialized configurations for different major services based on each application’s resource requirements, instead of the sea-of-resources model popular in other cloud infrastructures, as well as how GPUs have been integrated into their systems over the past 4-5 years. She highlighted that when designing systems for machine learning, researchers often forget where these applications run, showing a breakdown of end users’ mobile devices and network connectivity capabilities (including the eye-opening fact that half of the mobile devices Facebook runs on are 2012-era phones). Finally, Hazelwood gave a nod to the popular open-source tools that Facebook has released, including PyTorch and Caffe2, as well as its contributions to the Open Compute Project.
Then it was off to the technical sessions, which for the first day included presentations on cloud computing and hardware acceleration, virtualization, and memory systems, among others.
The Turing Lecture
The capstone of the first day was by all accounts the Turing Award Lecture. The award was presented to John Hennessy and David Patterson for their pioneering research in creating a faster and lower-power “reduced instruction set computer” (RISC) approach, which today underpins 99% of the more than 16 billion microprocessors produced annually, including those found in mobile devices. Patterson and Hennessy took to the stage in tag-team fashion, with Patterson first taking us on a trip down memory lane revisiting how the RISC/MIPS work came to be, while Hennessy looked to the challenges of the future, highlighting that the next wave of performance, efficiency, and security gains will come from hardware-software co-design and domain-specific architectures. Finally, Patterson emphasized the need for free and open architectures and open-source hardware implementations, mirroring the open-source model that has propelled software development to extreme popularity in the past few years.
Day 2 opened with a keynote by Kunle Olukotun on the need for a vertical design approach for machine learning systems, spanning algorithm design, domain-specific languages, and reconfigurable hardware. On the algorithmic side, Olukotun showed the benefits of parallelization for popular ML techniques like stochastic gradient descent (SGD), as well as the ability of SGD to tolerate relaxed consistency, reduced communication, and relaxed precision, all of which greatly simplify system design. Olukotun also described prototypes of domain-specific languages (DSLs) that aid programmability for machine learning, among other applications, and abstract much of the system complexity away from the end – often non-expert – programmer. Finally, on the hardware front, he discussed the need for reconfigurable accelerators that can cope with the changing nature of ML algorithms, and that use advanced compilation techniques to optimize parameter tuning, allowing programmers to instead focus on high-level concepts such as dataflow.
The day continued with a set of technical sessions, which, as is customary, were shorter than Day 1’s to allow for ample – off-site – frivolity around LA, first atop the tourist-staple open buses for LA first-timers and veterans alike, and then at the Dorothy Chandler Pavilion, part of the LA Music Center complex, for the conference banquet. The reception was topped off with a performance by USC graduate soprano Monica Beal, and a short but decisive guest appearance by the USC marching band.
After the fun of the previous night, it was up to Doug Burger to take everyone back to the technical content. He delivered a particularly insightful talk on the secret life of TRIPS since its seeming completion at UT Austin back in the mid-00s. Burger highlighted the need for clean new abstractions that can exploit fine-grain parallelism at low energy cost, and discussed how extensions to the original TRIPS architecture enable energy-efficient single-thread execution compared to traditional out-of-order designs.
Burger was joined on stage by Aaron Smith of Microsoft, who headed the system’s compiler design, and Greg Wright, Senior Director of Engineering at Qualcomm, who talked about the hardware challenges of this endeavor and showed early, competitive results.
The last of the technical sessions – two on machine learning, plus GPUs and interconnection networks – closed off the conference’s third day and brought another ISCA to an end.
Before renewing our date for ISCA’19, we’d be remiss not to mention the several, and in some cases groundbreaking, awards presented during this year’s conference. While the Turing Award took center stage, the Awards Lunch on Day 2 of the meeting highlighted contributions by those in both the early and later stages of their careers. Aasheesh Kolli from the University of Michigan won the SIGARCH/IEEE CS TCCA Outstanding Dissertation Award, while Matt Sinclair and Yuhao Zhu received honorable mentions. Hadi Esmaeilzadeh from UCSD received the Young Computer Architect Award, Reetu Das from the University of Michigan won the CRA-W Borg Early Career Award, and Gabe Loh received the prestigious Maurice Wilkes Award for his contributions to the advancement of die-stacked architectures. Kevin Skadron, Mircea Stan, Karthik Sankaranarayanan, Wei Huang, Sivakumar Velusamy, and David Tarjan received the ISCA Most Influential Paper Award for their 2003 paper “Temperature-aware microarchitecture: Modeling and implementation”; the award will be formally presented at next year’s ISCA, when the authors are present. An especially moving moment was dedicated to those the architecture community lost this year, namely Nathan Binkert and Burton Smith, as was the touching speech by May Berenbaum, sister of Alan Berenbaum, when she presented Koen De Bosschere with the ACM SIGARCH Alan D. Berenbaum Distinguished Service Award, named in honor of her late brother.
Finally, and for the first time in ISCA’s history, the Eckert-Mauchly Award, the most prestigious award in the computer architecture community, went to a female recipient: Susan Eggers, professor emerita at the University of Washington, for her outstanding contributions to simultaneous multithreaded processor architectures and multiprocessor sharing and coherency. Eggers gave a stirring speech on the challenges and opportunities she faced in her decades-long career, her unconventional start in the field of computer architecture, as well as her newfound gardening-related joys of retirement.
Until next time
ISCA’19, as part of FCRC, will take place in Arizona! General Chair Bobbie Manne and the organizing committee are already hard at work to keep the high bar set by this year’s conference. In the meantime, and in the words of Hennessy and Patterson, “go forth and do great things, nobody said it was easy!”
Who would disagree?
About the Author: Christina Delimitrou is an Assistant Professor in Electrical and Computer Engineering at Cornell University.