This is the December 1, 2020 digest of SIGARCH Messages.

In This Issue

Call for Nominations: MICRO Test of Time Award 2020
Submitted by Saugata Ghose

MICRO Test of Time (ToT) Award 2020

Nominations Due: November 1, 2020

The MICRO Test of Time (ToT) Award Committee is soliciting nominations for the seventh MICRO ToT Award. This award recognizes the most influential papers published in past MICRO conferences that have had significant impact in the field.

The award will recognize an influential MICRO paper whose influence is still felt 18-22 years after its initial publication. In other words, the award will be given to at most one paper that was published at MICRO conferences in any of the years N-22, N-21, N-20, N-19, or N-18. This year, N = 2020, so only papers published at MICRO conferences held in 1998, 1999, 2000, 2001, or 2002 are eligible. An eligible paper that has received at least 100 citations (according to Google Scholar) is automatically nominated, but explicit nominations of such papers are still encouraged.


To nominate a paper, send an email to the award committee by November 1, 2020, with the following:

  1. The title, the author list, and publication year of the nominated paper
  2. A 100-word (maximum) nomination statement, describing why the paper deserves the Test of Time Award
  3. The name, title, affiliation of the nominator, and if appropriate, the relationship of the nominator to the authors

Only one paper can be nominated in a single email, and there is a maximum of five nominations per person. You cannot nominate a paper of which you are a co-author. The award committee will select one paper from the pool of nominees as the award winner.

For more information on the nomination and selection process, a list of all eligible papers, this year’s committee members, prior award winners, and other details, please visit the award website.

Call for Participation: Machine Learning in Science & Engineering – MLSE 2020
Submitted by Alexis Avedisian

MLSE 2020 will feature the latest artificial intelligence and machine learning research that is advancing science, engineering, and technology at large. Through keynotes, conversations, demonstrations, and networking, this two-day virtual conference will explore how data-driven approaches can help solve emerging challenges. MLSE 2020 will convene 11 tracks to highlight new research and innovation from a diverse range of disciplines.

Use code MLSE2020 for 30% off tickets. Register Here.

Benefits of attending include:

  • Opportunities to engage with 100+ speakers across 35+ national and international universities and organizations. 
  • Job and research opportunity listings.
  • Networking opportunities within MLSE 2020 discussion boards and mobile app. 
  • Access to the MLSE Research Pavilion, including posters, abstracts, videos, and papers.

MLSE 2020 Tracks Include: Astronomy, Astrophysics, and Physics; Biology; Health Sciences; Chemistry, Chemical Engineering, and Materials Science; Computing Systems; Earth and Environmental Sciences; Mechanical/Civil Engineering; Methods and Algorithms; Neuroscience; Quantum; Transportation.

Hosted by The Data Science Institute at Columbia University. 

Call for Papers: tinyML 2021
Submitted by Vijay Janapa Reddi

tinyML Research Symposium 2021
New academic & commercial research symposium as part of the 3rd annual tinyML Summit
Monday, March 22, 2021

Submissions Due:  Nov 23, 2020

Tiny machine learning (tinyML) is a fast-growing field of machine learning technologies and applications, including algorithms, hardware, and software, capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below. It thereby enables a variety of always-on use cases and targets battery-operated devices. The field is gaining momentum: (i) tinyML systems are becoming “good enough” for many commercial applications, with new systems on the horizon; (ii) significant progress is being made on algorithms, networks, and models down to 100 kB and below; and (iii) initial low-power applications in vision and audio are becoming mainstream and commercially available. This momentum is demonstrated by both technical progress and ecosystem development.
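As a purely illustrative aside (not part of this call), one common enabler of the sub-100 kB model sizes mentioned above is post-training quantization. The NumPy sketch below shows a generic affine (scale plus zero-point) scheme that maps float32 weights to unsigned 8-bit values, cutting storage 4x while keeping the reconstruction error within one quantization step; all names here are hypothetical, chosen for the example.

```python
import numpy as np

def quantize_uint8(w):
    """Affine (scale + zero-point) post-training quantization of float32 weights."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    # Map each weight to an integer code in [0, 255].
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 weights from the quantized representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)   # 16 KB of float32 weights
q, scale, zp = quantize_uint8(w)
w_hat = dequantize(q, scale, zp)

print(f"{w.nbytes} B -> {q.nbytes} B")                 # 4x smaller
print(f"max error: {np.abs(w - w_hat).max():.6f} (one step = {scale:.6f})")
```

Real tinyML deployments typically add per-channel scales, quantization-aware training, and integer-only inference kernels on top of this basic idea.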

The first annual tinyML research symposium serves as a flagship venue for research at the intersection of machine learning applications, algorithms, software, and hardware in deeply embedded machine learning systems. We solicit papers from academia and industry combining cross-layer innovations across topics. Submissions must describe tinyML innovations that intersect and leverage synergy between at least two of the following subject areas:

tinyML Datasets

  • Public release of new datasets for tinyML
  • Frameworks that automate dataset development
  • Survey and analysis of existing tiny datasets that can be used for research

tinyML Applications

  • Novel applications across all fields and emerging use cases
  • Discussions about real-world use cases
  • User behavior and system-user interaction
  • Survey on practical experiences

tinyML Algorithms

  • Federated learning or stream-based active learning methods
  • Deep learning and traditional machine learning algorithms
  • Pruning, quantization, optimization methods
  • Security and privacy implications

tinyML Systems

  • Profiling tools for measuring and characterizing system performance and power
  • Solutions that involve hardware and software co-design
  • Characterization of tiny real-world embedded systems
  • In-sensor processing, design, and implementation

tinyML Software

  • Interpreters and code generator frameworks for tiny systems
  • Optimizations for efficient execution
  • Software memory optimizations
  • Neural architecture search methods

tinyML Hardware 

  • Power management, reliability, security, performance
  • Circuit and architecture design
  • Ultra-low-power memory system design
  • MCU and accelerator architecture design and evaluation

tinyML Evaluation

  • Measurement tools and techniques
  • Benchmark creation, assessment and validation
  • Evaluation and measurement of real production systems

Accepted papers will be published in the form of peer-reviewed online proceedings. An author of an accepted paper must attend the research symposium to give a presentation.


Vijay Janapa-Reddi, Harvard Univ.

Boris Murmann, Stanford Univ.



Please see the detailed CFP.

Important Dates:
  • Papers Due:  Nov 23rd, 2020
  • Author Notification: Jan 15th, 2021
  • Camera Ready: Feb 15th, 2021


Submissions will be handled through OpenReview.

Call for Papers: NAS 2021
Submitted by Ashutosh Pattnaik

The 15th IEEE International Conference on Networking, Architecture and Storage (NAS 2021)
Riverside Convention Center, CA, May 3-5, 2021

Submissions Due: Dec 6, 2020


The International Conference on Networking, Architecture, and Storage provides a high-quality international forum that brings together researchers and practitioners from academia and industry to discuss cutting-edge research on networking, high-performance computer architecture, and parallel and distributed data storage technologies.

NAS 2021 will expose participants to the most recent developments in these interdisciplinary areas. We invite submissions on a wide range of research topics, including, but not limited to:

  • Accelerator-based architectures
  • Ad hoc and sensor networks
  • Application-specific, reconfigurable or embedded architectures
  • Architecture for handheld or mobile devices
  • Architecture, networking or storage modeling and simulation methodologies
  • Big Data infrastructure
  • Big Data services and analytics
  • Cloud and grid computing
  • Cloud storage
  • Data-center scale architectures
  • Effects of circuits and emerging technology on architecture
  • Energy-aware storage
  • File systems, object-based storage
  • GPU architecture and programming
  • HW/SW co-design and trade-offs
  • Mobile and wireless networks
  • Network applications and services
  • Network architecture and protocols
  • Network information theory
  • Network modeling and measurement
  • Network security
  • Non-volatile memory technologies
  • Parallel and multi-core architectures
  • Parallel I/O
  • Power and energy efficient architectures and techniques
  • Processor, cache, memory system architectures
  • Software defined networking
  • Software defined storage
  • SSD architecture and applications
  • Storage management
  • Storage performance and scalability
  • Storage virtualization and security
  • Virtual and overlay networks


Each submission can have up to 8 pages, including all text, figures, tables, footnotes, appendices, references, etc. Authors are invited to submit previously unpublished work for possible presentation at the conference. The program committee will nominate the best papers for recognition in the three conference topic areas. All papers will be evaluated based on their novelty, fundamental insight, experimental evaluation, and potential for long-term impact; new-idea papers are encouraged. All accepted papers will be published in the IEEE digital library. Selected papers will be recommended for journal publication in extended form.

Important Dates:

  • Paper Submission Deadline: Dec 6, 2020
  • Author Notification: Feb 15, 2021
  • Camera-ready Paper: March 15, 2021

Submission Site:



Organizing Committee

General Chair:
Laxmi N. Bhuyan, University of California, Riverside

Program Co-Chairs:
Chita Das, Penn State University
Qing Yang, University of Rhode Island
Dipak Ghosal, University of California, Davis
Hong Jiang, University of Texas, Arlington

Call for Papers: Championship Value Prediction 2 Workshop (CVP2)
Submitted by Arthur Perais

The second edition of the Championship Value Prediction workshop (CVP2)
in conjunction with the IEEE International Symposium on High-Performance Computer Architecture (HPCA)
February 2021

Submissions Due: January 27, 2021

The organizing committee is happy to announce the second edition of the Championship Value Prediction workshop (CVP2) to be held online in February 2021 in conjunction with the IEEE International Symposium on High-Performance Computer Architecture (HPCA). CVP2 invites contestants to submit their value prediction code to participate in this competition. Contestants will be given a fixed storage budget to implement their best predictors on a common evaluation framework provided by the organizing committee.

Objective: The goal of this competition is to compare different value prediction algorithms in a common framework. Value prediction algorithms must be implemented within a fixed storage budget of 32KB as specified in the competition rules. The simple and transparent evaluation process enables dissemination of results and techniques to the larger computer architecture community and allows independent verification of results.

General Rules: The championship has three tracks, all with the same 32KB storage budget for the value predictor. However, each track is directed toward predicting a different type of instruction. The first track considers all instructions that produce a register value. The second and third tracks consider only load instructions, with the third track providing contestants with oracle hit/miss information for the different cache levels (L1D, L2, and L3). Competitors may choose not to compete in a particular track. In each track, an additional side buffer of unbounded size is allowed for tracking additional information used by the predictor (e.g., global history).

Authors are required to provide a 4-page write-up describing their contribution and how storage was allocated to the different hardware structures. Note that novelty is not a strict requirement; for example, contestants may submit their previously published design or make incremental enhancements to a previously proposed design, on the implicit condition that the write-up contains new insight into why performance has increased.

All source code, write-ups, and performance results will be made publicly available through the leaderboard.


Important Dates:

  • Submission Date: January 27th 2021 at 11:59PM CST
  • Acceptance Notification: February 6th 2021
  • Camera Ready Due: February 13th 2021
  • Results: At HPCA’21

Call for Papers: 3rd AccML Workshop at HiPEAC 2021
Submitted by José Cano

3rd Workshop on Accelerated Machine Learning (AccML)
Co-located with the HiPEAC 2021 Conference
January 18, 2021
Virtual Event

Submissions Due: November 30, 2020 (Extended)

Given the current COVID-19 situation, HiPEAC 2021 will be a virtual conference.


The remarkable performance achieved in a variety of application areas (natural language processing, computer vision, games, etc.) has led to the emergence of heterogeneous architectures to accelerate machine learning workloads. In parallel, production deployment, model complexity, and diversity have pushed for higher-productivity systems, more powerful programming abstractions, software and system architectures, dedicated runtime systems and numerical libraries, and deployment and analysis tools.

Deep learning models are generally memory- and computationally intensive, for both training and inference. Accelerating these operations has obvious advantages, first by reducing energy consumption (e.g., in data centers) and, second, by making these models usable on smaller devices at the edge of the Internet. In addition, while convolutional neural networks have motivated much of this effort, numerous applications and models involve a wider variety of operations, network architectures, and data processing. These applications and models permanently challenge computer architecture, the system stack, and programming abstractions.

The high level of interest in these areas calls for a dedicated forum to discuss emerging acceleration techniques and computation paradigms for machine learning algorithms, as well as applications of machine learning to the construction of such systems.

Links to the Workshop pages


Invited Speakers
– Jem Davies (ARM)
– David Lacey (Graphcore)
– Danilo Pau (STMicroelectronics)

Another invited speaker will be announced before the paper submission deadline.

Topics of interest include (but are not limited to):

– Novel ML systems: heterogeneous multi/many-core systems, GPUs, FPGAs;
– Software ML acceleration: languages, primitives, libraries, compilers and frameworks;
– Novel ML hardware accelerators and associated software;
– Emerging semiconductor technologies with applications to ML hardware acceleration;
– ML for the construction and tuning of systems;
– Cloud and edge ML computing: hardware and software to accelerate training and inference;
– Computing systems research addressing the privacy and security of ML-dominated systems.

Papers will be reviewed by the workshop’s technical program committee according to criteria regarding the submission’s quality, relevance to the workshop’s topics, and, foremost, its potential to spark discussions about directions, insights, and solutions in the context of accelerating machine learning. Research papers, case studies, and position papers are all welcome.

In particular, we encourage authors to submit work-in-progress papers: To facilitate sharing of thought-provoking ideas and high-potential though preliminary research, authors are welcome to make submissions describing early-stage, in-progress, and/or exploratory work in order to elicit feedback, discover collaboration opportunities, and spark productive discussions.

The workshop does not have formal proceedings.

Important Dates
Submission deadline: November 30, 2020
Notification of decision: December 4, 2020

Organizers:
José Cano (University of Glasgow)
Valentin Radu (University of Sheffield)
José L. Abellán (Universidad Católica de Murcia)
Marco Cornero (DeepMind)
Albert Cohen (Google)
Dominik Grewe (DeepMind)
Alex Ramirez (Google)

Call for Papers: NVMW 2021
Submitted by Jishen Zhao

12th Annual Non-Volatile Memories Workshop (Online Event)
University of California, San Diego
March 2021 (Dates TBD)

Submissions Due: December 4, 2020

The Annual Non-Volatile Memories Workshop (NVMW) provides a unique showcase for outstanding research on solid-state, non-volatile memories, including devices, error coding, architectures, systems, and applications. The 12th NVMW will be an online event. The dates of the event will be announced soon.

The organizing committee is soliciting presentations on any topic related to non-volatile, solid-state memories.

Presentations may include new results or work that has already been published during the 18 months prior to the submission deadline. In lieu of printed proceedings, we will post the slides and extended abstracts of the presentations online.  The presentation of new work at the workshop does not preclude future publication.

The 12th NVMW will also present four awards. We will award the inaugural Persistent Impact Prize to recognize high-impact research published more than five years ago; there will be two such prizes (one in systems and one in information theory). We will also award two Memorable Paper Awards recognizing the best work published in the last 18 months. Each award includes a $1,000 cash prize.

Submissions to the workshop are 2-page extended abstracts. They are due December 4th, 2020, with an automatic one-week extension until December 11th, 2020.

More information is available at the workshop website:

NVMW 2021

General Chairs:
Eitan Yaakobi (Technion)
Jishen Zhao (UCSD)

Program Chairs:
Lara Dolecek (UCLA)
Samira Khan (UVA)

Steering Committee:
Paul Siegel (UCSD)
Steven Swanson (UCSD)
Hung-Wei Tseng (UCR)
Eitan Yaakobi (Technion)
Jishen Zhao (UCSD)

IEEE Computer Architecture Letters Seeks Next Editor-in-Chief
Submitted by Dan Sorin

The IEEE Computer Architecture Letters seeks applicants for the position of editor-in-chief (EIC). The EIC term begins on 1 January 2022 and is for three years, renewable for two years.

The application deadline is 1 March 2021. 

More information can be found here.

New Book – Deep Learning Systems
Submitted by Brent Beckley

Morgan & Claypool is proud to announce a recently published book in our Computer Architecture series.

Deep Learning Systems
Algorithms, Compilers, and Processors for Large-Scale Production

Andres Rodriguez, Intel
ISBN: 9781681739663 | PDF ISBN: 9781681739670 | Hardcover ISBN: 9781681739687
Copyright © 2021 | 265 Pages

This book describes deep learning systems: the algorithms, compilers, and processor components to efficiently train and deploy deep learning models for commercial applications. The exponential growth in computational power is slowing at a time when the amount of compute consumed by state-of-the-art deep learning (DL) workloads is rapidly growing. Model size, serving latency, and power constraints are a significant challenge in the deployment of DL models for many applications. Therefore, it is imperative to codesign algorithms, compilers, and hardware to accelerate advances in this field with holistic system-level and algorithm solutions that improve performance, power, and efficiency.

Advancing DL systems generally involves three types of engineers: (1) data scientists who utilize and develop DL algorithms in partnership with domain experts, such as medical, economic, or climate scientists; (2) hardware designers who develop specialized hardware to accelerate the components in DL models; and (3) performance and compiler engineers who optimize software to run more efficiently on a given hardware target. Hardware engineers should be aware of the characteristics and components of production and academic models likely to be adopted by industry, to guide design decisions impacting future hardware. Data scientists should be aware of deployment platform constraints when designing models. Performance engineers should support optimizations across diverse models, libraries, and hardware targets.

The purpose of this book is to provide a solid understanding of (1) the design, training, and applications of DL algorithms in industry; (2) the compiler techniques to map deep learning code to hardware targets; and (3) the critical hardware features that accelerate DL systems. This book aims to facilitate co-innovation for the advancement of DL systems. It is written for engineers working in one or more of these areas who seek to understand the entire system stack in order to better collaborate with engineers working in other parts of the system stack.

The book details advancements and adoption of DL models in industry, explains the training and deployment process, describes the essential hardware architectural features needed for today’s and future models, and details advances in DL compilers to efficiently execute algorithms across various hardware targets.

New Book – Parallel Processing, 1980 to 2020
Submitted by Brent Beckley

Morgan & Claypool is proud to announce a recently published book in our Computer Architecture series.

Parallel Processing, 1980 to 2020

Robert Kuhn, Retired (formerly Intel Corporation),
David Padua, University of Illinois at Urbana-Champaign
ISBN: 9781681739755 | PDF ISBN: 9781681739762 | Hardcover ISBN: 9781681739779
Copyright © 2021 | 190 Pages

This historical survey of parallel processing from 1980 to 2020 is a follow-up to the authors’ 1981 Tutorial on Parallel Processing, which covered the state of the art in hardware, programming languages, and applications. Here, we cover the evolution of the field since 1980 in: parallel computers, ranging from the Cyber 205 to clusters now approaching an exaflop, to multicore microprocessors, and Graphics Processing Units (GPUs) in commodity personal devices; parallel programming notations such as OpenMP, MPI message passing, and CUDA streaming notation; and seven parallel applications, such as finite element analysis and computer vision. Some things that looked like they would be major trends in 1981, such as big Single Instruction Multiple Data arrays, disappeared for some time but have been revived recently in deep neural network processors. There are now major trends that did not exist in 1980, such as GPUs, distributed memory machines, and parallel processing in nearly every commodity device. This book is intended for those who already have some knowledge of parallel processing today and want to learn about the history of the three areas.

Introducing Computer Architecture Podcasts
Submitted by Lisa Hsu

Introducing the Computer Architecture Podcasts: a series of conversations on cutting-edge work in computer architecture and the remarkable people behind it.

Listen to the first episode with Dr. Kim Hazelwood from Facebook on Systems for ML and having an agile career at:

Also available on your favorite podcast player — iTunes, Spotify, Stitcher, etc.

Undergrad Architecture Mentoring (uArch) Workshop, Virtual Event
Submitted by Divya Mahajan

The Undergrad Architecture Mentoring (uArch) Workshop will be held in conjunction with MICRO-53.

The goal of the workshop is to provide an avenue for undergraduate and early Master’s students to explore Computer Architecture. We are bringing together undergraduate students from more than 20 countries, especially from the EMEA region, ranging from Ghana to Bosnia to Scotland, to attend the uArch workshop and MICRO.

We are excited to be able to foster and direct the interest of young students, thus enabling them to learn more about Computer Architecture. We believe that to increase diversity and inclusion on all fronts, we need to start addressing the issue at a younger age, and this workshop is one step in that direction.


uArch Team

Srilatha (Bobbie) Manne
Lena Olson
Newsha Ardalani
Joshua San Miguel
Divya Mahajan

Please view the SIGARCH website for the latest postings, to submit new posts, and for general SIGARCH information. We also encourage you to visit the Computer Architecture Today Blog.

- Samira Khan
SIGARCH Content Editor