Archive of Announcements
Announcements of book and tool releases, calls for award nominations, and other SIGARCH-focused news, ordered by date posted on this website.
Established in memory of Dr. B. (Bob) Ramakrishna Rau, the Rau award recognizes his distinguished career in promoting and expanding the use of innovative computer microarchitecture techniques, including his innovation in compiler technology, his leadership in academic and industrial computer architecture, and his extremely high personal and ethical standards.
Call for Nominations: 2018 IEEE TCCA Young Computer Architect Award
2018 ACM/IEEE-CS Eckert-Mauchly Award Submission deadline: March 30, 2018 ACM and the IEEE Computer Society co-sponsor the Eckert-Mauchly Award, which was initiated in 1979. The award is known as the computer architecture community’s most prestigious award… Details…
IEEE Micro Seeks Editor-in-Chief for 2019-2021 Term Application deadline: March 30, 2018 IEEE Micro, a bimonthly publication of the IEEE Computer Society, reaches an international audience of microcomputer and microprocessor designers, system integrators… Details…
Most emerging applications in imaging and machine learning must perform immense amounts of computation while holding to strict limits on energy and power. To meet these goals, architects are building increasingly specialized compute engines tailored for these specific tasks. The resulting computer systems are heterogeneous, containing multiple processing cores with wildly different execution models. Unfortunately, the cost of producing this specialized hardware—and the software to control it—is astronomical. Moreover, the task of porting algorithms to these heterogeneous machines typically requires that the algorithm be partitioned across the machine and rewritten for each specific architecture, which is time consuming and prone to error. Over the last several years, the authors have approached this problem using domain-specific languages (DSLs): high-level programming languages customized for specific domains, such as database manipulation, machine learning, or image processing. By giving up generality, these languages are able to provide high-level abstractions to the developer while producing high-performance output. The purpose of this book is to spur the adoption and the creation of domain-specific languages, especially for the task of creating hardware designs.
ACM SIGARCH Maurice Wilkes Award Nominations Deadline: March 15, 2018 The award of $2,500 is given annually for an outstanding contribution to computer architecture made by an individual whose computer-related professional career (graduate school or full-time… Details…
2018 ACM SIGARCH/IEEE CS TCCA Outstanding Dissertation Award Nominations Deadline: February 15, 2018 The SIGARCH/TCCA Outstanding Dissertation award recognizes excellent thesis research by doctoral candidates in the field of computer architecture. Dissertations… Details…
The IEEE TCCA Young Computer Architect Award recognizes outstanding research contributions by an individual in the field of computer architecture who received their PhD degree within the last six years.
Grad Cohort event San Francisco, USA April 13-14, 2018 Application deadline: January 20, 2018 Since 2004, CRA-W (Computing Research Association Committee on Women) has been running the very successful Grad Cohort program. Focused on graduate students… Details…
The 6th edition of Hennessy and Patterson’s Computer Architecture: A Quantitative Approach is now available
The 1st edition of the RISC-V Reader by David Patterson and Andrew Waterman is available.
A large trace of the Microsoft Azure workload is now available on GitHub.
The Persistent Impact Prize (presented this year by Toshiba) will recognize outstanding research on non-volatile memories published at least five years ago.
Morgan & Claypool Publishers is proud to announce the publication of the first technical book on autonomous vehicles for a general computer science audience. Creating Autonomous Vehicle Systems is written by four leading research and development experts in the field and provides both underlying theory and practical applications for this fast-growing technology area. http://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1090 This book will be useful to hardware and software engineers, students, and autonomous vehicle researchers and practitioners. Students interested in autonomous driving will find this a comprehensive overview of the entire autonomous vehicle technology stack. Researchers will find plenty of references for an effective, deeper exploration of the various technologies, and practitioners will find many practical techniques used successfully by the authors. Autonomous driving is not one single technology; it is an integration of many technologies. It demands innovations in algorithms, system integrations, and cloud platforms. Creating Autonomous Vehicle Systems covers each of these subsystems in detail: algorithms for localization, perception, and planning and control; client systems, such as the robotics operating system and hardware platform; and the cloud platform, which includes data storage, simulation, high-definition (HD) mapping, and deep learning model training. Review copies (eBook) are available for academic and professional courses as well as for media. Please contact Brent Beckley (email@example.com)
WICARCH (Women in Computer Architecture) is looking for more ways to engage its members. Please follow us on Facebook (https://www.facebook.com/wicarch/) and Twitter (https://twitter.com/WICARCH).
The IEEE Technical Committee on Computer Architecture has a new website and rebooted mailing list! All SIGARCH members are invited to subscribe to the mailing list: http://ieeetcca.org/subscription/
This book provides computer engineers, academic researchers, new graduate students, and seasoned practitioners an end-to-end overview of virtual memory. We begin with a recap of foundational concepts and discuss not only state-of-the-art virtual memory hardware and software support available today, but also emerging research trends in this space. The span of topics covers processor microarchitecture, memory systems, operating system design, and memory allocation. We show how efficient virtual memory implementations hinge on careful hardware and software cooperation, and we discuss new research directions aimed at addressing emerging problems in this space. Virtual memory is a classic computer science abstraction and one of the pillars of the computing revolution. It has long enabled hardware flexibility, software portability, and overall better security, to name just a few of its powerful benefits. Nearly all user-level programs today take for granted that they have been freed from the burden of physical memory management by the hardware, the operating system, device drivers, and system libraries. However, despite its ubiquity in systems ranging from warehouse-scale datacenters to embedded Internet of Things (IoT) devices, the overheads of virtual memory are becoming a critical performance bottleneck today. Virtual memory architectures designed for individual CPUs or even individual cores are in many cases struggling to scale up and scale out to today’s systems, which now increasingly include exotic hardware accelerators (such as GPUs, FPGAs, or DSPs) and emerging memory technologies (such as non-volatile memory), and which run increasingly intensive workloads (such as virtualized and/or “big data” applications). As such, many of the fundamental abstractions and implementation approaches for virtual memory are being augmented, extended, or entirely rebuilt to ensure that virtual memory remains viable and performant in the years to come.
Machine learning, and specifically deep learning, has been hugely disruptive in many fields of computer science. The success of deep learning techniques in solving notoriously difficult classification and regression problems has resulted in their rapid adoption in solving real-world problems. The emergence of deep learning is widely attributed to a virtuous cycle whereby fundamental advancements in training deeper models were enabled by the availability of massive datasets and high-performance computer hardware. This text serves as a primer for computer architects in a new and rapidly evolving field. We review how machine learning has evolved since its inception in the 1960s and track the key developments leading up to the powerful deep learning techniques that emerged in the last decade. Next we review representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, we also detail the most popular deep learning tools and show how aspiring practitioners can use the tools with the workloads to characterize and optimize DNNs. The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. As high-performance hardware was so instrumental in making machine learning a practical solution, this part of the book recounts a variety of recently proposed optimizations to further improve future designs. Finally, we present a review of recent research published in the area as well as a taxonomy to help readers understand how the various contributions fit into context.
The HPCA Test of Time (ToT) Award Committee is soliciting nominations (due Oct 15, 2017) for the first HPCA ToT Award to be given at the International Symposium on High Performance Computer Architecture in February 2018, to be held in Vienna, Austria. This award recognizes the most influential papers published in past HPCA conferences that have had significant impact in the field.
Please join the new WICARCH listserv. Steps to enroll: join SIGARCH/SIGMICRO, specify your gender as female in your myACM profile, and opt in to receive emails from ACM.
ACM History Committee: Fellowships in ACM History Proposals due: March 1, 2018 The Association for Computing Machinery, founded in 1947, is the oldest and largest educational and scientific society dedicated to the computing profession, and today has m… Details…
New concise book on RISC-V architecture available
Book Release: Blocks and Chains: Introduction to Bitcoin, Cryptocurrencies, and Their Consensus Mechanisms
Blocks and Chains: Introduction to Bitcoin, Cryptocurrencies, and Their Consensus Mechanisms: The new field of cryptographic currencies and consensus ledgers, commonly referred to as blockchains, is receiving increasing interest from a wide range of communities. These communities are very diverse and include, among others, technical enthusiasts, activist groups, researchers from various disciplines, startups, large enterprises, public authorities, banks, financial regulators, businesspeople, investors, and also criminals. The scientific community adapted relatively slowly to this emerging and fast-moving field of cryptographic currencies and consensus ledgers. This is one reason that, for quite a while, the only resources available were the Bitcoin source code, blog and forum posts, mailing lists, and other online publications. Even the original Bitcoin paper, which initiated the hype, was published online without any prior peer review. Following the publication spirit of the original Bitcoin paper, much of the innovation in this field has repeatedly come from the community itself in the form of online publications and online conversations rather than established peer-reviewed scientific publishing. On the one hand, this spirit of fast free software development, combined with the business aspects of cryptographic currencies and the interests of today’s time-to-market focused industry, has produced a flood of publications, whitepapers, and prototypes. On the other hand, it has led to deficits in systematization and a gap between practice and the theoretical understanding of this new field. This book aims to further close this gap and presents a well-structured overview of this broad field from a technical viewpoint. The archetype for modern cryptographic currencies and consensus ledgers is Bitcoin and its underlying Nakamoto consensus. Therefore, we describe the inner workings of this protocol in great detail and discuss its relations to other derived systems.
On-Chip Networks, Second Edition (Jerger/Krishna/Peh): This book targets engineers and researchers familiar with basic computer architecture concepts who are interested in learning about on-chip networks. This work is designed to be a short synthesis of the most critical concepts in on-chip network design. It is a resource for both understanding on-chip network basics and for providing an overview of state-of-the-art research in on-chip networks. We believe that an overview that teaches both fundamental concepts and highlights state-of-the-art designs will be of great value to both graduate students and industry engineers. While not an exhaustive text, we hope to illuminate fundamental concepts for the reader as well as identify trends and gaps in on-chip network research.
The MICRO Test of Time (ToT) Award Committee is soliciting nominations for the third MICRO ToT Award to be given at the International Symposium on Microarchitecture in October 2017, to be held in Boston, MA, USA. This award recognizes the most influential papers published in past MICRO conferences that have had significant impact in the field.
Understanding and implementing the brain’s computational paradigm is the one true grand challenge facing computer researchers. Not only are the brain’s computational capabilities far beyond those of conventional computers, its energy efficiency is truly remarkable. This book, written from the perspective of a computer designer and targeted at computer researchers, is intended to give both background and lay out a course of action for studying the brain’s computational paradigm. It contains a mix of concepts and ideas drawn from computational neuroscience, combined with those of the author. As background, relevant biological features are described in terms of their computational and communication properties. The brain’s neocortex is constructed of massively interconnected neurons that compute and communicate via voltage spikes, and a strong argument can be made that precise spike timing is an essential element of the paradigm. Drawing from the biological features, a mathematics-based computational paradigm is constructed. The key feature is spiking neurons that perform communication and processing in space-time, with an emphasis on time. In this paradigm, time is used as a freely available resource for both communication and computation. Neuron models are first discussed in general, and one is chosen for detailed development. Using the model, single-neuron computation is first explored. Neuron inputs are encoded as spike patterns, and the neuron is trained to identify input pattern similarities. Individual neurons are building blocks for constructing larger ensembles, referred to as “columns”. These columns are trained in an unsupervised manner and operate collectively to perform the basic cognitive function of pattern clustering. Similar input patterns are mapped to a much smaller set of similar output patterns, thereby dividing the input patterns into identifiable clusters. Larger cognitive systems are formed by combining columns into a hierarchical architecture.
These higher level architectures are the subject of ongoing study, and progress to date is described in detail in later chapters. Simulation plays a major role in model development, and the simulation infrastructure developed by the author is described.
DEADLINE: Travel grant application form must be received by 11:59pm EDT, May 8, 2017 With generous support from the U.S. National Science Foundation, ACM SIGARCH, and IEEE TC on Computer Architecture, ISCA-44 will offer travel grants for students to de… Details…
To put future research in persistent memory (PM) software and hardware on a firmer footing, researchers from Wisconsin and HP Labs have developed a PM benchmark suite called WHISPER, whose source code is now openly available. They took benchmarks that use PM directly (echo, n-store), via libraries (redis, c-tree, hashmap, vacation, memcached), and as a file system (nfs, exim, mysql), and modified them so that accesses use volatile memory except where necessary to provide consistent state for crash recovery.
The IEEE TCCA Young Computer Architect Award recognizes outstanding research contributions by an individual in the field of computer architecture who received their PhD degree within the last six years. The Award will be presented at the Awards Banquet at the 2017 International Symposium on Computer Architecture (ISCA). The IEEE Computer Society administers the award.
Call for Nominations: ACM SIGARCH Student Scholarships for ACM’s Celebration of 50 Years of the ACM Turing Award
ACM SIGARCH has a limited number of scholarships to award to worthy students to attend the Celebration of 50 Years of the ACM Turing Award, 23-24 June 2017, at the Westin St. Francis in San Francisco, CA. Each scholarship includes 2 nights at the Westin St. Francis Hotel and up to $900 to help offset the cost of travel and subsistence. (Students will receive a check from ACM upon submission of receipts for travel.)
SIGARCH expresses concerns about the presidential executive order restricting entry into the USA.