Computer Architecture Today

Informing the broad computing community about current activities, advances and future directions in computer architecture.

All hardware companies face a conundrum. Should they build a riskier product with a higher probability of failure, or should they continue the evolutionary trend of their current products? The safe thing to do, and the one many customers may ask for, is the latter. However, as Clayton Christensen notes in The Innovator’s Dilemma, “most companies with a practiced discipline of listening to their best customers and identifying new products that promise greater profitability and growth are rarely able to build a case for investing in disruptive technologies until it is too late.” Many have argued that Intel is facing such a dilemma with the growth of ARM-based products, which initially took root in the mobile space and are now encroaching on the server space with ARM-based offerings from Cavium, Qualcomm and others.

So how does a computer architect predict what is needed? How do we innovate in a rapidly changing technology landscape? Bill Gates once noted that “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” How many of us would have predicted the existence or success of smartphones in the 1990s, or envisioned the dominance of cloud computing services such as AWS or Azure in the early 2000s? For that matter, how many of us can predict which paper from ISCA 2017 will win the Test of Time Award in 2032? Maybe the answer is that we can’t.

The Case for Agility

About a decade ago, I heard Dirk Meyer, then CEO of AMD, speak at an internal AMD conference. He was asked what characteristic or trait he would like to see at AMD. I assumed his answer would be something like “technical prowess” or “CPU/GPU integration” given that AMD had acquired ATI a couple of years before. Instead, his answer was agility. He believed that given the fast-paced changes in the marketplace, the only way for AMD or any other hardware company to survive was to be agile enough to move and change quickly with the market.

Fortunately, industry in general has become more agile in the last two decades. First, CAD tools have made enormous strides, to the point where a significant portion of the core, if not the whole core, is fully synthesized in many processors. Second, modular design methodologies, in which core clusters and/or accelerators can be added or removed to meet the needs of the market, have enabled the same non-recurring engineering (NRE) cost and design time to be leveraged across multiple products. Finally, post-silicon verification and management infrastructures, such as emulation methodologies and on-die controllers, have enabled processors to reach production faster than ever. The combination of these techniques has resulted in faster design times and more nimble product development.

So What Next?

Processor architectures based on x86 continue to improve performance, and the availability of ARM architectural licenses or fully designed ARM IP means other companies can design complex products using the ARM ISA. But these are incremental changes that follow the current trend. In the meantime, companies such as Microsoft and Google are creating solutions based on FPGA acceleration or custom ASICs, such as the TPU, to accelerate specific workloads. Does this mean that the baseline processor becomes a commodity and each company plugs its own accelerator into the system? Or will we see more products such as Broadwell-D, where Facebook and Intel worked closely to develop a product that meets Facebook’s specific requirements? And what does this mean for architecture research in industry or academia?

Industry, by nature, is more focused on the here and now, looking one or two generations out. Therefore, I would argue that the research community should be pushing for innovation that will make architecture more nimble and agile in order to deal with the revolutionary changes of the next two decades. There are many avenues of research that enable more nimble processors, such as the development of a post-silicon debug infrastructure that can quickly identify and potentially rectify bugs without requiring any changes to silicon. Even with the innovations of the past two decades, it still takes significant time, effort and money to debug and fix new silicon, due to the increased complexity of SoC designs with heterogeneous cores, many-core designs, integrated components (memory controllers, NICs and other accelerators), and complex power management protocols. Plug-and-play architectures built from “chiplets” or stacked dies are another interesting area of research that could enable rapid customization. At the software level, tightly woven hardware/software co-design for complex applications in nested virtual environments, flexible co-processor ISAs and infrastructures, and libraries that insulate the user from software complexity are all areas that can leverage existing and future hardware to produce disruptive solutions with significant upside.

A Fringe Focus

These research topics require major effort and incur significant risk because they skirt the traditional areas of architecture research. Just as research is needed to address processor agility, conferences and program committee members (me included) need to be more agile and reward higher-risk research in order to promote greater innovation. In some sense, the “Wild and Crazy Ideas” tradition at ASPLOS needs to be embedded in the way we approach research and conferences in general, rather than remaining a sideline event. As Clayton Christensen pointed out in The Innovator’s Dilemma, “Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value.” Maybe architecture in general, and the research community in particular, need to start focusing more on the fringe in order to innovate for the future.

About the author: Dr. Srilatha (Bobbie) Manne has worked in the computer industry for over two decades in both industrial labs and product teams. She is currently a Principal Hardware Architect at Cavium, working on ARM server processors.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.