[Editor’s Note: This article originally appeared on the CCC blog (part 1 and part 2) and is re-posted here with permission.]
A new episode of the Computing Community Consortium's (CCC) official podcast, Catalyzing Computing, is now available. In this episode, Khari Douglas (CCC Senior Program Associate) interviews Dr. Mark D. Hill, the Gene M. Amdahl and John P. Morgridge Professor Emeritus of Computer Sciences at the University of Wisconsin-Madison and the Chair Emeritus of the CCC Council. This episode was recorded prior to Dr. Hill joining Microsoft as a Partner Hardware Architect with Azure. His research interests include parallel computer system design, memory system design, computer simulation, deterministic replay, and transactional memory. In this episode, Hill discusses the importance of computer architecture, the 3C model of cache behavior, and overcoming the end of Moore's law.
Below is a transcription from part of the discussion about Mark’s work with Vijay Janapa Reddi on the Gables model for modeling accelerator use on mobile systems-on-a-chip. This is followed by a highlight from the discussion of the impact of AI on the future of computing architecture. These are lightly edited for readability and the full transcript for part 1 can be found here and the transcript for part 2 can be found here.
[Catalyzing Computing Episode 35 – starting at 29:30]
Mark: I think we need a better science for designing chips with multiple accelerators and accelerator-level parallelism. And we tried to help a little bit with a model called Gables.
Khari: Could you discuss the Gables model? What is that?
Mark: Ok, so when you have a chip with 42 accelerators, how do you decide which ones to have, which ones to select, how to size them, and things like that? It's just very complicated. It would be nice to have a simplified picture to get a first answer, not a final answer. Gables was our attempt to do that, and it builds on something called Roofline. The Roofline model was for a homogeneous multicore chip, which is a chip in which every processor core is the same, and it modeled the chip with a peak computation performance and a peak off-chip communication bandwidth in a plot that looks like a roofline. And so what Gables did is it said, we can't model our 42 accelerators that way, but we can perhaps do a roofline for each accelerator, because each accelerator is different, they're heterogeneous, and then find a way to combine them to model the whole chip. That's what Gables does, and Gables gets its name from a roof that has many rooflines.
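[Editor's aside: the Roofline bound Mark describes can be sketched in a few lines of code. This is an illustrative sketch only, not code from the Gables or Roofline papers, and the chip numbers below are made-up example values.]

```python
def roofline(peak_gflops, peak_gbps, operational_intensity):
    """Attainable performance (GFLOP/s) for a kernel on one core or
    accelerator: the lesser of the chip's peak compute rate and what its
    off-chip bandwidth can feed, given the kernel's operational intensity
    (FLOPs performed per byte of off-chip traffic)."""
    return min(peak_gflops, peak_gbps * operational_intensity)

# Hypothetical chip: 500 GFLOP/s peak compute, 50 GB/s off-chip bandwidth.
for oi in (1, 10, 100):
    print(f"OI={oi:3d} FLOP/byte -> {roofline(500.0, 50.0, oi):.0f} GFLOP/s")
```

At low operational intensity the kernel is bandwidth-bound (the slanted part of the "roof"); past the ridge point it is compute-bound (the flat part). Gables, roughly speaking, applies a bound like this per accelerator and combines the per-accelerator rooflines into a bound for the whole system-on-a-chip.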
Khari: Ok. So have you used this model on any products or technologies that have shown promise?
Mark: I have not used it on products. However, I’m aware of at least two instances, one where it was incorporated in the tool chain of a major tool provider — I’m not sure I’m allowed to say the name — and I know it was used for the preliminary development of some products at a major IT company. And they’re still using it, but I think that’s confidential.
I will say this: I think it's better than the community thinks it is at this point, because it's not like, you know, super popular.
Khari: Yeah, maybe now people will think more about it. Anything else you want to say about the Gables model?
Mark: I'll just say this: I think it's important when you encounter a complicated system to try to figure out a way to get your head around it, and Gables was our attempt to do that. I think that model has value even if you don't believe the numbers, because it gives you a way to frame your thinking about the system and pay attention to these communication and performance things and how they interact. That's invaluable even if you don't believe the output numbers.
[Catalyzing Computing Episode 36 – starting at 8:50]
Khari: So what would you say is your highlight of the time you spent with the CCC?
Mark: Well, the highlight probably has to be the 2018 AAAI/CCC 20-year roadmap for artificial intelligence, even though I was not…I mean, I was helping to catalyze this, I was chair of the organization and I played bad cop to help get things out, but other people did more of the work from the CCC side, Liz Bradley and Ann Drobnis and others. But this was a really big deal, and it has already catalyzed and is referenced in some pretty significant NSF programs. CRA's government affairs team shopped it around the government, and I expect the biggest impact is yet to come.
The key trick with AI was…well AI is pretty hot in the industry, so what do we need this roadmap for? It turns out there are things from academia that can complement industry and create a sum greater than its parts. These often include things that are a longer-term focus, and they can be issues that are maybe not industry’s number one concern. Like, social justice may not be industry’s number one concern. Maybe fairness is, maybe fairness isn’t? We could address things like that, and I think it’s a very nice, albeit longer than I would like, document.
Khari: Yeah, I think it’s over 100 pages. But people that are interested should check that out and there will be links on the podcast webpage if you want to read more [read the full report here].
Mark: There is an executive summary that’s way shorter.
Khari: That’s true. So how do you think the proliferation of AI has impacted the hardware space?
Mark: So, artificial intelligence has the potential to change a lot in society, hopefully mostly good. The current way it's done is…the greatest successes have been in a part of machine learning — which is a part of AI — called deep neural networks. These currently analyze a tremendous amount of data with a tremendous amount of computation, and if we could do that even more effectively then machine learning could be used in even more situations. A big step to greater effectiveness was moving from regular processing cores to general-purpose GPUs (Graphics Processing Units), which did the data-level parallelism that we discussed before.
Now there are efforts afoot to do very specialized accelerators, as we’ve discussed before, for machine learning, such as Google’s Tensor Processing Unit (TPU). I think we’re going to see much more of that for deep neural networks. As AI starts expanding to other things, not just deep neural networks, I think it’s important enough that hardware will be developed for that.
Interestingly, there is a feedback path — we also have to design the hardware. So there are some small new efforts on trying to take machine learning and apply it back to the design and optimization of hardware, to maybe exceed what human designers can do: instead of doing the design themselves, the humans configure the AI to do the design.
I think we could get a really nice synergy. I mean, you hear all this talk about AI, you might think it’s hype, but it’s pretty real.
Listen to the full interview with Dr. Hill below or find it on Apple Podcasts | Spotify | SoundCloud | Stitcher | Blubrry | Google Podcasts | iHeartRadio | YouTube. If you prefer to read rather than listen, the transcript of the interview is available here: part 1 and part 2.
If you are interested in appearing in an episode of the Catalyzing Computing podcast or want to contribute a guest post to the CCC blog, please complete this survey through Google Forms.
If you listen to the podcast, please take a moment to complete this listener survey – this survey will help us learn more about you and better tailor the show to the interests of our listeners.
About the Author: Khari Douglas is a Senior Program Associate for the Computing Research Association’s (CRA) Computing Community Consortium (CCC) and the host of the CCC’s Catalyzing Computing podcast. You can find more of his work on the CCC blog.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.