March 20, 2022
RAGE 2022 @ DAC 2022 Call for Papers
The 1st Real-time And intelliGent Edge computing workshop
Held in conjunction with the 59th Design Automation Conference (DAC 2022)
Submission deadline: March 20th, 2022, anywhere on Earth
Physical event, San Francisco, CA, USA, July 10th, 2022
Organizers
Daniel Casini, Assistant Professor at Scuola Superiore Sant’Anna, Pisa, Italy
Dakshina Dasari, Researcher at Robert Bosch GmbH, Germany
Matthias Becker, Assistant Professor at KTH Royal Institute of Technology, Sweden
Important Dates
Submission deadline: March 20th, 2022, anywhere on Earth
Notification to authors: April 11th, 2022, end of the day
Workshop date: July 10th, 2022
Submission link: https://easychair.org/conferences/?conf=rage2022
Call for Papers
The edge computing paradigm is becoming increasingly popular as it facilitates real-time computation, reduces energy consumption and carbon footprint, and fosters security and privacy preservation by processing the data closer to its origin, thereby drastically reducing the amount of data sent to the cloud. On the application side, there is a growing interest in using edge computing as a key pillar to support decentralized artificial intelligence by implementing federated learning and adaptive deep learning inference at the edge. However, many edge applications tightly interact with the surrounding environment and are required to deliver a result (e.g., perform actuation or send a message through a 5G network) within a predefined deadline. Therefore, a key requirement in edge computing is the need to be predictable across the edge-to-cloud continuum while also efficiently utilizing the system resources.
However, meeting the above requirements is non-trivial. Modern edge devices can be very diverse, ranging from hand-held devices to large on-premise servers, and can include complex embedded platforms with multiple heterogeneous cores and hardware accelerators such as GPUs, TPUs, and FPGAs. This complexity introduces considerable challenges when trying to guarantee the timing requirements of real-time applications: for example, due to scheduling policies implemented by the hardware accelerators (often not publicly disclosed by vendors), or due to the memory contention experienced by the cores when accessing the main memory concurrently. In addition, the network transmission time (from TSN over Ethernet to 5G links) can lead to variability in the end-to-end latencies incurred by edge applications.
Furthermore, the operating system (OS) also plays a crucial role in enabling the edge computing paradigm, but quite often at the price of making timing guarantees harder to derive: consider, for example, a complex deep neural network that must run on a Linux-based OS (far more complex than a real-time operating system) because Linux provides the software stacks (e.g., TensorRT) and device drivers needed to interact with NVIDIA GPUs.
The complexity of the problem is further increased by the usage of middleware frameworks, which simplify the development of applications, but at the cost of introducing additional scheduling policies that add to those implemented by the underlying operating system, hindering predictability. Some relevant examples are ROS, in the context of robotics, TensorFlow for artificial intelligence, TensorRT for efficient deep neural network inference on GPUs, and others. Virtualization technologies are also becoming crucial in implementing the edge paradigm, but again, at the expense of creating a more complex operating environment, where guaranteeing temporal properties is a challenging endeavor. These problems are common to many application domains, including cyber-physical systems, future generation autonomous driving applications, robotics, Industry 4.0, smart buildings, and more.
In this workshop, we solicit the submission of short papers. Workshop topics include, but are not limited to:
- Real-time edge computing
- QoS mechanisms for temporal isolation in lightweight virtualization mechanisms (Docker, WebAssembly)
- Mechanisms for end-to-end latency guarantees in the edge-to-cloud continuum
- Methods for functional decomposition between the edge and cloud
- Predictability in middleware frameworks (ROS, TensorFlow, TensorRT, and more)
- Real-time edge computing use cases
- Real-time network protocols for edge computing
- Real-time distributed artificial intelligence
- Resource scheduling and allocation in embedded real-time systems
- Predictable and efficient parallel applications
- Energy- and power-aware allocation in the edge-to-cloud continuum
- Timing predictability for artificial intelligence
The workshop will include the following three types of contributions:
- Submissions of workshop papers, which will be peer-reviewed by the workshop’s technical program committee. Accepted papers will be presented with an envisioned time slot of 10 minutes for the talk and 5 minutes for the Q&A.
- Invited talks from renowned expert speakers from both academia and industry, with an envisioned time slot for each presentation of 20-30 minutes including the Q&A.
- An open panel discussion that will include experts both from academia and industry to further support the discussion and community building in the workshop.
The workshop follows a double-blind peer-review process. The body of each submitted paper will be limited to 4 pages of content, including the bibliography and acknowledgments. Shorter submissions are also welcome. Submissions must comply with the IEEE conference paper guidelines, i.e., 10pt font, with default line spacing and margins. Templates are available on the IEEE website.
Journal Special Issue
Authors of selected accepted papers are planned to be invited to submit an extended version of their work to a journal special issue.
The organizers will select a shortlist of TPC members for a sub-committee that will evaluate the Best Paper Award and the Best Presentation Award of the workshop (for presentations from the open call for papers).
The workshop will feature renowned invited speakers from both industry and academia, and it will be an excellent opportunity to visit the 59th Design Automation Conference in San Francisco.
Invited Speakers
- Prof. Mohammad Al Faruque, University of California Irvine (UCI), USA
Title: Low power Machine Learning Techniques for Edge-AI
- Dr. Daniel Bristot De Oliveira, Red Hat, Italy
Title: Real-time Linux analysis tools: finding the sources of OS latencies
- Prof. Tam Chantem, Virginia Tech, USA
Title: Deadline-Aware Task Offloading for Vehicular Edge Computing Networks
- Dr. Arne Hamann, Bosch Corporate Research, Germany
Title: Industrial use-cases for real-time edge-computing
- Prof. Anthony Rowe, Carnegie Mellon University, USA
Title: Lightweight virtualization for giving the cloud an edge
- Dr. Arpan Gujarati, University of British Columbia, Canada
Title: Serving DNNs like Clockwork: Performance Predictability from the Bottom Up
- Giorgiomaria Cicero, Accelerat S.R.L., Italy
Title: The role of virtualization at the edge for mixed-criticality applications
Technical Program Committee
- Takuya Azumi, Saitama University, Japan
- Alessio Balsini, Google, UK
- Soroush Bateni, University of Texas at Dallas, USA
- Tobias Blass, Apex.AI
- Giorgio Buttazzo, Scuola Superiore Sant’Anna, Italy
- Albert Cheng, University of Houston, USA
- Hyunjong Choi, University of California at Riverside, USA
- Xiaotian Dai, University of York, UK
- Zheng Dong, Wayne State University, USA
- Pierfrancesco Foglia, University of Pisa, Italy
- Miguel Gutiérrez Gaitán, CISTER, Portugal
- Naresh Nayak, Robert Bosch GmbH, Germany
- Alessandro Vittorio Papadopoulos, Mälardalen University, Sweden
- Gabriel Parmer, George Washington University, USA
- Paolo Pazzaglia, Universität des Saarlandes, Germany
- Carlo Puliafito, University of Pisa, Italy
- Francesco Restuccia, UC San Diego, CA, USA
- Juan M. Rivas, Universidad de Cantabria, Spain
- Claudio Scordino, Evidence Srl, Italy
- Biruk B. Seyoum, Columbia University, USA