This is the 1st December 2023 digest of SIGARCH Messages.

In This Issue

  • Call for Papers: FCCM 2024
  • Call for Papers: ISPASS 2024
  • Call for Papers: ARC 2024
  • Call for Papers: 6th AccML Workshop at HiPEAC 2024
  • Call for Papers: GPGPU-16
  • Call for Workshops/Tutorials: ISC High Performance 2024 Call for Tutorials
  • Call for Workshops/Tutorials: Call for Workshops and Tutorials for FPGA’24

Call for Papers: FCCM 2024
https://www.fccm.org/call-for-papers-2024/
Submitted by Peipei Zhou

FCCM 2024 Call for Papers
https://www.fccm.org/call-for-papers-2024/
The 32nd IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM)

Feel free to follow the FCCM LinkedIn Page and join the FCCM LinkedIn Group!

** New for FCCM 2024 – Journal Track submission.  See details below. **

The IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM) is the original and premier forum for presenting and discussing new research related to computing that exploits the unique features and capabilities of FPGAs and other reconfigurable hardware.

FCCM 2024 is planned to be an in-person event. Please refer to the FCCM website (https://www.fccm.org/) for updates and details. At least one author will be required to register and attend the conference. Failure to present at the conference may result in the removal of the submission from IEEE Xplore.

Submissions are solicited on the following topics related to Field-Programmable Custom Computing Machines (FCCMs) including, but not limited to:

Architectures

  • Novel reconfigurable architectures, including overlay architectures
  • Architectures for high performance and/or low power computing
  • Security assessment and enhancements for reconfigurable computing
  • Specialized memory systems including volatile, non-volatile, and hybrid memory subsystems
  • Emerging technologies with in-field reconfiguration abilities
  • Clusters, data centers, or large systems of reconfigurable devices
  • Heterogeneous programmable architectures

Abstractions, Programming Models, and Tools

  • Abstractions, programming models, interfaces, and runtimes, including virtualization
  • New languages and design frameworks for spatial or heterogeneous applications
  • High-level synthesis and designer productivity in general
  • Software-defined systems (e.g., radio, networks, frameworks for new domains)
  • Customizable soft-processor systems

Reconfiguration

  • Run-time management of reconfigurable hardware
  • System resilience/fault tolerance for reconfigurable hardware
  • Evolvable, adaptable, or autonomous reconfigurable computing systems
  • Security assessment and enhancement of run-time reconfiguration

Applications

  • Datacenter or cluster applications of reconfigurable computing
  • New uses of run-time reconfiguration in application-specific systems
  • Applications that utilize reconfigurable technology for performance and efficiency, and particularly submissions that make comparisons with other highly parallel architectures such as GPUs or DSPs
  • Novel use of state-of-the-art commercial FPGAs

_Journal Track Submission_

For the first time, FCCM introduces an exciting Journal Track in collaboration with the ACM Transactions on Reconfigurable Technology and Systems (TRETS). The Journal Track is specifically intended for original contributions (i.e., no conference-paper extensions are allowed) that would benefit from the longer articles possible in TRETS (up to 32 ACM-style single-column pages).

Submission Procedure: To submit to the Journal Track, please use the ACM TRETS submission system (https://mc.manuscriptcentral.com/trets) to submit to the “FCCM 2024 Journal Track”, which you can select after logging into your TRETS account. To prepare a TRETS manuscript for such a submission, please follow the ACM TRETS Author Guidelines (https://dl.acm.org/journal/trets/author-guidelines).

Review Process: If your submission is not rejected during the first round of TRETS review, you may submit a revised version for a second round of review after adequately addressing the reviewers’ comments. The authors of accepted Journal-Track papers will be invited to present their work at the FCCM’24 conference and to contribute an abstract of their TRETS paper to the FCCM proceedings. The full paper will be published by TRETS. If the paper is not accepted in the Journal Track after the second round of review but is still deemed worthy of publication, it will continue as a regular TRETS submission.

Important Dates – Journal Track Submission:

Submission to TRETS: Dec 20, 2023
Initial review: January 20, 2024
Revision submission deadline: February 20, 2024
Notification of acceptance: Mar 14, 2024

All deadlines apply to the Anywhere on Earth (UTC – 12) timezone

Direct FCCM Submission

Authors may also choose to submit through the direct FCCM paper submission route as usual.
Submission Website: https://fccm24.hotcrp.com/

Important Dates – Direct Submission:

All deadlines apply to the Anywhere on Earth (UTC – 12) timezone

Abstracts Due (All Papers):               January 9, 2024 (NO EXTENSIONS)
Submissions Due (All Papers):             January 15, 2024 (NO EXTENSIONS)
Workshop Proposals Due:                   February 16, 2024
Rebuttal Period:                          February 15 – 22, 2024
Notification of Acceptance (All Papers):  March 14, 2024
Artifact Evaluation (All Papers):         March 21, 2024
Demo Night Submissions:                   March 28, 2024
Notification of Acceptance (Demo Night):  April 4, 2024
Camera-Ready Submission:                  April 11, 2024
Early Registration Deadline:              April 19, 2024
Conference:                               May 5, 2024

Organizing Committee:

General Chair: Christophe Bobda (University of Florida)
Program Chair: Hayden So (University of Hong Kong)
Program Vice Chairs:
– Callie Hao (Georgia Tech)
– Lana Josipovic (ETHZ)
– John Wickerson (Imperial College, London)
Artifacts Chairs:
– Miriam Leeser (Northeastern University)
– Chris Lavin (AMD)
Finance Chair: Andrew Schmidt (AMD)
Sponsorship Chair: Naveen Purushotham (AMD)
Publications Chair: Ali Ahmadinia (California State University, San Marcos)
Workshops Chair: Jeff Goeders (Brigham Young University)
Publicity and Website Chair: Peipei Zhou (University of Pittsburgh)
Demo-Night Chair: Estelle Kao (Intel)
PhD Forum Chair: Dirk Koch (University of Heidelberg)
Local Arrangements Chair: Sujan Sah Kumar (University of Florida)

Paper Types:

Submissions can be made for either of the two paper types:

  1. Traditional technical papers that introduce and evaluate new technologies. These papers must have strong empirical results and address significant challenges of the corresponding problem.
  2. Practical papers that make significant practical contributions, including industry papers, as opposed to introducing and evaluating new technologies. For example, new tools built on existing technologies that help practitioners better use FPGAs. Practical papers will be reviewed based on the significance and technical soundness of the practical contribution.

Paper Formats:

Long papers are limited to 10 pages (excluding references). Short papers are limited to 6 pages (excluding references). Authors are encouraged to submit preliminary work as a short paper. This category is intended for new projects and early results or work that can be concisely presented in the 6-page budget. Submissions accepted as posters will have a one-page extended abstract.

Page restrictions for all formats exclude references, which may use additional pages.

Submissions violating the formatting requirements may be automatically rejected. Do not submit the same work in more than one of the formats.

Accepted papers will have the same page lengths as initial submissions. Short papers will have short oral presentations, and long papers may have long or short presentations based on committee decisions on the time required to present the material.

All submissions should be written in English. An online submission link will be available on the FCCM website. Papers must conform to the US letter-sized IEEE conference proceedings format to be reviewed and published.

Paper Preparation:

Across all topics (and especially for application papers), successful manuscripts will include sufficient details to reproduce the results presented (e.g., full part numbers, software versions). Application papers should not just be an implementation of an application on an FPGA but should show how reconfigurable technology is leveraged by the application and should ideally contain insights and lessons that can be carried forward into future designs. Additional suggestions and guidelines are available on www.fccm.org. See the ACM/TCFPGA Hall-of-Fame (hof.tcfpga.org) and the set of previous FCCM Best Paper winners (wiki.tcfpga.org/FCCMBest) for outstanding examples of FCCM papers.

Simultaneous Submissions:

Papers must not be simultaneously under review or waiting to appear at another conference or in a journal and must not be essentially the same as any paper that has been previously published. If a paper contains text or technical content that is similar to a previously published or submitted paper, that other paper should be cited in the FCCM submission, and the differences should be made clear.

Reviewer Conflicts

Authors must register any program-committee conflicts when they submit their paper. Conflicts include reviewers who have co-authored a paper with an author in the past 3 years, reviewers who share (or within the past year have shared) an institutional affiliation with an author, or other situations in which the relationship would prevent a reviewer from being objective. Note that if an undeclared conflict is discovered, or a conflict is declared in an attempt to “game” the review process, the submission may be rejected. If you believe you may have a conflict with the program chair, please contact the program chair well in advance of the submission deadline.

Review Process:

FCCM uses a double-blind reviewing system. Manuscripts must not identify authors or their affiliations. Authors are encouraged to cite their work but must not implicitly identify themselves. For example, references that clearly identify the authors (“We build on our previous work…”) should be written as “This work builds on XYZ [citation]”. Do not put a “deleted for double-blind” entry in the reference section.

In the case of widely available open-source software, authors should cite the website(s) but not claim to own them. Authors should also remember to mask grant numbers and other government markings during the review process. Note that there are resources for blinding open-source repositories for review, such as https://github.com/tdurieux/anonymous_github. Papers that attempt to identify the authors, or to leverage prior work or institutional support for a competitive advantage in the peer-review process, will not be considered. Placing a preliminary version of the unpublished paper on arXiv is not disqualifying, but it is also not encouraged; the mere fact that a paper could be unblinded by an active search does not undermine the spirit of the double-blind review. Artifacts, including open-source designs and tools, are encouraged; if there are questions about handling the blind-review process, contact the program chair.

FCCM 2024 includes a rebuttal phase. Specific questions from reviewers will be made available by February 15, 2024. Authors have the option to provide a response of up to 500 words by February 22, 2024. Reviewers will consider the responses during final paper deliberations.

Artifact Evaluation:
Authors of accepted full-length papers can optionally participate in an artifact evaluation process. Including artifacts with a paper submission is not required, nor does it elevate the submission above those without artifacts. Rather, the goal of artifact evaluation is to encourage the availability and reproducibility of published results.

Artifacts included with a submission will be subject to a separate and independent review process from their accompanying papers. Papers submitted with artifacts must preserve the double-blind nature of the review process, and all relevant links should be removed for blind review. Artifacts will be disclosed in a separate form that will be evaluated after paper acceptance.

Note: Artifact submission is optional, and authors are not required to open source their work for submission to FCCM.

More information about Artifact Evaluation can be found on the FCCM website.

Best Paper Award and a Special Section for the Best FCCM 2024 Papers in ACM TRETS:

FCCM 2024 will continue the tradition of presenting best long paper and best short paper awards. We will also invite the authors of the best papers to extend their work for consideration in a special section of ACM’s Transactions on Reconfigurable Technology and Systems (TRETS) dedicated to FCCM 2024.

Questions:

Questions about this call, submissions, and potential submissions should be directed to the program chair, Hayden So (hso@eee.hku.hk).



Call for Papers: ISPASS 2024
http://www.ispass.org/ispass2024/
Submitted by Fangjia Shen

The IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS) provides a forum for sharing advanced academic and industrial research focused on performance analysis in the design of computer systems and software. ISPASS 2024 will be held on May 5-7, 2024 in Indianapolis, Indiana. Authors are invited to submit previously unpublished work for possible presentation at the conference.

Papers are solicited in fields that include the following:

  • Performance and efficiency (power, area, etc.) evaluation methodologies
    • Analytical modeling
    • Statistical approaches
    • Tracing and profiling tools
    • Simulation techniques
    • Hardware (e.g., FPGA) accelerated simulation
    • Hardware performance counter architectures
    • Power, temperature, variability and/or reliability models for computer systems
    • Microbenchmark-based hardware analysis techniques
  • Foundations of performance and efficiency analysis
    • Metrics
    • Bottleneck identification and analysis
    • Visualization
  • Efficiency and performance analysis of commercial and experimental hardware
    • Multi-threaded, multicore and many-core architectures
    • Accelerators and graphics processing units
    • Memory systems, including storage-class memory
    • Embedded and mobile systems
    • Enterprise systems and data centers
    • HPC and Supercomputers
    • Computer networks
    • Quantum computing
    • Emerging technologies
  • Efficiency and performance analysis of emerging workloads and software
    • Software written in managed languages
    • Virtualization and consolidation workloads
    • Datacenter, internet-sector workloads
    • Embedded, multimedia, games, telepresence
    • Deep learning and convolutional neural networks
  • Application and system code tuning and optimization
  • Confirmations or refutations of important prior results

In addition to research papers, ISPASS welcomes tool and benchmark paper submissions. The conference is an ideal forum to introduce new tools and benchmarks to the community. These papers, which can detail tools and benchmarks in the above fields of interest, will be judged primarily on their potential to enable and amplify future research, which should be clearly motivated in the paper. We also expect authors of accepted tool/benchmark papers to open-source their tool/benchmark before the conference.

Important Dates

  • Paper abstract submission deadline: December 8, 2023, 11:59:59 PM Anywhere on Earth
  • Full submission deadline: December 15, 2023, 11:59:59 PM Anywhere on Earth
  • Rebuttal: February 9-13, 2024
  • Paper notification: February 29, 2024

Call for Papers: ARC 2024
https://arc2024.av.it.pt/
Submitted by Zhenman Fang

The 20th International Symposium on Applied Reconfigurable Computing (ARC 2024)
March 20 – 22, 2024, Aveiro, Portugal (https://arc2024.av.it.pt/)
Symposium information
Applied Reconfigurable Computing focuses on the use of reconfigurable hardware, such as field-programmable gate arrays (FPGAs), to accelerate and optimize various computational tasks and applications. It involves designing and implementing hardware configurations that can be dynamically adapted to specific workloads, improving performance and efficiency in a wide range of applications. The 20th edition of the symposium aims to bring together researchers and practitioners of reconfigurable computing with an emphasis on practical applications of this technology.
The ARC’2024 proceedings will be published as a volume in Springer’s Lecture Notes in Computer Science (LNCS) series and will also be available through the SpringerLink online service.
Selected papers will be invited to be submitted for consideration in a special issue at a publishing venue to be announced shortly (please check the symposium website).
Submission Information
Authors are invited to submit original contributions in English including, but not limited to, the areas of interest mentioned below. Submissions must be uploaded to the ARC website and must identify the format of the contribution as either:
– Long Papers (12 pages maximum): mainly accomplished results (oral presentation).
– Short Papers (6 pages maximum): work in progress or recent developments (poster presentation).
The format of the paper should be according to the Springer-Verlag LNCS Series format rules (see: http://www.springer.com/comp/lncs/authors.html).
Important Dates
Submission deadline:                  11 December 2023
Decision Notification:                10 January  2024
Author Registration:                  30 January  2024
Camera-Ready Paper Submission:       07 February 2024
Topics of Interest
Papers in English in all areas of applied reconfigurable computing are invited, with particular emphasis on:
» Design Methods & Tools
   High-level languages & compilation
   Simulation & synthesis
   Design space exploration
» Applications
   Security & cryptography
   Embedded computing & DSP
   Robotics, space, bioinformatics
   Deep learning & neural networks
» Architectures
   Computation in/near memory
   Self-adaptive, evolvable PSoCs & adaptive SoCs
   Low-power designs
   Approximate computing
   Fine-/coarse-/mixed-grained
   Interconnect (NoCs, …)
   Resilient & fault tolerant
Organizing Committee
General Chair:     Pedro C. Diniz   (Univ. of Porto, Portugal)
Program Chairs:
   Iouliia Skliarova      (University of Aveiro, Portugal)
   Piedad Brox Jiménez    (Microelectronics Inst. of Seville, Spain)
Proceedings Chair: Mário Véstias          (ISEL, Lisboa, Portugal)
Local Chair:    Arnaldo Oliveira   (University of Aveiro, Portugal)
Special Issue Chair: Christian Hochberger (TU Darmstadt, Germany)

Call for Papers: 6th AccML Workshop at HiPEAC 2024
https://accml.dcs.gla.ac.uk/
Submitted by Jose Cano

6th Workshop on Accelerated Machine Learning (AccML)
Co-located with the HiPEAC 2024 Conference
(https://www.hipeac.net/2024/munich/)
January 17, 2024
Munich, Germany

CALL FOR CONTRIBUTIONS

The remarkable performance achieved in a variety of application areas (natural language processing, computer vision, games, etc.) has led to the emergence of heterogeneous architectures to accelerate machine learning workloads. In parallel, production deployment, model complexity, and model diversity have pushed for higher-productivity systems, more powerful programming abstractions, software and system architectures, dedicated runtime systems and numerical libraries, and deployment and analysis tools. Deep learning models are generally memory- and compute-intensive, for both training and inference. Accelerating these operations has obvious advantages: first, reducing energy consumption (e.g., in data centers), and second, making these models usable on smaller devices at the edge of the Internet. In addition, while convolutional neural networks have motivated much of this effort, numerous applications and models involve a wider variety of operations, network architectures, and data processing. These applications and models permanently challenge computer architecture, the system stack, and programming abstractions. The high level of interest in these areas calls for a dedicated forum to discuss emerging acceleration techniques and computation paradigms for machine learning algorithms, as well as the application of machine learning to the construction of such systems.

LINKS TO THE WORKSHOP PAGES
Organizers: https://accml.dcs.gla.ac.uk/
HiPEAC: https://www.hipeac.net/2024/munich/#/program/sessions/8090/

TOPICS
Topics of interest include (but are not limited to):

– Novel ML systems: heterogeneous multi/many-core systems, GPUs and FPGAs;
– Software ML acceleration: languages, primitives, libraries, compilers and frameworks;
– Novel ML hardware accelerators and associated software;
– Emerging semiconductor technologies with applications to ML hardware acceleration;
– ML for the construction and tuning of systems;
– Cloud and edge ML computing: hardware and software to accelerate training and inference;
– Computing systems research addressing the privacy and security of ML-dominated systems;
– ML techniques for more efficient model training and inference (e.g. sparsity, pruning, etc);
– Generative AI and its impact on computational resources

INVITED SPEAKERS

– Giuseppe Desoli (STMicroelectronics): Revolutionizing Edge AI: Enabling Ultra-low-power and High-performance Inference with In-memory Computing Embedded NPUs

Abstract: The increasing demand for Edge AI has led to the development of complex cognitive applications on edge devices, where energy efficiency and compute density are crucial. While hardware Neural Processing Units (NPUs) have already shown considerable benefits, the growing need for more complex algorithms demands significant improvements. To address the limitations of traditional von Neumann architectures, novel designs based on computational memories are being developed by industry and academia. In this talk, we present STMicroelectronics’ future directions in designing NPUs that integrate digital and analog In-Memory Computing (IMC) technology with high-efficiency dataflow inference engines capable of accelerating a wide range of Deep Neural Networks (DNNs). Our approach combines SRAM computational memory and phase-change resistive memories. We discuss the architectural considerations and purpose-designed compiler mapping algorithms required for practical industrial applications, as well as some of the challenges we foresee in harnessing the potential of in-memory computing going forward.

– John Kim (KAIST): Domain-Specific Networks for Accelerated Computing

Abstract: Domain-specific architectures are hardware computing engines specialized for a particular application domain. As domain-specific architectures become widely used, the interconnection network can become the bottleneck as the system scales. In this talk, I will present the role of domain-specific interconnection networks in enabling scalable domain-specific architectures. In particular, I will present the impact of the physical/logical topology of the interconnection network on communication patterns such as AllReduce in domain-specific systems. I will also discuss the opportunities of domain-specific interconnection networks and how they can be leveraged to optimize overall system performance and efficiency. As a case study, I will present the unique design of the Groq software-managed scale-out system and how it adopts architectures from high-performance computing to enable a domain-specific interconnection network.

– Adam Paszke (Google): A Multi-Platform High-Productivity Language for Accelerator Kernels

Abstract: Compute accelerators are the workhorses of modern scientific computing and machine learning workloads. But, their ever increasing performance also comes at a cost of increasing micro-architectural complexity. Worse, it happens at a speed that makes it hard for both compilers and low-level kernel authors to keep up. At the same time, the increased complexity makes it even harder for a wider audience to author high-performance software, leaving them almost entirely reliant on high-level libraries and compilers. In this talk I plan to introduce Pallas: a domain specific language embedded in Python and built on top of JAX. Pallas is highly inspired by the recent development and success of the Triton language and compiler, and aims to present users with a high-productivity programming environment that is a minimal extension over native JAX. For example, kernels can be implemented using the familiar JAX-NumPy language, while a single line of code can be sufficient to interface the kernel with a larger JAX program. Uniquely, Pallas kernels support a subset of JAX program transformations, making it possible to derive a number of interesting operators from a single implementation. Finally, based on our experiments, Pallas can be leveraged for high-performance code generation not only for GPUs, but also for other accelerator architectures such as Google’s TPUs.
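
To make the abstract’s claim concrete, below is a minimal sketch of what a Pallas-style kernel can look like, written against the publicly available jax.experimental.pallas module. The add_kernel function, the use of interpret mode, and the exact call below are illustrative assumptions for this digest and are not taken from the talk itself.

# Minimal sketch, assuming the jax.experimental.pallas API as publicly documented.
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def add_kernel(x_ref, y_ref, o_ref):
    # The kernel body is written in familiar JAX-NumPy style; inputs and the
    # output are references that are read and written with indexing syntax.
    o_ref[...] = x_ref[...] + y_ref[...]

@jax.jit
def add(x, y):
    # A single pallas_call line interfaces the kernel with a larger JAX program.
    # interpret=True runs the kernel in interpret mode so this sketch also
    # works without a GPU/TPU backend.
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        interpret=True,
    )(x, y)

x = jnp.arange(8, dtype=jnp.float32)
print(add(x, jnp.ones_like(x)))  # prints [1. 2. ... 8.]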

– Ayse Coskun (Boston University): ML-Powered Diagnosis of Performance Anomalies in Computer Systems

Abstract: Today’s large-scale computer systems that serve high performance computing and cloud workloads face challenges in delivering predictable performance while maintaining efficiency, resilience, and security. Much of computer system management has traditionally relied on (manual) expert analysis and policies based on heuristics derived from such analysis. This talk will discuss a new path toward designing ML-powered “automated analytics” methods for large-scale computer systems and how to make strides towards a longer-term vision where computing systems are able to self-manage and improve. Specifically, the talk will first cover how to systematically diagnose root causes of performance “anomalies”, which cause substantial efficiency losses and higher cost. Second, it will discuss how to identify applications running on computing systems and how such discoveries can help reduce vulnerabilities and avoid unwanted applications. The talk will also highlight how to apply ML in a practical and scalable way to help understand complex systems, demonstrate methods to help standardize the study of performance anomalies, discuss explainability of applied ML methods in the context of computer systems, and point out future directions in automating computer system management.

SUBMISSION
Papers will be reviewed by the workshop’s technical program committee according to criteria regarding the submission’s quality, relevance to the workshop’s topics, and, foremost, its potential to spark discussions about directions, insights, and solutions in the context of accelerating machine learning. Research papers, case studies, and position papers are all welcome.

In particular, we encourage authors to submit work-in-progress papers: To facilitate sharing of thought-provoking ideas and high-potential though preliminary research, authors are welcome to make submissions describing early-stage, in-progress, and/or exploratory work in order to elicit feedback, discover collaboration opportunities, and spark productive discussions.

The workshop does not have formal proceedings.

IMPORTANT DATES
Submission deadline: November 17, 2023
Notification of decision: December 8, 2023

ORGANIZERS
José Cano (University of Glasgow)
Valentin Radu (University of Sheffield)
José L. Abellán (University of Murcia)
Marco Cornero (DeepMind)
Ulysse Beaugnon (Google)
Juliana Franco (DeepMind)


Call for Papers: GPGPU-16
https://mocalabucm.github.io/gpgpu2024/
Submitted by Daniel Wong

Call for Papers for GPGPU-16
Held in cooperation with PPoPP’24
Full-Day Workshop (March 2 or 3, 2024)

Overview:
GPUs are delivering more and more computing power required by modern society. With the growing popularity of massively parallel devices, users demand better performance, programmability, reliability, and security. The goal of this workshop is to provide a forum to discuss massively parallel applications, environments, platforms, and architectures, as well as infrastructures that facilitate related research.

Authors are invited to submit papers of original research in the general area of GPU computing and architectures. Topics include, but are not limited to:

  • GPU Architecture and Hardware
    • Next-generation GPU architectures
    • Energy-efficient GPU designs
    • Scalable multi-GPU systems
    • GPU memory hierarchies and management
  • Programming Models and Compilers
    • High-level programming abstractions for GPUs
    • Compiler optimizations for GPU codes
    • Source-to-source translations and tools
    • Debugging and profiling tools for GPUs
  • GPU Algorithms and Data Structures
    • Parallel algorithms tailored for GPUs
    • Data structures optimized for GPU memory hierarchies
    • Algorithmic primitives and building blocks
  • Performance Optimization Techniques
    • Performance modeling and benchmarking
    • Auto-tuning and performance portability
    • Techniques for reducing communication overheads
  • GPU Applications
    • Case studies of real-world GPU applications
    • GPU applications in scientific computing, machine learning, graphics, and emerging fields (e.g., quantum, neuromorphic, bioinformatics and genomics)
    • Performance comparisons between GPU and other parallel computing platforms
  • Integration of GPUs with Other Technologies
    • GPU and FPGA co-processing
    • Hybrid systems (e.g., CPU-GPU, GPU-TPU integration)
    • Cloud-based GPU computing
  • Challenges and Future Trends
    • Reliability and fault tolerance in GPU systems
    • Security and privacy concerns in GPU computing
    • The future of heterogeneity in computing platforms
    • GPU programming and architecture education

Important Dates (Tentative) (11:59 pm, Anywhere on Earth)
Papers due: Nov 28, 2023
Notification: Jan 6, 2024
Final paper due: Feb 17, 2024

Submission Guidelines

Full paper submissions must be in PDF format for A4 or US letter-size paper. They must not exceed 6 pages (excluding references) in the standard ACM two-column SIGPLAN format (review mode, sigplan template). Authors can choose whether to reveal their identity in the submission.

Templates for the ACM format are available for Microsoft Word and LaTeX at: https://www.acm.org/publications/proceedings-template.


Call for Workshops/Tutorials: ISC High Performance 2024 Call for Tutorials
https://www.isc-hpc.com/submissions-tutorials-2024.html
Submitted by Diana Moise

Deadline for submission is December 8th, 2023
The ISC High Performance 2024 Conference call for tutorials is now open! (https://www.isc-hpc.com/submissions-tutorials-2024.html)
The ISC tutorials are interactive courses and collaborative learning experiences focusing on key topics in high performance computing, machine learning, data analytics, and quantum computing. Renowned experts in their respective fields will give attendees a comprehensive introduction to the topic as well as a closer look at specific problems. Tutorials are encouraged to include a “hands-on” component to allow attendees to practice with prepared materials.
Submitted tutorial proposals will be reviewed by the ISC 2024 Tutorials Committee, which is chaired by Shadi Ibrahim, Inria, France, with Diana Moise, HPE, Switzerland as Deputy Chair.
All tutorial attendees require a tutorial pass. For accepted tutorials, ISC will provide a limited number of complimentary tutorial participation passes to tutorial presenters.
ISC 2024 registration fees will be published in early 2024.
AREAS OF INTEREST
Tutorial submissions are encouraged on the following topics:
 – Any area of interest listed in the call for research papers (https://www.isc-hpc.com/submissions-research-papers-2024.html).
 – Additional topics that broaden community engagement.
 – Innovative and emerging HPC technologies, e.g., cloud technologies for HPC, quantum computing, artificial intelligence, and machine learning.
 – Introductory tutorials for attendees new to HPC.
We encourage tutorials that serve a broad audience over tutorials that focus solely on research in a limited domain or by a particular group. Practical tutorials are preferred over purely theoretical ones, and we encourage organizers to incorporate hands-on sessions where appropriate.
REVIEW
– Each tutorial will be reviewed by a minimum of 3 reviewers.
– Criteria for review include originality, significance, timeliness, impact, community interest, attendance in prior years (if applicable), quality, hands-on activity, and clarity of the proposal.
– Reviews will include actionable feedback on the length of the tutorial and the organization of its content (for example, suggestions to add or remove material).
IMPORTANT DATES
Submission Deadline December 8, 2023, 23:59 AoE
Notification of Acceptance February 9, 2024
Working Materials for Tutorial Attendees due April 30, 2024
Tutorials May 12, 2024
Half-day: 9:00 am – 1:00 pm, 2:00 pm – 6:00 pm
Full-day: 9:00 am – 6:00 pm
PROGRAM COMMITTEE
Shadi Ibrahim, Inria, France (Chair)
Diana Moise, Cray, HPE, Switzerland (Deputy Chair)
Olivier Beaumont, Inria, France
Jalil Boukhobza, ENSTA Bretagne, Lab-STICC CNRS UMR 6285, France
Suren Byna, The Ohio State University, Lawrence Berkeley National Laboratory, United States of America
Philip Carns, Argonne National Laboratory, United States of America
Ewa Deelman, USC Information Sciences Institute, United States of America
Aniello Esposito, HPE, Switzerland
Ana Gainaru, Oak Ridge National Laboratory, United States of America
Bilel Hadri, KAUST Supercomputing Laboratory, Saudi Arabia
Heike Jagode, University of Tennessee Knoxville, United States of America
Michael Kuhn, Otto von Guericke University Magdeburg, Germany
Jay Lofstead, Sandia National Laboratories, United States of America
Sarah Neuwirth, Johannes Gutenberg University Mainz, Jülich Supercomputing Centre (JSC), Germany
Gabriel Noaje, NVIDIA, Singapore
George Pallis, University of Cyprus, Cyprus
Antonio J. Peña, Barcelona Supercomputing Center (BSC), Spain
Anna Queralt, Polytechnic University of Catalonia, Barcelona Supercomputing Center, Spain
Ana-Lucia Varbanescu, University of Twente, University of Amsterdam, Netherlands
Amelie Chi Zhou, Hong Kong Baptist University, Hong Kong
For more complete and up-to-date information, please see the online call for tutorials here: https://www.isc-hpc.com/submissions-tutorials-2024.html

Call for Workshops/Tutorials: Call for Workshops and Tutorials for FPGA’24
https://www.isfpga.org/call-for-workshops/
Submitted by Aman Arora

The 32nd edition of FPGA invites the submission of half-day and full-day tutorial and workshop proposals. The workshops will be held on March 3, 2024 in Monterey, California, USA. The aim of the conference workshops is to emphasize emerging topics related, but not limited, to FPGA architectures and tools, reconfigurable computing, and custom hardware acceleration for applications such as communications, machine learning, networking, and other problems. The tutorials are expected to enable beginning researchers to enter the area, current researchers to broaden their scope, and practitioners to gain new insights and applicable skills. The workshops should highlight current topics related to technical and/or industry issues and should include a set of invited presentations and/or panels that encourage the participation of attendees in active discussion.

Workshop Proposal Format:

Each workshop proposal (maximum 2 pages) must include:

  • Title of the tutorial or workshop
  • Organizers: names, affiliation, contact information
  • Scope and topics
  • Rationale: Why is the topic current and important? Why would the tutorial/workshop attract attendees?
  • Duration: half day or full day, with a tentative schedule (up to 3.5 hours for a half day; the event can be shorter than the allotted time)
  • Names of potential speakers

Proposal Submission:

Proposals should be submitted as a single PDF file via email to the FPGA Workshop Chair at workshopchair@isfpga.org (with the subject line “FPGA Workshop/Tutorial Proposal:”) by December 1, 2023. Should you have any questions regarding this call, please direct them to the same email address.


Please view the SIGARCH website for the latest postings, to submit new posts, and for general SIGARCH information. We also encourage you to visit the Computer Architecture Today Blog.

- Akanksha Jain
SIGARCH Content Editor
