3rd Workshop on Near-Data Processing
Computing in large-scale systems is shifting away from the traditional
compute-centric model, which served well for many decades, toward a much more
data-centric one. This transition is driven by the evolving nature of
computing itself: workloads are no longer dominated by the execution of
arithmetic and logic operations, but by large data volumes and the cost of
moving data to the locations where computations are performed. Data
movement impacts performance, power efficiency, and reliability, three
fundamental properties of a system. These trends are leading to changes in the
fundamental components of a system. These trends are leading to changes in the
computing paradigm, in particular the notion of moving computation to the data
in a so-called Near-Data Processing approach, which seeks to perform
computations in the most appropriate location depending on where data resides
and what needs to be extracted from that data. Examples already exist in
systems that perform some computations close to disk storage, operating on
the data as it streams off the disks and filtering it so that only useful
items are transferred for processing elsewhere in the system.
Conceptually, the same principle can be applied throughout a system, by placing
computing resources close to where data is located, and decomposing
applications so that they can leverage such a distributed and potentially
heterogeneous computing infrastructure.
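To make the near-storage filtering example above concrete, the following is a
minimal sketch in Python of the idea, not an implementation of any particular
system. The names (StorageNode, scan, scan_with_filter) are hypothetical;
real devices expose this capability through vendor- or drive-specific
interfaces. The point is the contrast between a compute-centric scan, which
ships every record to the host, and a near-data scan, which runs the
predicate on the device so that only matching records cross the interconnect.

```python
# Hypothetical sketch of near-data filtering; all names are illustrative.
from typing import Callable, Iterable, Iterator

Record = dict


class StorageNode:
    """Models a storage device holding records plus a small compute element."""

    def __init__(self, records: Iterable[Record]):
        self._records = list(records)

    def scan(self) -> Iterator[Record]:
        # Compute-centric path: every record crosses the interconnect
        # and the host does all the filtering.
        yield from self._records

    def scan_with_filter(self, predicate: Callable[[Record], bool]) -> Iterator[Record]:
        # Near-data path: the predicate runs on the device itself,
        # so only useful records are transferred to the host.
        yield from (r for r in self._records if predicate(r))


if __name__ == "__main__":
    node = StorageNode({"id": i, "value": i * 7 % 100} for i in range(100_000))

    # Host-side filtering would move all 100,000 records; device-side
    # filtering moves only those satisfying the predicate.
    hot = list(node.scan_with_filter(lambda r: r["value"] > 95))
    print(f"records transferred: {len(hot)} of 100000")
```

The same decomposition generalizes to other levels of the memory and storage
hierarchy: wherever a predicate or reduction can run next to the data, the
volume of data that must move shrinks accordingly.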