Call for Participation: D43D: International Workshop on Design for 3D Silicon Integration

Submitted by Djordje Jevdjic
http://www.d43d.com/
June 23 to June 24, 2014

D43D: International Workshop on Design for 3D Silicon Integration
June 23-24, 2014
EPFL, Lausanne

Scope & Venue

3D IC is emerging as a promising approach to extend Moore’s law, overcome pin bandwidth limitations, and improve digital platform density and cost beyond a single chip. As a technology, however, 3D IC also introduces a number of key design, methodological, implementation, and technological challenges that must be overcome for it to become practical and cost-effective.
This workshop is a two-day forum that brings together experts from industry and academia to shed light on these near-term to long-term challenges and solutions, and covers topics including, but not limited to, applications requiring 3D, 3D processor, memory and interconnect architectures, thermal management, design methodologies and tools, and testing.

The workshop will take place at EPFL, one of the premier institutions of computer science and engineering, consistently ranked among the most internationally diverse campuses, located on the shores of Lake Geneva in Switzerland.

Registration & Info
For hotel accommodation and workshop information please check our website:
www.d43d.com.
Registration is open until May 25th, 2014.

Organizers
General Chair:
Babak Falsafi, EPFL

Program Chair:
Pascal Vivet, CEA-LETI

Finance Chair:
Stéphanie Baillargues, EPFL

Website Chair:
Javier Picorel, EPFL

Steering Committee:
David Atienza, EPFL
Ahmed Jerraya, CEA-LETI

Sponsors
The workshop is partially sponsored by EcoCloud (www.ecocloud.ch) and IEEE CEDA.

BadgerTrap: a tool to instrument TLB misses

Submitted by Mark D. Hill
http://www.cs.wisc.edu/multifacet/badger-trap/

BadgerTrap (http://www.cs.wisc.edu/multifacet/badger-trap/) is a tool to
instrument x86-64 TLB misses. It converts TLB misses into reserved-bit page
faults by setting a reserved bit in the page table entry, so that the Linux
kernel can intercept and instrument each miss. The tool can help guide memory
management unit (MMU) research for x86-64 machines. In its current form it
counts the number of TLB misses, but it can also generate traces of TLB misses
or support any other TLB-miss study by instrumenting the misses in the Linux
kernel.
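
The core trick is simple to illustrate. The sketch below is a simplified,
self-contained C illustration of the idea, not the actual BadgerTrap kernel
code; the names, bit position, and handler are hypothetical stand-ins. An
"armed" page table entry has a reserved bit set, so the next hardware page
walk for that page (i.e., the next TLB miss) raises a fault; a handler
recognizes the fault, counts the miss, and clears the bit so the access can
proceed (the real tool then re-arms the entry after the TLB is filled).

#include <stdint.h>
#include <stdio.h>

/* Simplified, user-space illustration of the reserved-bit trick.
 * All names and bit choices are hypothetical; real x86-64 PTEs and
 * the real kernel fault path are more involved. */

#define PTE_PRESENT  (1ULL << 0)
#define PTE_RESERVED (1ULL << 51)   /* a bit hardware requires to be zero */

static uint64_t tlb_miss_count;

/* "Arm" a PTE: set the reserved bit so the next page walk faults. */
static void badger_trap_arm(uint64_t *pte) {
    *pte |= PTE_RESERVED;
}

/* Fault-handler stand-in: if the fault was caused by our reserved bit,
 * count it as a TLB miss and clear the bit so the access can proceed. */
static int handle_reserved_fault(uint64_t *pte) {
    if (!(*pte & PTE_RESERVED))
        return 0;                 /* not one of ours: a genuine fault */
    tlb_miss_count++;
    *pte &= ~PTE_RESERVED;        /* let the page walk complete */
    return 1;
}

int main(void) {
    uint64_t fake_pte = PTE_PRESENT | 0x1234000ULL;  /* made-up frame */
    badger_trap_arm(&fake_pte);

    /* Pretend the hardware walker hit the armed PTE on a TLB miss. */
    if (handle_reserved_fault(&fake_pte))
        printf("counted TLB misses: %llu\n",
               (unsigned long long)tlb_miss_count);
    return 0;
}

In BadgerTrap itself this logic lives in the Linux kernel’s page-fault path,
so misses can be counted or traced without modifying the application.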

–Jayneel Gandhi, Arkaprava Basu, Mark D. Hill, Michael M. Swift
Wisconsin Multifacet Project, University of Wisconsin-Madison

Call for Participation: ISCA 2014

Submitted by Natalie Enright Jerger
http://cag.engr.uconn.edu/isca2014/
June 14 to June 18, 2014

Early Registration Deadline Extended to May 24, 2014!

CALL FOR PARTICIPATION

ISCA 2014
The 41st International Symposium on Computer Architecture

Minneapolis, MN — June 14-18, 2014

http://cag.engr.uconn.edu/isca2014/

*** EARLY REGISTRATION DEADLINE: May 24, 2014 ***

*** HOTEL CUT-OFF DATE: May 24, 2014

*** TUTORIALS
Saturday, June 14, 2014
T1: ESESC, a fast Multicore Simulator with Power and Thermal Models
T2: Analyzing Analytics (morning)
T3: Accelerating Big Data Processing with Hadoop and Memcached on
Datacenters with Modern Networking and Storage Architecture (afternoon)
T4: Architectural Modeling for Emerging Memory Technologies (afternoon)

Sunday, June 15, 2014
T5: Heterogeneous System Architecture (HSA): Architecture and Algorithms
T6: Research Infrastructures for Accelerator-centric Architectures
T7: PinPoints: Simulation Region Selection with PinPlay and Sniper (morning)
T8: Graphics Processor Unit (GPU) Programming Support in Open Computer
Vision (OpenCV) Applications (afternoon)

*** WORKSHOPS
Saturday, June 14, 2014
W1: 2nd International Workshop on Parallelism in Mobile Platforms (PRISM-2)
W2: 2nd Workshop on Near-threshold Computing (WNTC)
W3: Heterogeneous Architectures: Software and Hardware (HASH) (CANCELLED)
W4: 1st Workshop on Neuromorphic Architectures (NeuroArch)
W5: Memory Forum (afternoon)

Sunday, June 15, 2014
W6: 3rd Workshop on Hardware and Architectural Support for Security
and Privacy (HASP)
W7: 2nd Workshop on Many-core Embedded Systems (MES)
W8: 4th Workshop on Energy Secure System Architectures (ESSA)
W9: 4th Workshop on Architectures and Systems for Big Data (ASBD 2014)
W10: Championship Branch Prediction (CBP-4) (morning)
W11: 11th Workshop on Duplicating, Deconstructing and Debunking (WDDD)
(afternoon)
W12: 7th Annual Workshop on Architectural and Microarchitectural
Support for Binary Translation (AMAS-BT) (afternoon)

*** MAIN CONFERENCE PROGRAM
Monday, June 16, 2014

Monday, 8:45am-9:45am
Keynote I: Insight into the MICROSOFT XBOX ONE Technology
Dr. Ilan Spillinger, Corporate Vice President, Technology and Silicon,
Microsoft

Monday, 10:45am-12:00pm
Session 1: Machines and Prototypes

Unifying on-chip and inter-node switching within the Anton 2 network
Brian Towles, J.P. Grossman, Brian Greskamp, and David E. Shaw (D. E.
Shaw Research)

A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services
Andrew Putnam (Microsoft), Adrian M. Caulfield (Microsoft), Eric S.
Chung (Microsoft), Derek Chiou (Microsoft and University of Texas at
Austin), Kypros Constantinides (Amazon), John Demme (Columbia
University), Hadi Esmaeilzadeh (Georgia Institute of Technology),
Jeremy Fowers (Microsoft), Gopi Prashanth Gopal (Microsoft), Jan Gray
(Microsoft), Michael Haselman (Microsoft), Scott Hauck (Microsoft and
University of Washington), Stephen Heil (Microsoft), Amir Hormati
(Google), Joo-Young Kim (Microsoft), Sitaram Lanka (Microsoft), James
Larus (EPFL), Eric Peterson (Microsoft), Simon Pope (Microsoft), Aaron
Smith (Microsoft), Jason Thong (Microsoft), Phillip Yi Xiao
(Microsoft), Doug Burger (Microsoft)

SCORPIO: A 36-Core Research Chip Demonstrating Snoopy Coherence on a
Scalable Mesh NoC with In-Network Ordering
Bhavya K. Daya, Chia-Hsin Owen Chen, Suvinay Subramanian, Woo-Cheol
Kwon, Sunghyun Park, Tushar Krishna, Jim Holt, Anantha P.
Chandrakasan, Li-Shiuan Peh (Massachusetts Institute of Technology)

Monday, 1:15pm-2:55pm
Session 2A: Resilience

Avoiding Core’s DUE & SDC via Acoustic Wave Detectors and Tailored
Error Containment and Recovery
Gaurang Upasani (Universitat Politècnica de Catalunya), Xavier Vera
(Intel Barcelona Research Center), Antonio González (Universitat
Politècnica de Catalunya / Intel Barcelona Research Center)

MemGuard: A Low Cost and Energy Efficient Design to Support and
Enhance Memory System Reliability
Long Chen, Zhao Zhang (Iowa State University)

GangES: Gang Error Simulation for Hardware Resiliency Evaluation
Siva Kumar Sastry Hari (NVIDIA), Radha Venkatagiri (University of
Illinois at Urbana-Champaign), Sarita V. Adve (University of Illinois
at Urbana-Champaign), Helia Naeimi (Intel Labs)

Real-World Design and Evaluation of Compiler-Managed GPU Redundant
Multithreading
Jack Wadden (University of Virginia), Alexander Lyashevsky (AMD
Research), Sudhanva Gurumurthi (AMD Research), Vilas Sridharan (RAS
Architecture, AMD), Kevin Skadron (University of Virginia)

Session 2B: Design Space Exploration

ArchRanker: A Ranking Approach to Design Space Exploration
Tianshi Chen (Chinese Academy of Sciences), Qi Guo (Carnegie Mellon
University), Ke Tang (University of Science and Technology of China),
Olivier Temam (Inria), Zhiwei Xu (Chinese Academy of Sciences),
Zhi-Hua Zhou (Nanjing University), Yunji Chen (Chinese Academy of
Sciences)

Aladdin: A Pre-RTL, Power-Performance Accelerator Simulator Enabling
Large Design Space Exploration of Customized Architectures
Yakun Sophia Shao, Brandon Reagen, Gu-Yeon Wei, David Brooks (Harvard
University)

SynFull: Synthetic Traffic Models Capturing Cache Coherent Behaviour
Mario Badr, Natalie Enright Jerger (University of Toronto)

Harnessing ISA Diversity: Design of a Heterogeneous-ISA Chip Multiprocessor
Ashish Venkat, Dean M. Tullsen (University of California, San Diego)

Monday, 3:25pm-5:05pm
Session 3A: Caches

The Direct-to-Data (D2D) Cache: Navigating the Cache Hierarchy with a
Single Lookup
Andreas Sembrant, Erik Hagersten, David Black-Schaffer (Uppsala University)

SC2: A Statistical Compression Cache Scheme
Angelos Arelakis, Per Stenstrom (Chalmers University of Technology)

The Dirty-Block Index
Vivek Seshadri (Carnegie Mellon University), Abhishek Bhowmick
(Carnegie Mellon University), Onur Mutlu (Carnegie Mellon University),
Phillip B. Gibbons (Intel Pittsburgh), Michael A. Kozuch (Intel
Pittsburgh), Todd C. Mowry (Carnegie Mellon University)

Going Vertical in Memory Management: Handling Multiplicity by Multi-policy
Lei Liu (Chinese Academy of Sciences), Yong Li (University of
Pittsburgh), Zehan Cui (Chinese Academy of Sciences), Yungang Bao
(Chinese Academy of Sciences), Mingyu Chen (Chinese Academy of
Sciences), Chengyong Wu (Chinese Academy of Sciences)

Session 3B: GPUs and Parallelism

Fine-grain Task Aggregation and Coordination on GPUs
Marc S. Orr (University of Wisconsin-Madison / AMD Research), Bradford
M. Beckmann (AMD Research), Steven K. Reinhardt (AMD Research), David
A. Wood (University of Wisconsin-Madison / AMD Research)

Enabling Preemptive Multiprogramming on GPUs
Ivan Tanasic (Barcelona Supercomputing Center / Universitat
Politecnica de Catalunya), Isaac Gelado (NVIDIA Research), Javier
Cabezas (Barcelona Supercomputing Center / Universitat Politecnica de
Catalunya), Alex Ramirez (Barcelona Supercomputing Center /
Universitat Politecnica de Catalunya), Nacho Navarro (Barcelona
Supercomputing Center / Universitat Politecnica de Catalunya), Mateo
Valero (Barcelona Supercomputing Center / Universitat Politecnica de
Catalunya)

Single-Graph Multiple Flows: Energy Efficient Design Alternative for GPGPUs
Dani Voitsechov, Yoav Etsion (Technion – Israel Institute of Technology)

HELIX-RC: An Architecture-Compiler Co-Design for Automatic
Parallelization of Irregular Programs
Simone Campanoni (Harvard University), Kevin Brownell (Harvard
University), Svilen Kanev (Harvard University), Timothy M. Jones
(University of Cambridge), Gu-Yeon Wei (Harvard University), David
Brooks (Harvard University)

Tuesday, June 17, 2014

Tuesday, 8:30-9:30
Keynote II: Should Computer Architects Take a Closer Look At Today’s
Most Pervasive Computer System – The Mobile Phone?
Prof. Trevor Mudge, Department of Computer Science and Engineering,
University of Michigan

Tuesday, 10:45am-12:00pm
Session 4: Emerging Technologies

Efficient Digital Neurons for Large Scale Cortical Architectures
James E. Smith (University of Wisconsin-Madison)

An Examination of the Architecture and System-level Tradeoffs of
Employing Steep Slope Devices in 3D CMPs
Karthik Swaminathan, Huichu Liu, Jack Sampson, Vijaykrishnan Narayanan
(Pennsylvania State University)

STAG: Spintronic-Tape Architecture for GPGPU Cache Hierarchies
Rangharajan Venkatesan, Shankar Ganesh Ramasubramanium, Swagath
Venkataramani, Kaushik Roy, Anand Raghunathan (Purdue University)

Tuesday, 2:00pm-3:15pm
Session 5A: NVRAM

Memory Persistency
Steven Pelley, Peter M. Chen, Thomas F. Wenisch (University of Michigan)

Reducing Access Latency of MLC PCMs through Line Striping
Morteza Hoseinzadeh (Sharif University of Technology), Mohammad
Arjomand (Sharif University of Technology), Hamid Sarbazi-Azad (Sharif
University of Technology / Institute for Research in Fundamental
Sciences)

HIOS: A Host Interface I/O Scheduler for Solid State Disks
Myoungsoo Jung (University of Texas at Dallas), Wonil Choi (University
of Texas at Dallas), Shekhar Srikantaiah (Qualcomm), Joonhyuk Yoo
(Daegu University), Mahmut T. Kandemir (Pennsylvania State University)

Session 5B: Datacenters and Cloud

Towards Energy Proportionality for Large-Scale Latency-Critical Workloads
David Lo (Stanford University), Liqun Cheng (Google), Rama Govindaraju
(Google), Luiz André Barroso (Google), Christos Kozyrakis (Stanford
University)

SleepScale: Runtime Joint Speed Scaling and Sleep States Management
for Power Efficient Data Centers
Yanpei Liu (University of Wisconsin-Madison), Stark C. Draper
(University of Toronto), Nam Sung Kim (University of Wisconsin-Madison)

Optimizing Virtual Machine Consolidation Performance on NUMA Server
Architecture for Cloud Workloads
Ming Liu, Tao Li (University of Florida)

Tuesday, 3:45pm-5:00pm
Session 6A: DRAM

Row-Buffer Decoupling: A Case for Low-Latency DRAM Microarchitecture
Seongil O (Seoul National University), Young Hoon Son (Seoul National
University), Nam Sung Kim (University of Wisconsin-Madison), Jung Ho
Ahn (Seoul National University)

Half-DRAM: a High-bandwidth and Low-power DRAM Architecture from the
Rethinking of Fine-grained Activation
Tao Zhang (Pennsylvania State University / NVIDIA), Ke Chen (Oracle),
Cong Xu (Pennsylvania State University), Guangyu Sun (Peking
University), Tao Wang (Peking University), Yuan Xie (Pennsylvania
State University)

Flipping Bits in Memory Without Accessing Them: An Experimental Study
of DRAM Disturbance Errors
Yoongu Kim (Carnegie Mellon University), Ross Daly, Jeremie Kim
(Carnegie Mellon University), Chris Fallin, Ji Hye Lee (Carnegie
Mellon University), Donghyuk Lee (Carnegie Mellon University), Chris
Wilkerson (Intel Labs), Konrad Lai, Onur Mutlu (Carnegie Mellon
University)

Session 6B: Circuits and Architecture

Architecture Implications of Pads as a Scarce Resource
Runjie Zhang (University of Virginia), Ke Wang (University of
Virginia), Brett H. Meyer (McGill University), Mircea R. Stan
(University of Virginia), Kevin Skadron (University of Virginia)

Increasing Off-Chip Bandwidth in Multi-Core Processors with Switchable Pins
Shaoming Chen, Yue Hu, Ying Zhang, Lu Peng, Jesse Ardonne, Samuel
Irving, Ashok Srivastava (Louisiana State University)

A Low Power and Reliable Charge Pump Design for Phase Change Memories
Lei Jiang, Bo Zhao, Jun Yang, Youtao Zhang (University of Pittsburgh)

Wednesday, June 18, 2014

Wednesday, 8:30am-10:10am
Session 7A: Coherence and Replay

Fractal++: Closing the Performance Gap between Fractal and
Conventional Coherence
Gwendolyn Voskuilen, T. N. Vijaykumar (Purdue University)

OmniOrder: Directory-Based Conflict Serialization of Transactions
Xuehai Qian (University of California, Berkeley), Benjamin Sahelices
(Universidad de Valladolid), Josep Torrellas (University of Illinois
at Urbana-Champaign)

Pacifier: Record and Replay for Relaxed-Consistency Multiprocessors
with Distributed Directory Protocol
Xuehai Qian (University of California, Berkeley), Benjamin Sahelices
(Universidad de Valladolid), Depei Qian (Beihang University)

Replay Debugging: Leveraging Record and Replay for Program Debugging
Nima Honarmand, Josep Torrellas (University of Illinois at Urbana-Champaign)

Session 7B: Security/OOO Processors

The CHERI capability model: Revisiting RISC in an age of risk
Jonathan Woodruff (University of Cambridge), Robert N. M. Watson
(University of Cambridge), David Chisnall (University of Cambridge),
Simon W. Moore (University of Cambridge), Jonathan Anderson
(University of Cambridge), Brooks Davis (SRI International), Ben
Laurie (Google UK Ltd), Peter G. Neumann (SRI International), Robert
Norton (University of Cambridge), Michael Roe (University of Cambridge)

CODOMs: Protecting Software with Code-centric Memory Domains
Lluís Vilanova (Barcelona Supercomputing Center / Universitat
Politècnica de Catalunya / Technion – Israel Institute of Technology),
Muli Ben-Yehuda (Technion – Israel Institute of Technology), Nacho
Navarro (Barcelona Supercomputing Center / Universitat Politècnica de
Catalunya), Yoav Etsion (Technion – Israel Institute of Technology),
Mateo Valero (Barcelona Supercomputing Center / Universitat
Politècnica de Catalunya)

EOLE: Paving the Way for an Effective Implementation of Value Prediction
Arthur Perais, André Seznec (IRISA/INRIA)

Improving the Energy Efficiency of Big Cores
Kenneth Czechowski (Georgia Institute of Technology), Victor W. Lee
(Intel), Ed Grochowski (Intel), Ronny Ronen (Intel), Ronak Singhal
(Intel), Richard Vuduc (Georgia Institute of Technology), Pradeep
Dubey (Intel)

Wednesday, 10:40am-12:20pm
Session 8: Accelerators

General-Purpose Code Acceleration with Limited-Precision Analog Computation
Renée St. Amant (University of Texas at Austin), Amir Yazdanbakhsh
(Georgia Institute of Technology), Jongse Park (Georgia Institute of
Technology), Bradley Thwaites (Georgia Institute of Technology), Hadi
Esmaeilzadeh (Georgia Institute of Technology), Arjang Hassibi
(University of Texas at Austin), Luis Ceze (University of Washington),
Doug Burger (Microsoft Research)

Race Logic: A Hardware Acceleration for Dynamic Programming Algorithms
Advait Madhavan, Timothy Sherwood, Dmitri Strukov (University of
California, Santa Barbara)

Eliminating Redundant Fragment Shader Executions on a Mobile GPU via
Hardware Memoization
Jose-Maria Arnau (Universitat Politecnica de Catalunya), Joan-Manuel
Parcerisa (Universitat Politecnica de Catalunya), Polychronis
Xekalakis (Intel)

WebCore: Architectural Support for Mobile Web Browsing
Yuhao Zhu, Vijay Janapa Reddi (The University of Texas at Austin)

ORGANIZATION COMMITTEE
General Co-Chairs
Pen-Chung Yew, University of Minnesota
Antonia Zhai, University of Minnesota

Program Chair
Steve Keckler, NVIDIA/University of Texas at Austin

Workshop Co-Chairs
David Wentzlaff, Princeton University
Nuwan Jayasena, AMD Research

Tutorial Co-Chairs
Martha Kim, Columbia University
Debbie Marr, Intel

Finance Chair
Yuan Xie, Pennsylvania State University

Industry Liaison Co-Chairs
Hyesoon Kim, Georgia Institute of Technology
Samantika Subramaniam, Intel

Local Arrangements Chair
John Sartori, University of Minnesota

Web Chair
Omer Khan, University of Connecticut

Publicity Co-Chairs
Chia-Lin Yang, National Taiwan University
Natalie Enright Jerger, University of Toronto
Lieven Eeckhout, Ghent University

Registration Chair
Ulya Karpuzcu, University of Minnesota

Proceedings Chair
Eric Chung, Microsoft Research

Travel Award Chair
James Tuck, NC State University

Submission Chair
Paul Gratz, Texas A&M University

Steering Committee
Mark Horowitz, Stanford University
David Kaeli, Northeastern University
Shih-Lien Lu, Intel
Avi Mendelson, Technion
Margaret Martonosi, Princeton University
Yale Patt, University of Texas at Austin
Josep Torrellas, University of Illinois at Urbana-Champaign
David A. Wood, University of Wisconsin-Madison

Call for Participation: SPAA 2014

Submitted by Jeremy Fineman
http://www.spaa-conference.org
June 23 to June 25, 2014

26th ACM Symposium on
Parallelism in Algorithms and Architectures (SPAA 2014)
June 23-25, 2014 Charles University, Prague, Czech Republic

Highlight: The two keynote speakers will be Bruce Maggs and Fabian Kuhn.

Registration is open. The early registration deadline is May 23.

Please visit the conference webpage for:
– list of accepted papers
– registration
– information on local arrangements

Call for Papers: HiPEAC 2015

Submitted by Gennady Pekhimenko
http://www.hipeac.net/conference
January 19 to January 21, 2015

Call for Papers – HIPEAC 2015

10th International Conference on High-Performance Embedded
Architectures and Compilers

January 19-21, 2015
Amsterdam, The Netherlands

http://www.hipeac.net/conference

IMPORTANT DATE:
Paper deadline: June 1, 2014

Sponsored by:
HiPEAC Compilation & Architecture
Seventh Framework Programme

Description:
The HiPEAC conference is the premier European forum for
experts in computer architecture, programming models,
compilers and operating systems for embedded and
general-purpose systems.

The 10th HiPEAC conference will take place in Amsterdam,
The Netherlands, from Monday, January 19 to Wednesday, January
21, 2015. Associated workshops, tutorials, special sessions,
several large poster sessions, and an industrial exhibition will
run in parallel with the conference. The three-day event
attracts about 500 delegates each year.

Paper selection is done by ACM TACO, the ACM Transactions on
Architecture and Code Optimization. Prospective authors submit
their original papers to ACM TACO at any time before the paper
deadline of June 1, 2014, to benefit from two rounds of reviews
before the conference paper track cut-off date, which is
November 15, 2014.

See below for detailed information about the new publication
model called ACM TACO 2.0.

Topics of interest include, but are not limited to:
* Processor, memory, and storage systems architecture
* Parallel, multi-core and heterogeneous systems
* Interconnection networks
* Architectural support for programming productivity
* Power, performance and implementation efficient designs
* Reliability and real-time support in processors, compilers
and run-time systems
* Application-specific processors, accelerators and
reconfigurable processors
* Architecture and programming environments for GPU-based
computing
* Simulation and methodology
* Architectural and run-time support for programming languages
* Programming models, frameworks and environments for exploiting
parallelism
* Compiler techniques
* Feedback-directed optimization
* Program characterization and analysis techniques
* Dynamic compilation, adaptive execution, and continuous
profiling/optimization
* Binary translation/optimization
* Code size/memory footprint optimizations

GENERAL CHAIRS
Andy D. Pimentel, University of Amsterdam
Stephan Wong, Delft University of Technology

PROGRAM CHAIR
Onur Mutlu, Carnegie Mellon University

WORKSHOPS & TUTORIALS CHAIRS
Diana Göhringer, Ruhr-Universität Bochum
Sascha Uhrig, TU Dortmund

PUBLICITY CHAIRS
Sorin Cotofana, Delft University of Technology
Antonio Beck, UFRGS
Chao Wang, USTC
Gennady Pekhimenko, Carnegie Mellon Univ.

POSTER & EXHIBITION CHAIR
Koen De Bosschere, Ghent University

SPONSOR CHAIR
Albert Cohen, INRIA

INDUSTRIAL SESSION CHAIR
Daniel Gracia Pérez, Thales

FINANCE CHAIR
Vicky Wandels, Ghent University

WEB AND REGISTRATIONS CHAIR
Eneko Illarramendi, Ghent University

LOCAL ARRANGEMENTS COMMITTEE
Andy D. Pimentel, University of Amsterdam
Stephan Wong, Delft University of Technology
Clemens Grelck, University of Amsterdam
Todor Stefanov, University of Leiden
Zaid Al-Ars, Delft University of Technology

ACM TACO 2.0 Publication Model:
Over the last three years ACM TACO has optimized its
internal review processes. Today, the average turnaround
time from submission to first response is 46 days and 95%
of the manuscripts get a response within 2 months. For
revised manuscripts, the review process goes even faster.
In 2013, most accepted manuscripts went through two rounds
of reviews to reach a final decision only 5 months after
submission. Accepted manuscripts are immediately uploaded to
the ACM Digital Library. Hence, excellent manuscripts can make
it from submission to publication in about three months; papers
needing a major revision are published after six months. We
call this “ACM TACO 2.0”.

ACM TACO 2.0 now has a review cycle and an acceptance rate
that are competitive with the best ACM conferences, but
without the inconvenient non-negotiable submission deadlines,
and with the advantage of being able to revise a paper based
on the detailed review reports by carefully selected
reviewers, and of being published as soon as it is accepted.
On top of that, authors of original work papers get an open
invitation to present their paper at the yearly HiPEAC
conference, which is the premier European network event on
topics central to ACM TACO, attended by more than 500
scientists.

ACM TACO interim Editor-in-Chief
Prof. Koen De Bosschere

ACM TACO Senior Editor
Prof. Per Stenström

Call for Papers: TASUS 2014

Submitted by Jesus Carretero
http://www.arcos.inf.uc3m.es/~tasus/
August 25, 2014

TASUS 2014: TECHNIQUES AND APPLICATIONS FOR SUSTAINABLE ULTRASCALE
COMPUTING SYSTEMS.

The ever-increasing data and processing requirements of applications
from various domains are constantly pushing for dramatic increases in
computational and storage capabilities. Today, we have reached a point
where the growth of computer systems can no longer be addressed
incrementally, due to the huge challenges lying ahead, in particular
scalability, the energy barrier, data management, programmability,
and reliability.

Ultrascale computing systems (UCS) are envisioned as large-scale
complex systems that join parallel and distributed computing systems,
possibly located at multiple sites, which cooperate to provide
solutions to users. Since growth of two to three orders of magnitude
over today’s computing systems is expected, including systems with
unprecedented amounts of heterogeneous hardware, lines of source code,
numbers of users, and volumes of data, sustainability is critical to
ensure the feasibility of those systems. Although cross-domain
interaction is already emerging, such as high-performance computing in
clouds and the adoption of distributed programming paradigms like
MapReduce in scientific applications, the cooperation between the HPC
and distributed-systems communities still poses many challenges on the
way to building the ultrascale systems of the future, especially in
unifying the services needed to deploy sustainable applications that
are portable across HPC systems, multi-clouds, data centers, and big
data platforms.

The TASUS workshop focuses on the software side, aiming to bring
together researchers from academia and industry interested in the design,
implementation, and evaluation of services and system software
mechanisms to improve sustainability in ultrascale computing systems
with a holistic approach.

Topics:

We are looking for original, high-quality research and position papers
on applications, services, and system software for sustainable ultrascale
systems. Topics of interest include:

– Existing and emerging designs to achieve sustainable ultrascale systems.
– High-level parallel programming tools and programmability
techniques to improve application sustainability on ultrascale
platforms (model-driven development, refactoring, dynamic code
generation, unified services, middleware, …).
– Synergies among emerging programming models and run-times
from HPC, distributed systems, and big data communities to provide
sustainable execution models (increased productivity, transparency,
elasticity, …).
– New energy efficiency techniques for monitoring, analyzing, and
modeling ultrascale systems, including energy efficiency metrics for
multiple resources (computing, storage, networking) and sites.
– Eco-design of ultrascale components and applications, with
special emphasis on energy-aware software components that help
users to shape energy issues for their applications.
– Sustainable resilience and fault-tolerant mechanisms that can
cooperate throughout the whole software stack to handle errors.
– Fault tolerance techniques in partitioned global address space
(e.g. PGAS, MPI, hybrid) and federated cooperative environments.
– Data management optimization techniques through cross layer
adaptation of the I/O stack to provide global system information to
improve data locality.
– Enhanced data management lifecycle on scalable architectures
combining HPC and distributed computing (clouds and data centers).
– Experiences with applications, high-level algorithms, and services
amenable to ultrascale systems.

Important dates:

· Workshop papers due: May 30, 2014

· Workshop author notification: July 4, 2014

· Workshop early registration: July 25, 2014
· Workshop camera-ready papers due: October 3, 2014

Committees

Workshop Organizers:
Prof. Jesus Carretero. University Carlos III of Madrid. Spain.
Dr. Laurent Lefevre. INRIA, ENS of Lyon. France
Prof. Gudula Rünger. Technical University of Chemnitz. Germany.
Prof. Domenico Talia. Università della Calabria. Italy.

Call for Participation: ASAP 2014

Submitted by Jason D. Bakos
http://www.zurich.ibm.com/asap2014/
June 18 to June 20, 2014

The 25th IEEE International Conference on Application-specific Systems,
Architectures and Processors (ASAP 2014)
June 18-20, 2014,
IBM Research – Zurich, Switzerland
http://www.zurich.ibm.com/asap2014/

Advance registration deadline: May 9, 2014
Program: http://www.zurich.ibm.com/asap2014/program.html

Keynote speakers:
– Oskar Mencer, Imperial College London: “Computing in Space”
– Jeff Stuecheli, IBM Systems and Technology Group: “Open Innovation with
POWER8”
– Onur Mutlu, Carnegie Mellon University: “Rethinking Memory System Design for
Data-Intensive Computing”
More information available at: http://www.zurich.ibm.com/asap2014/keynote.html

The 25th IEEE International Conference on Application-specific Systems,
Architectures and Processors 2014 takes place June 18-20, 2014 in
Zurich, Switzerland.
The 2014 edition of the conference is organized by IBM Research – Zurich and
the Swiss Federal Institute of Technology Zurich (ETH).

The history of the event traces back to the International Workshop on Systolic
Arrays, organized in 1986 in Oxford, UK. It later developed into the
International Conference on Application Specific Array Processors. With its
current title, it was organized for the first time in Chicago, USA, in 1996.
Since then it has alternated between Europe and North America.

The conference will cover the theory and practice of application-specific
systems, architectures and processors. The 2014 conference will build upon
traditional strengths in areas such as computer arithmetic, cryptography,
compression, signal and image processing, network processing, reconfigurable
computing, application-specific instruction-set processors, and hardware
accelerators.