Developing Competency in Parallelism: Techniques for Education and Training

Submitted by Ed Gehringer
http://www4.ncsu.edu/~efg/dcp_2012_cfp.html
October 22, 2012

Call for Papers – Due Friday, August 24, 2012

With the increasing penetration of parallelism into computing,
programmers of all stripes need to acquire competencies in parallel
programming. This workshop will concentrate on discussing and
disseminating resources for gently introducing parallelism into
programmers’ skill sets.

Audience: Academic faculty and industrial trainers.

SUBMISSION SUMMARY
Due on: Friday, August 24, 2012
Notifications: Friday, September 7, 2012
Camera-ready copy due: Friday, October 5, 2012
Workshop date: Monday, October 22, 2012
Format: ACM Proceedings format
Submit to: dcp-pc@lists.ncsu.edu
Contact: Dick Brown and Ed Gehringer (chairs)

The program will include multiple refereed paper sessions, as well as

– a separate hands-on session (contributed by organizers) presenting
an example body of materials for teaching and training in parallel
programming, and

– an “unconference” session; you may submit topics for discussion in
advance by filling out this form.

We are seeking paper submissions along the following lines:

– Training materials from developers and vendors of programming languages

– Short “killer” parallel application examples that can be used in
academic or training environments

– Short modules that can be used in short courses for practicing
programmers, or dropped into academic courses dealing with some
aspect of programming

– Tools for visualizing or teaching parallelism in programming. (A
tools submission should include expository illustrations, screen
shots, and/or accompanying video(s) that portray the functionality
and value of that tool for pedagogy and training; test access to a
tool is optional.)

We especially seek papers related to

– GPU, or hybrid (GPU+CPU) programming, or
– productive parallel-programming frameworks such as Hadoop/MapReduce.

Submission

Papers of up to 8 pages should be submitted in ACM Proceedings format
to dcp-pc@lists.ncsu.edu.

For additional information, clarification, or answers to questions
please contact the workshop organizers at dcp-pc@lists.ncsu.edu.

ORGANIZERS

Dick Brown, St. Olaf College, rab@stolaf.edu
Ed Gehringer, North Carolina State University, efg@ncsu.edu

PROGRAM COMMITTEE

Joel Adams, Calvin College
Dennis Brylow, Marquette University
David Bunde, Knox College
Dan Ernst, Cray Research
Jens Mache, Lewis & Clark College
Dennis Mancl, Alcatel-Lucent
Bina Ramamurthy, SUNY at Buffalo

Outbrief of DARPA/ISAT Workshop: Advancing Computer Systems without Technology Progress

Submitted by Mark D. Hill
http://www.cs.wisc.edu/~markhill/papers/isat2012_ACSWTP.pdf

Advancing Computer Systems without Technology Progress
Mark D. Hill and Christos Kozyrakis
ISAT Outbrief, April 17-18, 2012, of DARPA/ISAT Workshop, March 26-27, 2012

This outbrief, now released for public distribution, summarizes findings
from 48 researchers who gathered in Chicago earlier this year to discuss
the following:

For decades, computer systems designers have built better systems relying
in large part on dramatically better technology. However, scaling CMOS in
a power and cost effective manner is now difficult, while post-CMOS
technologies are not yet mature. Thus, for the foreseeable future,
we must either accept that computers are good enough or advance them
without (significant) technology progress. Since computer system
superiority has been central to U.S. security, government, education,
and commerce, this workshop seeks to catalyze the latter.

Hypothesis: CMOS transistors will soon stop getting “better,” especially
w.r.t. power, and it is out of scope to discuss post-CMOS technologies.

Charge: Given the hypothesis, how do we continue to make computer systems
“better”?

Call For Papers: Computer Science Journals (CSC Journals)

Submitted by J. Stewart
http://www.cscjournals.org/csc/cfp.php
July 31, 2012

Call For Papers

Computer Science Journals (CSC Journals) welcomes research scholars and scientists from many domains to its open-access publications. By bringing together scientific researchers and industrial practitioners, each CSC Journals Call for Papers aims to contribute to the advancement of scientific research, social infrastructure, and industry.

The purpose of this Call for Papers (CFP) is to disseminate original scientific research and knowledge through a wide assortment of journals in computer science and allied fields, including computer science and engineering in general, bio-science, image sciences, signal sciences, mathematical sciences, social and management sciences, nano sciences, intelligent systems, ubiquitous computing, and many others. CSC Journals seeks to publish a balanced mix of high-quality theoretical and empirical research articles, case studies, book reviews, proposals, analyses, surveys, tutorials, and editorials, as well as papers on pedagogical and curricular issues.

Important Dates

Paper Submission: July 31, 2012

Author Notification: September 15, 2012

Journal Publication: October 2012

Extended Paper Deadline 7/20 for the Workshop on Managing Systems Automatically and Dynamically (MAD)

Submitted by Greg Bronevetsky
https://www.usenix.org/conference/mad12
October 8 to October 10, 2012

Workshop on Managing Systems Automatically and Dynamically (MAD)
At the USENIX Symposium on Operating Systems Design and Implementation (OSDI)
October 8-10, 2012
Hollywood, CA, USA

IMPORTANT DATES
* Full paper submission due: Friday, July 20, 2012
* Notification of acceptance: Friday, August 17, 2012
* Final papers due: Wednesday, September 12, 2012

OVERVIEW
The complexity of modern systems makes them extremely challenging to manage.
From highly heterogeneous desktop environments to large-scale systems that
consist of many thousands of software and hardware components, these systems
exhibit a wide range of complex behaviors that are difficult to predict. As a
result, although the raw computational capability of these systems grows each
year, much of it is lost to (i) complex failures that are difficult to localize
and (ii) poor performance and efficiency resulting from system configurations
that are inappropriate for the user’s workload. The MAD workshop (an extended
follow-on of the SLAML workshop) focuses on techniques to make complex
systems manageable, addressing the problem’s three major aspects:

System Monitoring
Systems report their state and behavior using a wide range of mechanisms.
System and application logs include reports of key events that occur within
software or hardware components. Performance counters measure various OS and
hardware-level metrics (e.g. packets sent or cache misses) within a given time
period. Further, information from source code version control systems or
request traces can help identify the source of failures or poor performance.
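
As a concrete, deliberately simplified illustration of this first step, the
C++ sketch below tallies ERROR events per component from an application log.
The log format and the default file name are assumptions made for illustration
only, not part of any particular system.

    // Hypothetical sketch: count ERROR events per component in a log whose
    // lines look like "2012-07-20 12:00:01 ERROR disk short description ...".
    // The format and the default file name are assumed for illustration only.
    #include <fstream>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    int main(int argc, char** argv) {
        std::ifstream log(argc > 1 ? argv[1] : "app.log");
        std::map<std::string, long> errors_per_component;

        std::string line;
        while (std::getline(log, line)) {
            std::istringstream fields(line);
            std::string date, time, level, component;
            if (fields >> date >> time >> level >> component && level == "ERROR")
                ++errors_per_component[component];  // tally key events per component
        }

        for (const auto& kv : errors_per_component)
            std::cout << kv.first << ": " << kv.second << " error events\n";
        return 0;
    }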

Data Analysis
Data produced by monitoring can be analyzed using a variety of techniques to
understand the system state and predict its behavior in various possible
scenarios. Traditionally this consisted of system administrators manually
inspecting system logs or using explicit pattern-matching rules to identify key
events. Recent research has also focused on statistical and machine learning
techniques to automatically identify behavioral patterns. Finally, the data can
be presented directly to system administrators; because of the data’s large
volume, such displays rely on aggregation techniques that show the maximum
information in minimal space.
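
To make the simplest end of that spectrum concrete, the C++ sketch below (with
invented metric values) learns a baseline mean and standard deviation for one
metric and flags later samples that fall more than three standard deviations
away; real deployments would of course use much richer statistical or
machine-learning models.

    // Hypothetical sketch of a basic statistical check: learn the mean and
    // standard deviation of a metric from a baseline window, then flag later
    // samples that deviate by more than three standard deviations.
    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        // Made-up samples of one metric, e.g. packets sent per second.
        std::vector<double> baseline = {100, 98, 103, 101, 97, 102, 99, 100};
        std::vector<double> recent   = {99, 101, 640, 98};

        double mean = 0;
        for (double x : baseline) mean += x;
        mean /= baseline.size();

        double var = 0;
        for (double x : baseline) var += (x - mean) * (x - mean);
        double stddev = std::sqrt(var / baseline.size());

        for (std::size_t i = 0; i < recent.size(); ++i)
            if (std::fabs(recent[i] - mean) > 3 * stddev)
                std::cout << "sample " << i << " looks anomalous: "
                          << recent[i] << '\n';
        return 0;
    }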

Informed Action
The analyses and visualizations are used by operators to select the best action
to improve productivity or localize and resolve system failures. The possible
actions include restarting processes, rebooting servers, rolling back
application updates or reconfiguring system components. Since the choice of the
best action is complex, it requires assistance from additional analysis tools
to predict the productivity of any given configuration on the given workload.

MAD seeks original early work on system management, including position papers
and work-in-progress reports that will mature to be published at high-quality
conferences. Papers are expected to demonstrate a strong foundation in the
needs of the system management community and be positioned within the broader
context of related work. In addition to technical merit, papers will be
selected to encourage discussion at the workshop and among members of the
general system management community.

TOPICS
Topics include but are not limited to:
Monitoring
* Techniques to collect metric and log data, including tracing and statistical
measurements
* Large-scale aggregation of metric and log data
* Reports on publicly available sources of sample logs of system metrics

Analysis
* Automated analysis of system logs and metrics using statistical, machine
learning, or natural language processing techniques
* Visualization of system information in a way that leads administrators to
actionable insights
* Evaluation of the quality of learned models, including assessing the
confidence/reliability of models and comparisons between different methods

Action
* Applications of log and metric analysis to address reliability, performance,
power management, security, fault diagnosis, scheduling, or manageability
* Challenges of scale in applying machine learning to large systems
* Integration of machine learning into real-world systems and processes

WORKSHOP ORGANIZERS
Peter Bodik, Microsoft Research (peterb@microsoft.com)
Greg Bronevetsky, Lawrence Livermore National Laboratory (bronevetsky@llnl.gov)

SUBMISSION GUIDELINES
Submitted papers must be no longer than six 8.5″ x 11″ or A4 pages, using a 10
point font on 12 point (single spaced) leading, with a maximum text block of
6.5 inches wide by 9 inches deep. The page limit includes everything except
for references, for which there is no limit. The use of color is acceptable,
but the paper should be easily readable if viewed or printed in gray scale.
Authors must make a good faith effort to anonymize their submissions, and they
should not identify themselves either explicitly or by implication (e.g.,
through the references or acknowledgments). Submissions violating the detailed
formatting and anonymization rules on the Web site will not be considered for
publication. Authors who are not sure about anonymization or whether their
paper fits into MAD should contact the MAD chairs. There will be no extensions
for reformatting. Papers will be held in full confidence during the reviewing
process, but papers accompanied by nondisclosure agreement forms are not
acceptable and will be rejected without review. Authors of accepted papers will
be expected to supply electronic versions of their papers and encouraged to
supply source code and raw data to help others replicate and better understand
their results.

Call for Papers: ISPASS 2013 (IEEE International Symposium on Performance Analysis of Systems and Software)

Submitted by Niti Madan
http://ispass.org/ispass2013/
September 21 to September 28, 2012

The IEEE International Symposium on Performance Analysis of
Systems and Software provides a forum for sharing advanced
academic and industrial research work focused on performance
analysis in the design of computer systems and software.
Authors are invited to submit previously unpublished work
for possible presentation at the conference. Papers are
solicited in fields that include the following:

-Power/Performance Evaluation methodologies

Analytical modeling
Statistical approaches
Tracing and profiling tools
Simulation techniques
Hardware (e.g., FPGA) accelerated simulation
Hardware performance counter architectures
Power/Temperature/Variability/Reliability models for computer systems
Micro-benchmark based hardware analysis techniques

-Power/Performance analysis

Metrics
Bottleneck identification and analysis
Visualization

-Power/Performance analysis of commercial and experimental hardware

General-purpose microprocessors
Multi-threaded, multi-core and many-core architectures
Accelerators and graphics processing units
Embedded and mobile systems
Enterprise systems and data centers
Supercomputers
Computer networks

-Power/Performance analysis of emerging workloads and software

Software written in managed languages
Virtualization and consolidation workloads
Internet-sector workloads
Embedded, multimedia, games, telepresence
Bioinformatics, life sciences, security, biometrics

-Application and system code tuning and optimization

-Confirmations or refutations of important prior results

In addition to research papers, we also welcome tool papers. The conference
is an ideal forum to publicize new tools to the community.
Tool papers will be judged primarily on their potential for wide impact
and use rather than on their research contribution.
Tools in any of the above fields of interest are eligible.

Important Dates

Paper abstract submissions due: September 21, 2012
Full submissions due: September 28, 2012 (No extensions)
Rebuttal: November 21-23, 2012
Notification of acceptance: December 10, 2012
Final paper due: January 27, 2013
Conference dates: April 21-23, 2013

RACES'12: Relaxing Synchronization for Multicore and Manycore Scalability, SPLASH'12 Workshop

Submitted by RACES PC Chair
http://soft.vub.ac.be/races/
October 21, 2012

Call for Participation

R A C E S 2 0 1 2

Relaxing Synchronization for Multicore and Manycore Scalability

Workshop Co-located with SPLASH in Tucson, Arizona
Sunday, October 21

Submission deadline: Monday, August 6
http://soft.vub.ac.be/races/

Massively-parallel systems are coming: core counts keep rising – whether
conventional cores as in multicore and manycore systems, or specialized cores
as in GPUs. Conventional wisdom has been to utilize this parallelism by
reducing synchronization to the minimum required to preserve determinism – in
particular, by eliminating data races. However, Amdahl’s law implies that on
highly-parallel systems even a small amount of synchronization that introduces
serialization will limit scaling. Thus, we are forced to confront the
trade-off between synchronization and the ability of an implementation to
scale performance with the number of processors: synchronization inherently
limits parallelism. This workshop focuses on harnessing parallelism by
limiting synchronization, even to the point where programs will compute
inconsistent or approximate rather than exact answers.
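
As a reminder of the arithmetic behind that claim, Amdahl’s law bounds the
speedup on N processors of a program whose serialized fraction is s:

    \mathrm{Speedup}(N) \;=\; \frac{1}{\,s + \frac{1-s}{N}\,} \;\le\; \frac{1}{s}

so even s = 0.01, i.e. one percent of the work serialized by synchronization,
caps the achievable speedup at 100x no matter how many cores are available.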

Organizers:

Andrew P. Black, Portland State University
Theo D’Hondt, Vrije Universiteit Brussel
Doug Kimelman, IBM Thomas J. Watson Research Center
Martin Rinard, MIT CSAIL
David Ungar, IBM Thomas J. Watson Research Center

Theme and Topics
—————-

A new school of thought is arising: one that accepts and even embraces
nondeterminism (including data races), and in return is able to dramatically
reduce synchronization, or even eliminate it completely. However, this
approach requires that we leave the realm of the certain and enter the realm
of the merely probable. How can we cast aside the security of correctness, the
logic of a proof, and adopt a new way of thinking, where answers are good
enough but not certain, and where many processors work together in parallel
without quite knowing the states that the others are in? We may need some
amount of synchronization, but how much? Or better yet, how little? What
mental tools and linguistic devices can we give programmers to help them adapt
to this challenge? This workshop focuses on these questions and related ones:
harnessing parallelism by limiting synchronization, even to the point where
programs will compute inconsistent or approximate rather than exact answers.
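
As a toy illustration of what “good enough but not certain” can mean in code,
the C++ sketch below lets several threads bump a shared counter with no
synchronization at all. The data race is formally undefined behavior in
standard C++ and is shown purely to make the trade-off tangible: the
serialization imposed by a lock disappears, and with it the guarantee of an
exact count.

    // Toy sketch (not from the call): an intentionally racy shared counter.
    // Removing synchronization removes the serialization bottleneck, but lost
    // updates make the final count approximate. The data race is undefined
    // behavior and appears here for illustration only.
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        const int kThreads = 8;
        const long kIncrementsPerThread = 1000000;

        long racy_count = 0;  // shared, deliberately unprotected

        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t) {
            workers.emplace_back([&racy_count, kIncrementsPerThread] {
                for (long i = 0; i < kIncrementsPerThread; ++i)
                    ++racy_count;  // racy increment: some updates are lost
            });
        }
        for (auto& w : workers) w.join();

        std::cout << "expected: " << kThreads * kIncrementsPerThread << '\n'
                  << "observed: " << racy_count
                  << " (typically lower, and different on every run)\n";
        return 0;
    }

Whether and how to reason about, bound, or repair the resulting error is
exactly the kind of question the workshop invites.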

This workshop aims to bring together researchers who, in the quest for
scalability, have been exploring the limits of how much synchronization can be
avoided. We invite submissions on any topic related to the theme of the
workshop, pro or con. We want to hear from those who have experimented with
formalisms, algorithms, data structures, programming languages, and mental
models that push the limits. In addition, we hope to hear from a few voices
with wilder ideas: those who may not have reduced their notions to practice
yet, but who have thoughts that can inspire us as we head towards this
yet-uncertain future. For example, biology may yield fruitful insights. The
ideal presentation for this workshop will focus on a grand idea, but will be
backed by some experimental result.

Submission
———-

Authors are invited to submit short position papers, technical papers, or
experience reports. Submissions may range from a single paragraph to as long
as desired, but the committee can only commit to reading one full page.
Nonetheless, we expect that in many cases reviewers will read farther than
that. Submissions should be formatted according to the ACM SIG Proceedings
style at http://www.acm.org/sigs/publications/proceedings-templates and should
be submitted via EasyChair at
http://www.easychair.org/conferences/?conf=races2012 in PDF format.

PLEASE NOTE: All submissions (except for those retracted by their authors)
will be posted on the workshop website, along with reviews, which will be
signed by the reviewers, and a rating assigned by the program committee.
Further, the submissions to be presented at the workshop will be selected by a
vote of all registered attendees. As well, submissions to be published in an
official proceedings will be selected by the program committee. Please see the
sections below concerning the rationale and details for this process.

Program Committee
—————–

Andrew P. Black, Portland State University
Yvonne Coady, University of Victoria
Tom Van Cutsem, Vrije Universiteit Brussel
Theo D’Hondt, Vrije Universiteit Brussel
Phil Howard, Portland State University
Doug Kimelman, IBM Thomas J. Watson Research Center
Eddie Kohler, Harvard SEAS
Jim Larus, Microsoft Research
Stefan Marr, Vrije Universiteit Brussel
Tim Mattson, Intel
Paul McKenney, IBM
Hannes Payer, University of Salzburg
Dan Prenner, IBM
Lakshmi Renganarayana, IBM
David Ungar, IBM Thomas J. Watson Research Center

– TBC –

Important Dates
—————

August 6 Submission deadline.
August 21 SPLASH early registration deadline.
August 29 Reviews sent to authors.
September 3 Last date for retraction by authors.
September 4 Papers, reviews, ratings posted on web site. Voting opens.
September 11 Voting closes.
September 14 Notification of papers accepted for presentation
and/or publication.
October 21 Workshop.
mid-November Camera-ready copy due for papers selected for proceedings.

Goals and Outcomes
——————

We will consider the workshop a success if attendees come away with new
insights into fundamental principles, and new ideas for algorithms, data
structures, programming languages, and mental models, leading to improving
scaling by limiting synchronization, even to the point where programs will
compute inconsistent or approximate rather than exact answers. The goal of
this workshop is both to influence current programming practice and to
initiate the coalescence of a new research community giving rise to a new
subfield within the general area of concurrent and parallel programming.
Results generated by the workshop will be made persistent via the workshop
website and possibly via the ACM Digital Library.

The RACES 2012 Review Process and Workshop Presentation Selection Process
=========================================================================

David Ungar
IBM Thomas J. Watson Research Center
PC Chair for Workshop Presentations

Technology has changed the economic tradeoffs that once shaped the reviewing
process. It has become cheap and easy to share submissions, reviews and the
preferences of the attendees. What remains scarce is the number of hours in a
day, and as a consequence the time we have in our workshop in which to learn
and share with each other. I believe that this change in the balance of
factors affords us the opportunity to significantly improve the review and
selection processes.

Sadly, all too often, those who spend their precious time attending a workshop
are not served as well as they could be with respect to enlightenment,
thought-provoking discussions, and being challenged by new ideas. The fault lies not
in the people who generously donate their time to serve on program committees
and do external reviews. Rather, the fault lies in the process itself. The
very notion of acceptance by committee forces us to boil a rich stew of
reactions, insights, and opinions, down to a single carrot. As a result, it is
common for PC members to come away from a meeting feeling that either some
fraud will be perpetrated on the audience by a fundamentally flawed paper, or,
more often, feeling that a sin of omission will be committed on the audience
by the suppression of a significant but controversial new idea. Sometimes
instead of a carrot we get a lump of gristle.

There are other, lesser, flaws in this process. Although reviewer anonymity
protects negative reviewers from resentment and reprisal, all too often it
prevents an open debate that would promote mutual understanding. Further, in
some cases anonymity allows a reviewer to cast aspersions on authors without
being accountable. Finally, we fail to take maximal advantage of the time and
effort spent in creating insightful reviews when we withhold them from the
audience. Attendees and readers could benefit from expert reactions as they
try to glean the wisdom embedded in the authors’ papers.

In this workshop, we have an opportunity to try a different process, one that
we hope will serve all parties better: All reviews will be signed, all
submissions and reviews will be posted on the web (unless an author chooses to
retract a submission), and the attendees will be the ones selecting which
papers will be presented.

Here are the details:
———————

At least three committee members will review each submission, and each review
will be signed. Once all the reviews for a submission are in, they will be
sent to the author, who can decide to retract the paper if so desired. Then,
all submissions (except any that are retracted) will be posted on the workshop
website, along with all reviews and a net score determined for each submission
by the program committee.

At this point, prior to the workshop, all registered attendees will be invited
to read the submissions and the reviews, and vote on which of the papers they
want to see presented. Of course, an attendee who so wishes will be free to
merely vote according to the recommendation of the PC, or to not vote and to
accept the wisdom of the rest of the attendees. But the important point
remains: it will be those who will be spending the time in the room who get to
decide how that time is spent. Please note that a submission being posted on
the workshop website and/or presented at the workshop is not intended to
constitute prior publication for purposes of publishing in other workshops,
major conferences, or journals.

This process is a grand experiment, designed to exploit the technologies we
Computer Scientists have created, in order to better serve the advancement of
Computer Science. We hope that its potential excites you as much as it excites
us!

The RACES 2012 Published Proceedings Paper Selection Process
============================================================

Theo D’Hondt
Vrije Universiteit Brussel
PC Chair for Proceedings Papers

We understand that many submitters may want to publish their paper in an
official proceedings in addition to having it posted on the workshop website.
In order to satisfy that desire, we will publish a proceedings via the ACM
Digital Library. To satisfy ACM DL selectivity requirements, a separate and
more conventional process will be employed for selecting papers to be included
in the published proceedings: Even though all submissions will be posted on
the workshop website (unless retracted by the author), the program committee
will select a smaller number of papers to be included in the published
proceedings based on the signed and posted reviews. Authors of the selected
papers will be asked to submit revised and extended papers mid-November,
taking into account the reviews and the publisher’s guidelines. Page limits
for the revised and extended papers to be included in the published
proceedings are anticipated to be 10 pages for research papers, and 5 pages
for position papers. Please note that inclusion in the ACM Digital Library
published proceedings may well be considered to be a prior publication for
purposes of publication in other workshops, major conferences, or journals.
For that reason, authors may choose to decline to have their submission
included in the published proceedings, even if it was presented at the
workshop.

For questions please contact: races@soft.vub.ac.be
For updates, follow us on Twitter: @races_workshop

==============================================================================