November 16, 2017
ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)
Philadelphia, United States
June 18-22, 2018
Submission Deadline: Nov 16, 2017 (11:59 PM AoE)
Author-Response Period: Jan 25-27, 2018
Author Notification: Feb 13, 2018
Camera-Ready Deadline: Apr 15, 2018
PLDI is a premier forum for programming language research, broadly construed, including design, implementation, theory, applications, and performance. PLDI seeks outstanding research that extends and/or applies programming-language concepts to advance the field of computing. Novel system designs, thorough empirical work, well-motivated theoretical results, and new application areas are all welcome in strong PLDI submissions.
Reviewers will evaluate each contribution for its accuracy, significance, originality, and clarity. Submissions should be organized to communicate clearly to a broad programming-language audience as well as to experts on the paper’s topics. Papers should identify what has been accomplished and how it relates to previous work.
In almost all cases, reviews will be performed by a subset of the Program Committee (PC), the External Program Committee (EPC), and the External Review Committee (ERC). Authors will have the opportunity to respond to initial reviews to correct and clarify technical concerns. The PC will make final accept/reject decisions, except for papers with PC authors: such papers will have no PC reviewers, and the EPC will make the final decisions on them.
PLDI uses double-blind reviewing: author names and affiliations must be omitted from the submission. Additionally, if the submission refers to prior work by the authors, that reference should be made in the third person. Any supplementary material must also be anonymized.
The submission site is https://pldi18.hotcrp.com.
Please visit the conference web site for further details.
ARTIFACT EVALUATION FOR ACCEPTED PAPERS:
The authors of accepted PLDI papers will be invited to submit supporting materials to the Artifact Evaluation process. Artifact Evaluation is run by a separate committee whose task is to assess how well the artifacts support the work described in the papers.
Participation is voluntary but encouraged, and it will not influence the final decision regarding the papers. Papers that successfully complete the Artifact Evaluation process will receive a badge printed on the papers themselves. Authors of accepted papers are encouraged to make these materials publicly available upon publication of the proceedings by including them as "source materials" in the ACM Digital Library.
General Chair: Jeffrey S. Foster, University of Maryland
Program Chair: Dan Grossman, University of Washington