Novel system designs, thorough empirical work, well-motivated theoretical results, and new application areas are all welcome emphases in strong PLDI submissions. PLDI is a premier forum for programming language research, broadly construed, including design, implementation, theory, applications, and performance.

For example, if your name is Smith and you have worked on amphibious type systems, instead of saying “We extend our earlier work on statically typed toads [Smith 2004],” you might say “We extend Smith’s [2004] earlier work on statically typed toads.” Also, be sure not to include any acknowledgements that would give away your identity.

You may submit work previously discussed at an informal workshop, previously posted on arXiv or a similar site, previously submitted to a conference not using double-blind reviewing, etc.

Other conflicts include institutional conflicts, financial conflicts of interest, friends or relatives, or any recent co-authors on papers and proposals (last 2 years).

A: It is rare for authorship to be guessed correctly, even by expert reviewers, as detailed in this study.

The submission site requires entering author names and affiliations, relevant topics, and potential conflicts. We want reviewers to be able to approach each submission without any such, possibly involuntary, pre-judgment. Only the last submission will be reviewed.

Use your best judgment. First, submit a review for your paper that is as careful as possible, outlining areas where you think your knowledge is lacking.

A: Use common sense.
We are grateful to prior organizers for their work, which is reused here. Do not use the PACMPL files or format; PLDI is not using them.

Q: What should I do if I learn the authors’ identity?

Artifact Evaluation is run by a separate committee whose task is to assess how well the artifacts support the work described in the papers. Submissions should be organized to communicate clearly to a broad programming-language audience as well as to experts on the paper’s topics.

A: Making your code publicly available is not incompatible with double-blind reviewing.

Q: How should I avoid learning the authors’ identity if I am using web search in the process of performing my review?

Q: What should I do if a prospective PLDI author contacts me and asks to visit my institution?

Authors declare conflicts of interest when submitting their papers, using the guidelines in the call-for-papers. All accepted submissions will be invited for presentation.

A: Double-blind reviewing does not change the principle that reviewers should not review papers with which they have a conflict of interest, even if they do not immediately know who the authors are.

Authors can submit multiple times prior to the (firm!) deadline. But there are many gray areas and trade-offs.
Q: I am submitting a paper that extends my own work that previously appeared at a workshop.

You may discuss work under submission, but you should not broadly advertise your work through media that is likely to reach your reviewers. You should also avoid revealing the authors’ institutional affiliation or the institution at which the work was performed. If you have any doubts about how to interpret the double-blind rules, please contact the Program Chair.

Authors of empirical papers are encouraged to consider the seven categories of the SIGPLAN Empirical Evaluation Guidelines when preparing their submissions. Otherwise you should not treat double-blind reviewing differently from other reviewing.

Your job is not to make your identity undiscoverable but simply to make it possible for reviewers to evaluate your submission without having to know who you are. As a last resort, if you feel your review would be extremely uninformed and you’d rather not even submit a first cut, contact the Program Chair.

If, despite reasonable effort, authors are unable to obtain visas to travel to the conference, we will make arrangements to enable remote participation or presentation by another attendee on behalf of the authors.

If naming your system essentially reveals your identity (or institution), then anonymize it. In particular, refrain from seeking out information on the authors’ identity, but if you discover it accidentally, this will not automatically disqualify you as a reviewer.

A: PC and ERC members should do their own reviews, not delegate them to someone else.

Do not publicize your work on social media if wide public [re-]propagation is common (e.g., Twitter) and therefore likely to reach potential reviewers. This material should be uploaded at the same time as the submission.
PLDI uses double-blind reviewing. Word users should use the acmart template for Word. This call-for-papers is an adaptation and evolution of content from previous instances of PLDI.

Do not publicize your work on major mailing lists used by the community (because potential reviewers likely read these lists).

The core question is really whether the system is one that, once identified, automatically identifies the author(s) and/or the institution.