NIPS 2018 Workshop on Security in Machine Learning
Date: December 7, 2018 (Friday)
Location: Montreal, Canada (co-located with NIPS 2018)
Submission deadline: October 26, 2018 Anywhere on Earth (AoE)
Submission server: (Not open for submission yet) https://cmt3.research.microsoft.com/SECML2018
Contact: firstname.lastname@example.org (this will email all organizers)
Abstract—There is growing recognition that ML exposes new vulnerabilities in software systems. Some of the threat vectors explored so far include training data poisoning, adversarial examples, and model extraction. Yet the technical community's understanding of the nature and extent of the resulting vulnerabilities remains limited. This is due in part to (1) the large attack surface exposed by ML algorithms, which were designed for deployment in benign environments---as exemplified by the IID assumption for training and test data, (2) the limited availability of theoretical tools to analyze generalization, and (3) the lack of reliable confidence estimates. In addition, the majority of work so far has focused on a small set of application domains and threat models.
This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work that contributes to addressing these challenges. Our agenda will complement contributed papers with invited speakers. The latter will emphasize connections between ML security and other research areas such as accountability or formal verification, as well as stress the social aspects of ML misuse. We hope this will help identify fundamental directions for future cross-community collaborations, thus charting a path towards secure and trustworthy ML.
Thank you to the Open Philanthropy Project for sponsoring this event. Their grant will fund a best paper award as well as support for travel.
Call For Papers
The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a spotlight presentation, in addition to a poster slot.
There are two tracks for submissions:
- Research Track: Submissions to this track should introduce novel ideas or results. Submissions should follow the NIPS format and not exceed 4 pages (excluding references, appendices, and large figures).
- Encore Track: Papers already accepted at other venues can be submitted to this track. There are no format constraints.
We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:
- Training time attacks (e.g., data poisoning)
- Test time attacks (e.g., adversarial examples, model stealing)
- Cryptography for machine learning
- Theoretical foundations of secure machine learning
- Formal verification of machine learning systems
- Identifying bugs in machine learning systems
- Position papers raising new directions for secure machine learning
We particularly welcome submissions that introduce novel datasets and/or organize competitions on novel datasets. When relevant, submissions are encouraged to clearly state their threat model, release open-source code, and take particular care in conducting ethical research. Reviewing will be performed in a single-blind fashion (reviewers will remain anonymous, but authors will not). Reviewing criteria include (a) relevance, (b) quality of the methodology and experiments, and (c) novelty.
Note that submissions focused on privacy are better suited to the workshop dedicated to that topic.
This workshop will not have proceedings.
Contact email@example.com for any questions.
Schedule
The following is a tentative schedule and is subject to change prior to the workshop.
|Session 1: Provable methods for secure ML|
|9:00am|Contributed Talk #1|TBD|
|9:15am|Invited Talk #1|Aditi Raghunathan|
|9:45am|Contributed Talk #2|TBD|
|10:00am|Poster Spotlights #1|TBD|
|10:10am|Poster Session followed by coffee break||
|Session 2: ML security and society|
|11:45am|Contributed Talk #3|TBD|
|Session 3: On the connections between ML security, robust optimization, and accountability|
|1:30pm|Invited Talk #2|Been Kim|
|2:00pm|Contributed Talk #4|TBD|
|2:15pm|Invited Talk #3|Moustapha Cisse|
|2:45pm|Contributed Talk #5|TBD|
|3:00pm|Poster Spotlights #2|TBD|
|3:20pm|Poster Session followed by coffee break||
|Session 4: ML security from a formal verification perspective|
|4:15pm|Invited Talk #4|Marta Kwiatkowska|
|4:45pm|Invited Talk #5|Somesh Jha|
Program Committee
- Aditi Raghunathan (Stanford University)
- Alexey Kurakin (Google Brain)
- Ananth Raghunathan (Google Brain)
- Anish Athalye (Massachusetts Institute of Technology)
- Arunesh Sinha (University of Michigan)
- Battista Biggio (University of Cagliari)
- Berkay Celik (Pennsylvania State University)
- Catherine Olsson (Google Brain)
- Chang Liu (University of California, Berkeley)
- David Evans (University of Virginia)
- Dimitris Tsipras (Massachusetts Institute of Technology)
- Earlence Fernandes (University of Washington)
- Eric Wong (Carnegie Mellon University)
- Fartash Faghri (University of Toronto)
- Florian Tramer (Stanford University)
- Hadi Abdullah (University of Florida)
- Jonathan Uesato (DeepMind)
- Kassem Fawaz (University of Wisconsin-Madison)
- Kathrin Grosse (CISPA)
- Krishna Gummadi (MPI-SWS)
- Matthew Wicker (University of Georgia)
- Nicholas Carlini (Google Brain)
- Nicolas Papernot (Google Brain)
- Pin-Yu Chen (IBM)
- Pushmeet Kohli (DeepMind)
- Shreya Shankar (Stanford University)
- Suman Jana (Columbia University)
- Varun Chandrasekaran (University of Wisconsin-Madison)
- Xiaowei Huang (University of Liverpool)
- Yanjun Qi (University of Virginia)
- Yizheng Chen (Georgia Tech)