NeurIPS 2018 Workshop on Security in Machine Learning
Date: December 7, 2018 (Friday)
Location: Montreal, Canada (co-located with NeurIPS 2018)
Contact: secml2018-org@googlegroups.com (this will email all organizers)
Room: 513DEF
Abstract—There is growing recognition that ML exposes new vulnerabilities in software systems. Some of the threat vectors explored so far include training data poisoning, adversarial examples, and model extraction. Yet the technical community's understanding of the nature and extent of the resulting vulnerabilities remains limited. This is due in part to (1) the large attack surface exposed by ML algorithms, which were designed for deployment in benign environments (as exemplified by the IID assumption for training and test data), (2) the limited availability of theoretical tools to analyze generalization, and (3) the lack of reliable confidence estimates. In addition, the majority of work so far has focused on a small set of application domains and threat models.
This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work that contributes to addressing these challenges. Our agenda will complement contributed papers with invited speakers. The latter will emphasize connections between ML security and other research areas such as accountability or formal verification, as well as stress the societal aspects of ML misuse. We hope this will help identify fundamental directions for future cross-community collaborations, thus charting a path towards secure and trustworthy ML.
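For readers less familiar with the threat vectors named above, the snippet below sketches the fast gradient sign method (FGSM), a standard way of crafting adversarial examples at test time. It is a minimal, illustrative PyTorch example, not drawn from the workshop program; `fgsm_attack`, `model`, `x`, `y`, and `epsilon` are placeholder names.

```python
# Minimal FGSM sketch (illustrative only; all names here are placeholders,
# not artifacts from the workshop or its accepted papers).
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of x inside an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss the attacker wants to increase
    loss.backward()                           # gradient of the loss w.r.t. the input
    # One signed gradient step, then clip back to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Certified defenses such as the semidefinite relaxations and interval bound propagation featured in Session 1 aim to provably bound a classifier's behavior over every perturbation inside such an epsilon-ball.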
Sponsor
Thank you to the Open Philanthropy Project for sponsoring this event. Their grant will fund a best paper award as well as support for travel.
Schedule
The following is a tentative schedule and is subject to change prior to the workshop.
Time | Event | Speaker |
---|---|---|
8:45am | Opening Remarks | Organizers |
Session 1: Provable methods for secure ML | | |
9:00am | Contributed Talk #1: Sever: A Robust Meta-Algorithm for Stochastic Optimization [video] | Jerry Li |
9:15am | Invited Talk #1: Semidefinite relaxations for certifying robustness to adversarial examples [slides, video] | Aditi Raghunathan |
9:45am | Contributed Talk #2: On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models [video] | Sven Gowal |
Poster session 1 | | |
10:00am | Poster Session followed by coffee break | |
Session 2: ML security and society | | |
11:00am | Keynote: A Sociotechnical Approach to Security in Machine Learning [video] | danah boyd |
11:45am | Contributed Talk #3: Law and Adversarial Machine Learning [video] | Salome Viljoen |
Lunch break | | |
Session 3: Attacks and interpretability | | |
1:30pm | Invited Talk #2: Interpretability for when NOT to use machine learning [video] | Been Kim |
2:00pm | Contributed Talk #5: Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures [video] | Csaba Szepesvari |
2:15pm | Invited Talk #3: Semantic Adversarial Examples [video] | Somesh Jha |
Poster session 2 | | |
2:45pm | Poster Session followed by coffee break | |
Session 4: ML security from a formal verification perspective | | |
4:15pm | Invited Talk #4: Safety verification for neural networks with provable guarantees [slides, video] | Marta Kwiatkowska |
4:45pm | Contributed Talk #4: Model Poisoning Attacks in Federated Learning [video] | Arjun Nitin Bhagoji |
Accepted papers
Research track:
- Sever: A Robust Meta-Algorithm for Stochastic Optimization (morning poster session). Ilias Diakonikolas (USC); Gautam Kamath (MIT); Daniel M Kane (UCSD); Jerry Li (MIT); Jacob Steinhardt (Stanford); Alistair Stewart (University of Southern California) [paper]
- Evading classifiers in discrete domains with provable optimality guarantees (afternoon poster session). Bogdan Kulynych (EPFL); Jamie Hayes (University College London); Nikita Samarin (UC Berkeley); Carmela Troncoso (EPFL) [paper]
- Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples (afternoon poster session). Felix Kreuk, Assi Barak, Shir Aviv, Moran Baruch, Benny Pinkas, Joseph Keshet (Bar-Ilan University) [paper]
- A Surprising Density of Illusionable Natural Speech (afternoon poster session). Melody Y Guan (Stanford University); Gregory Valiant (Stanford University) [paper]
- On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models (morning poster session). Sven Gowal (DeepMind); Krishnamurthy Dvijotham (DeepMind); Robert Stanforth (DeepMind); Rudy R Bunel (University of Oxford); Chongli Qin (DeepMind); Jonathan Uesato (DeepMind); Relja Arandjelovic (DeepMind); Timothy Arthur Mann (DeepMind); Pushmeet Kohli (DeepMind) [paper]
- Evaluating and Understanding the Robustness of Adversarial Logit Pairing (afternoon poster session). Logan Engstrom (MIT); Andrew Ilyas (MIT); Anish Athalye (Massachusetts Institute of Technology) [paper]
- Law and Adversarial Machine Learning (afternoon poster session). Ram Shankar Siva Kumar (Microsoft Azure Security); David R. O'Brien (Berkman Klein Center for Internet and Society); Kendra Albert (Harvard Law School); Salome Viljoen (Berkman Klein Center) [paper]
- How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples (afternoon poster session). Utku Ozbulak (Ghent University); Wesley De Neve (Ghent University); Arnout Van Messem (Ghent University) [paper]
- Unknown Family Detection Based on Family-Invariant Representation (morning poster session). Toshiki Shibahara (NTT Secure Platform Laboratories); Daiki Chiba (NTT Secure Platform Laboratories); Mitsuaki Akiyama (NTT Secure Platform Laboratories); Kunio Hato (NTT Secure Platform Laboratories); Daniel Dalek (NTT Security); Masayuki Murata (Osaka University, Japan) [paper]
- Decoupling Direction and Norm for Efficient Gradient-based L2 Adversarial Attacks (afternoon poster session). Jérôme Rony (ÉTS Montréal); Luiz Gustavo (Canada); Robert Sabourin (Canada); Eric Granger (École de technologie supérieure, Université du Québec) [paper]
- Towards the first adversarially robust neural network model on MNIST (morning poster session). Lukas Schott (University of Tübingen); Jonas Rauber (University of Tübingen); Matthias Bethge (University of Tübingen); Wieland Brendel (University of Tübingen) [paper]
- Verification of deep probabilistic models (morning poster session). Krishnamurthy Dvijotham (DeepMind); Marta Garnelo (DeepMind); Alhussein Fawzi (Google DeepMind); Pushmeet Kohli (DeepMind) [paper]
- A Statistical Approach to Assessing Neural Network Robustness (morning poster session). Stefan Webb; Tom Rainforth; Yee Whye Teh; M. Pawan Kumar (University of Oxford) [paper]
- Logit Pairing Methods Can Fool Gradient-Based Attacks (afternoon poster session). Marius Mosbach (Saarland University); Maksym Andriushchenko (Saarland University); Thomas Trost (Saarland University); Matthias Hein (University of Tübingen); Dietrich Klakow (Saarland University) [paper]
- Adversarial Reprogramming of Neural Networks (afternoon poster session). Gamaleldin F Elsayed (Google Brain); Ian Goodfellow (Google Brain); Jascha Sohl-Dickstein (Google Brain) [paper]
- EMBER: An Open Dataset for Training Static PE Malware Machine Learning Models (afternoon poster session). Hyrum Anderson (Endgame, Inc.); Phil Roth (Endgame, Inc.) [paper]
- Adversarial Examples from Computational Constraints (morning poster session). Sebastien Bubeck (Microsoft Research); Yin Tat Lee (UW); Eric Price (University of Texas at Austin); Ilya Razenshteyn (Microsoft Research) [paper]
- Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures (afternoon poster session). Jonathan Uesato (DeepMind); Ananya Kumar (DeepMind); Csaba Szepesvari (DeepMind/University of Alberta); Pushmeet Kohli (DeepMind) [paper]
- The Curse of Concentration in Robust Learning (morning poster session). Saeed Mahloujifar (University of Virginia); Dimitrios I Diochnos (University of Virginia); Mohammad Mahmoody (University of Virginia) [paper]
- DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules (morning poster session). Nicholas Frosst (Google); Sara Sabour (Google); Geoffrey Hinton (Google) [paper]
- Model Poisoning Attacks in Federated Learning (afternoon poster session). Arjun Nitin Bhagoji (Princeton University); Supriyo Chakraborty (IBM Research); Prateek Mittal (Princeton University); Seraphin Calo (IBM Research) [paper]
- Targeted Adversarial Examples for Black Box Audio Systems (afternoon poster session). Rohan Taori (University of California, Berkeley); Amog Kamsetty (UC Berkeley); Brenton Chu (UC Berkeley); Nikita Vemuri (UC Berkeley) [paper]
- On the Sensitivity of Adversarial Robustness to Input Data Distributions (morning poster session). Gavin Weiguang Ding; Yik Chau Lui; Xiaomeng Jin; Luyu Wang; Ruitong Huang (Borealis AI)
- PassGAN: A Deep Learning Approach for Password Guessing (afternoon poster session). Briland Hitaj (Stevens Institute of Technology); Paolo Gasti (NYIT); Giuseppe Ateniese (Stevens Institute of Technology); Fernando Perez-Cruz (ETH Zurich) [paper]
- Attend and Attack: Attention Guided Adversarial Attacks on Visual Question Answering Models (afternoon poster session). Vasu Sharma (Carnegie Mellon University); Ankita Kalra (Carnegie Mellon University); Vaibhav Vaibhav (Carnegie Mellon University); Simral Chaudhary (Carnegie Mellon University); Labhesh Patel (Jumio Inc.); Louis-Philippe Morency (Carnegie Mellon University) [paper]
- Adversarial Examples as an Input-Fault Tolerance Problem (afternoon poster session). Angus Galloway (University of Guelph); Anna Golubeva (University of Waterloo); Graham Taylor (University of Guelph) [paper]
- Towards Hiding Adversarial Examples from Network Interpretation (afternoon poster session). Akshayvarun Subramanya (UMBC); Vipin Pillai (UMBC); Hamed Pirsiavash (UMBC) [paper]
Encore track:
- PAC-learning in the presence of evasion adversaries (morning poster session). Daniel Cullina (Princeton University); Arjun Nitin Bhagoji (Princeton University); Prateek Mittal (Princeton University) [paper]
- Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring (morning poster session). Yossi Adi (Bar-Ilan University); Carsten Baum (Bar-Ilan University); Moustapha Cisse (Facebook AI Research); Benny Pinkas (Bar-Ilan University); Joseph Keshet (Dept. of Computer Science, Bar-Ilan University) [paper]
- Generating Natural Language Adversarial Examples (afternoon poster session). Yash Sharma (Cooper Union); Moustafa Alzantot (UCLA); Ahmed Elgohary (University of Maryland); Bo-Jhang Ho (UCLA); Mani Srivastava (UC Los Angeles); Kai-Wei Chang (UCLA) [paper]
Organizing Committee
- Nicolas Papernot (Chair)
- Florian Tramer (Co-chair)
- Kamalika Chaudhuri
- Matt Fredrikson
- Jacob Steinhardt
Program Committee
- Aditi Raghunathan (Stanford University)
- Alexey Kurakin (Google Brain)
- Ananth Raghunathan (Google Brain)
- Anish Athalye (Massachusetts Institute of Technology)
- Arunesh Sinha (University of Michigan)
- Battista Biggio (University of Cagliari)
- Berkay Celik (Pennsylvania State University)
- Catherine Olsson (Google Brain)
- David Evans (University of Virginia)
- Dimitris Tsipras (Massachusetts Institute of Technology)
- Earlence Fernandes (University of Washington)
- Eric Wong (Carnegie Mellon University)
- Fartash Faghri (University of Toronto)
- Florian Tramer (Stanford University)
- Hadi Abdullah (University of Florida)
- Jamie Hayes (University College London)
- Jonathan Uesato (DeepMind)
- Kassem Fawaz (University of Wisconsin-Madison)
- Kathrin Grosse (CISPA)
- Krishna Gummadi (MPI-SWS)
- Krishnamurthy Dvijotham (DeepMind)
- Matthew Wicker (University of Georgia)
- Nicholas Carlini (Google Brain)
- Nicolas Papernot (Google Brain)
- Octavian Suciu (University of Maryland)
- Pin-Yu Chen (IBM)
- Rudy Bunel (University of Oxford)
- Shreya Shankar (Stanford University)
- Suman Jana (Columbia University)
- Varun Chandrasekaran (University of Wisconsin-Madison)
- Xiaowei Huang (Liverpool University)
- Yanjun Qi (University of Virginia)
- Yigitcan Kaya (University of Maryland)
- Yizheng Chen (Georgia Tech)
Call For Papers
Submission deadline: October 26, 2018 Anywhere on Earth (AoE)
Notification sent to authors: November 12, 2018 Anywhere on Earth (AoE)
Submission server: https://cmt3.research.microsoft.com/SECML2018
The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation. (UPDATE: spotlight presentations were removed from the schedule to make more time for the poster sessions.)
There are two tracks for submissions:
- Research Track: Submissions to this track will introduce novel ideas or results. Submissions should follow the NeurIPS format and not exceed 4 pages (excluding references, appendices or large figures).
- Encore Track: Papers already accepted at other venues can be submitted to this track. There are no format constraints.
We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:
- Training time attacks (e.g., data poisoning)
- Test time attacks (e.g., adversarial examples, model stealing)
- Cryptography for machine learning
- Theoretical foundations of secure machine learning
- Formal verification of machine learning systems
- Identifying bugs in machine learning systems
- Position papers raising new directions for secure machine learning
We particularly welcome submissions that introduce novel datasets and/or organize competitions on novel datasets. When relevant, submissions are encouraged to clearly state their threat model, release open-source code, and take particular care to conduct ethical research. Reviewing will be single-blind (reviewers are anonymous to authors, but authors are not anonymous to reviewers). Reviewing criteria include (a) relevance, (b) quality of the methodology and experiments, and (c) novelty.
Note that submissions on privacy would be best submitted to the workshop dedicated to this topic.
This workshop will not have proceedings.
Contact secml2018-org@googlegroups.com for any questions.