NIPS 2018 Workshop on Security in Machine Learning

Date: December 7, 2018 (Friday)

Location: Montreal, Canada (co-located with NIPS 2018)

Submission deadline: October 26, 2018 Anywhere on Earth (AoE)

Notification sent to authors: November 12, 2018 Anywhere on Earth (AoE)

Submission server:

Contact: (this will email all organizers)

Room: TBD

Abstract—There is growing recognition that ML exposes new vulnerabilities in software systems. Threat vectors explored so far include training data poisoning, adversarial examples, and model extraction. Yet the technical community's understanding of the nature and extent of the resulting vulnerabilities remains limited. This is due in part to (1) the large attack surface exposed by ML algorithms, which were designed for deployment in benign environments---as exemplified by the IID assumption for training and test data, (2) the limited availability of theoretical tools for analyzing generalization, and (3) the lack of reliable confidence estimates. In addition, the majority of work so far has focused on a small set of application domains and threat models.

This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work that contributes to addressing these challenges. Our agenda will complement contributed papers with invited speakers. The latter will emphasize connections between ML security and other research areas such as accountability and formal verification, as well as stress the social aspects of ML misuse. We hope this will help identify fundamental directions for future cross-community collaborations, thus charting a path towards secure and trustworthy ML.


Thank you to the Open Philanthropy Project for sponsoring this event. Their grant will fund a best paper award as well as travel support.

Schedule (tentative)

The following is a tentative schedule and is subject to change prior to the workshop.

8:45am  Opening Remarks (speaker TBD)

Session 1: Provable methods for secure ML
9:00am  Contributed Talk #1: Sever: A Robust Meta-Algorithm for Stochastic Optimization (speaker TBD)
9:15am  Invited Talk #1: Aditi Raghunathan
9:45am  Contributed Talk #2: On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models (speaker TBD)

Poster session 1
10:00am Poster session, followed by coffee break

Session 2: ML security and society
11:00am Keynote: danah boyd
11:45am Contributed Talk #3: Law and Adversarial Machine Learning (speaker TBD)

Lunch break

Session 3: Attacks and interpretability
1:30pm  Invited Talk #2: Been Kim
2:00pm  Contributed Talk #4: Model Poisoning Attacks in Federated Learning (speaker TBD)
2:15pm  Invited Talk #3: Moustapha Cisse
2:45pm  Contributed Talk #5: Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures (speaker TBD)

Poster session 2
3:00pm  Poster session, followed by coffee break

Session 4: ML security from a formal verification perspective
4:15pm  Invited Talk #4: Marta Kwiatkowska
4:45pm  Invited Talk #5: Somesh Jha

Accepted papers

Research track:

Encore track:

Organizing Committee

Nicolas Papernot

Florian Tramer

Kamalika Chaudhuri

Matt Fredrikson

Jacob Steinhardt

Program Committee

Call For Papers

The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a spotlight presentation, in addition to a poster slot.

There are two tracks for submissions: a research track and an encore track.

We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:

We particularly welcome submissions that introduce novel datasets and/or organize competitions on novel datasets. When relevant, submissions are encouraged to clearly state their threat model, release open-source code, and take particular care in conducting ethical research. Reviewing will be performed in a single-blind fashion (reviewers will be anonymous, but not authors). Reviewing criteria include (a) relevance, (b) quality of the methodology and experiments, and (c) novelty.

Note that submissions on privacy would be best submitted to the workshop dedicated to that topic.

This workshop will not have proceedings.

Contact for any questions.