NeurIPS 2018 Workshop on Security in Machine Learning

Date: December 7, 2018 (Friday)

Location: Montreal, Canada (co-located with NeurIPS 2018)

Contact: secml2018-org@googlegroups.com (this will email all organizers)

Room: 513DEF

Abstract: There is growing recognition that ML exposes new vulnerabilities in software systems. Threat vectors explored so far include training data poisoning, adversarial examples, and model extraction. Yet the technical community's understanding of the nature and extent of the resulting vulnerabilities remains limited. This is due in part to (1) the large attack surface exposed by ML algorithms, which were designed for deployment in benign environments (as exemplified by the IID assumption for training and test data), (2) the limited availability of theoretical tools to analyze generalization, and (3) the lack of reliable confidence estimates. In addition, the majority of work so far has focused on a small set of application domains and threat models.

This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work that contributes to addressing these challenges. Our agenda will complement contributed papers with invited speakers. The invited talks will emphasize connections between ML security and other research areas such as accountability and formal verification, as well as the societal aspects of ML misuse. We hope this will help identify fundamental directions for future cross-community collaborations, thus charting a path toward secure and trustworthy ML.

Sponsor

Thank you to the Open Philanthropy Project for sponsoring this event. Their grant will fund a best paper award as well as travel support.

Schedule

The following schedule is tentative and subject to change prior to the workshop.

8:45am  Opening Remarks - Organizers

Session 1: Provable methods for secure ML
9:00am  Contributed Talk #1: Sever: A Robust Meta-Algorithm for Stochastic Optimization [video] - Jerry Li
9:15am  Invited Talk #1: Semidefinite relaxations for certifying robustness to adversarial examples [slides, video] - Aditi Raghunathan
9:45am  Contributed Talk #2: On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models [video] - Sven Gowal

Poster session 1
10:00am  Poster Session followed by coffee break

Session 2: ML security and society
11:00am  Keynote: A Sociotechnical Approach to Security in Machine Learning [video] - danah boyd
11:45am  Contributed Talk #3: Law and Adversarial Machine Learning [video] - Salome Viljoen

Lunch break

Session 3: Attacks and interpretability
1:30pm  Invited Talk #2: Interpretability for when NOT to use machine learning [video] - Been Kim
2:00pm  Contributed Talk #5: Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures [video] - Csaba Szepesvari
2:15pm  Invited Talk #3: Semantic Adversarial Examples [video] - Somesh Jha

Poster session 2
2:45pm  Poster Session followed by coffee break

Session 4: ML security from a formal verification perspective
4:15pm  Invited Talk #4: Safety verification for neural networks with provable guarantees [slides, video] - Marta Kwiatkowska
4:45pm  Contributed Talk #4: Model Poisoning Attacks in Federated Learning [video] - Arjun Nitin Bhagoji

Accepted papers

Research track:

Encore track:

Organizing Committee

Nicolas Papernot (Chair)

Florian Tramer (Co-chair)

Kamalika Chaudhuri

Matt Fredrikson

Jacob Steinhardt

Program Committee

Call For Papers

Submission deadline: October 26, 2018 Anywhere on Earth (AoE)

Notification sent to authors: November 12, 2018 Anywhere on Earth (AoE)

Submission server: https://cmt3.research.microsoft.com/SECML2018

The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation (UPDATE: spotlight presentations were removed from the schedule to make more time for poster sessions).

There are two tracks for submissions: a research track and an encore track.

We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:

We particularly welcome submissions that introduce novel datasets or organize competitions around them. When relevant, submissions are encouraged to clearly state their threat model, release open-source code, and take particular care to conduct ethical research. Reviewing will be performed in a single-blind fashion (reviewers will be anonymous but not authors). Reviewing criteria include (a) relevance, (b) quality of the methodology and experiments, and (c) novelty.

Note that submissions focused on privacy are better suited to the workshop dedicated to that topic.

This workshop will not have proceedings.

Contact secml2018-org@googlegroups.com for any questions.