Instance-Hiding Schemes for Private Distributed Learning

Sanjeev Arora (Princeton University and IAS)

25-Jun-2020, 19:00-20:30

Abstract: An important problem today is how to allow multiple distributed entities to train a shared neural network on their private data while protecting data privacy. Federated learning is a standard framework for distributed deep learning, and one would like to ensure full privacy within that framework. Previously proposed methods, such as homomorphic encryption and differential privacy, come with drawbacks such as large computational overhead or a large drop in accuracy. This work introduces a new and simple encryption of training data, which hides the information in it while still allowing its use in the usual deep learning pipeline. The encryption is inspired by the classic notion of instance-hiding in cryptography. Experiments show that it allows training with a fairly small effect on final accuracy.

We also give some theoretical analysis of privacy guarantees for this encryption, showing that violating privacy requires attackers to solve a difficult computational problem.
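
To make the idea concrete, here is a minimal sketch of what an instance-hiding encryption of training data could look like, assuming a mixup-style scheme that blends each private example with randomly chosen public examples and then applies a random sign-flip mask. The function name instance_hide, the parameter k, and all details of the combination are illustrative assumptions for this sketch, not necessarily the construction presented in the talk.

# Hypothetical sketch of an instance-hiding encryption for training data.
# Assumption: a mixup-style blend with random public examples plus a
# random sign-flip mask; the talk's actual scheme may differ in detail.
import numpy as np

def instance_hide(private_x, public_pool, k=4, rng=None):
    """Encrypt one example by mixing it with k-1 randomly chosen public
    examples and flipping the sign of every coordinate with probability 1/2."""
    rng = rng or np.random.default_rng()
    # Draw k-1 distinct public examples to mix in.
    idx = rng.choice(len(public_pool), size=k - 1, replace=False)
    components = [private_x] + [public_pool[i] for i in idx]
    # Random convex-combination weights summing to 1.
    weights = rng.dirichlet(np.ones(k))
    mixed = sum(w * x for w, x in zip(weights, components))
    # One-time random sign-flip mask, kept secret by the data owner.
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)
    return mask * mixed

Under these assumptions, each distributed entity would encrypt its examples locally this way before feeding them into the standard training pipeline; the mixing choices and the sign-flip mask never leave the data owner.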

Joint work with Yangsibo Huang, Zhao Song, and Kai Li. To appear at ICML 2020.

bioinformatics, game theory, information theory, machine learning, neural and evolutionary computing, classical analysis and ODEs, optimization and control, statistics theory

Audience: researchers in the topic


IAS Seminar Series on Theoretical Machine Learning

Series comments: Seminar series focusing on machine learning. Open to all.

Register in advance at forms.gle/KRz8hexzxa5P4USr7 to receive Zoom link and password. Recordings of past seminars can be found at www.ias.edu/video-tags/seminar-theoretical-machine-learning

Organizers: Ke Li*, Sanjeev Arora
*contact for this listing
