DEIM Seminar
Title
Local Differential Privacy for Sensitive Data and Attributes
Speaker
Dr. Yusuke Kawamoto
Organizing professor
Javier Parra-Arnau
Institution
National Institute of Advanced Industrial Science and Technology (AIST), Cyber Physical Security Res
Date
18-03-2019 12:00
Summary
LDP (Local Differential Privacy) has been widely studied as a way to estimate statistics of personal data (e.g., the distribution underlying the data) while protecting users' privacy. Although LDP does not require a trusted third party, it treats all personal data as equally sensitive, which causes excessive obfuscation and hence a loss of utility. We present a brief overview of our two recent papers on LDP.
In the first work [1], we introduce the notion of ULDP (Utility-optimized LDP), which provides a privacy guarantee equivalent to LDP only for sensitive data. We first consider the setting where all users use the same obfuscation mechanism, and propose two mechanisms providing ULDP: utility-optimized randomized response and utility-optimized RAPPOR. We then consider the setting where the distinction between sensitive and non-sensitive data may differ from user to user. For this setting, we propose a personalized ULDP mechanism with semantic tags that estimates the distribution of personal data with high utility while keeping secret what is sensitive for each user. We show, both theoretically and experimentally, that our mechanisms provide much higher utility than existing LDP mechanisms when much of the data is non-sensitive. We also show that when most of the data is non-sensitive, our mechanisms even provide almost the same utility as non-private mechanisms in the low privacy regime.
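The idea behind utility-optimized randomized response can be illustrated with a minimal sketch (a simplification for intuition, not the paper's exact construction; the parameter choice below is one way to make every output in the sensitive alphabet satisfy the epsilon-LDP ratio bound):

```python
import math
import random

def u_rr(x, sensitive, non_sensitive, eps):
    """Simplified sketch of a utility-optimized randomized response.

    Sensitive values are obfuscated with k-ary randomized response over
    the sensitive alphabet; non-sensitive values are reported truthfully
    with probability p and otherwise hidden among the sensitive alphabet.
    p is chosen so that every output in the sensitive alphabet is produced
    with probability c by every input except itself, and c * e^eps by
    itself, i.e., the eps-LDP ratio bound holds on those outputs.
    """
    assert x in sensitive or x in non_sensitive
    k = len(sensitive)
    c = 1.0 / (math.exp(eps) + k - 1)                   # RR base probability
    p = (math.exp(eps) - 1) / (math.exp(eps) + k - 1)   # truthful-report prob.
    if x in sensitive:
        # k-ary randomized response restricted to sensitive outputs
        if random.random() < c * math.exp(eps):
            return x
        return random.choice([y for y in sensitive if y != x])
    # Non-sensitive input: reporting it truthfully reveals the input, which
    # is acceptable precisely because the input is non-sensitive.
    if random.random() < p:
        return x
    return random.choice(sensitive)
```

With this choice of p, a non-sensitive input lands on each sensitive output with probability (1 - p) / k = c, so an observer seeing a sensitive output learns no more than under ordinary eps-LDP, while non-sensitive inputs are reported truthfully most of the time, which is where the utility gain comes from.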
In the second work [2], we propose a formal model for the privacy of user attributes in terms of LDP. In particular, we introduce a notion, called distribution privacy, as the LDP of probability distributions. Roughly, a local obfuscation mechanism with distribution privacy perturbs its inputs so that an attacker cannot gain significant information on the probability distribution of the inputs by observing an output of the mechanism. We then show that existing local mechanisms provide only a limited level of distribution privacy. To provide a stronger level, we introduce a "tupling mechanism" that perturbs a given input and adds random dummy data. We apply the tupling mechanism to the protection of user attributes in location-based services, and demonstrate experimentally that it outperforms popular LDP mechanisms in terms of both distribution privacy and service quality.
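The tupling mechanism can likewise be sketched in a few lines (the number of dummies and the underlying point obfuscator below are illustrative choices, not the paper's tuned parameters):

```python
import math
import random

def rr(domain, eps):
    """k-ary randomized response over `domain`, used here as the point
    obfuscator applied to the real input before tupling."""
    def mech(x):
        keep = math.exp(eps) / (math.exp(eps) + len(domain) - 1)
        if random.random() < keep:
            return x
        return random.choice([y for y in domain if y != x])
    return mech

def tupling(x, domain, k, point_mech):
    """Sketch of a tupling mechanism: perturb the real input with a local
    obfuscator and hide it among k dummies drawn uniformly from the domain.
    The output is a shuffled (k+1)-tuple, so an observer cannot tell which
    element originated from the real input."""
    dummies = [random.choice(domain) for _ in range(k)]
    out = dummies + [point_mech(x)]
    random.shuffle(out)
    return tuple(out)
```

For example, `tupling(3, list(range(10)), 4, rr(list(range(10)), 1.0))` returns a 5-tuple of domain values; the uniform dummies flatten the empirical distribution an attacker can observe, which is what pushes the guarantee from point privacy toward distribution privacy.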
[1] Takao Murakami and Yusuke Kawamoto. "Utility-Optimized Local Differential Privacy Mechanisms for Distribution Estimation". Under submission. https://arxiv.org/pdf/1807.11317.pdf
[2] Yusuke Kawamoto and Takao Murakami. "Differentially Private Obfuscation Mechanisms for Hiding Probability Distributions". Under submission. https://arxiv.org/pdf/1812.00939.pdf
Place
Laboratory 231
Language
English