Private Disclosure of Information



Introduction

Warner argues that the lack of privacy guarantees can make subjects reluctant to share their data with data collectors (such as doctors, government agencies, and researchers) or can even lead subjects to provide false information (Warner 1965). Subjects therefore need assurance that their privacy will be preserved throughout the entire process of data collection and use.

There are multiple stages in the life cycle of data, including i) the disclosure (or submission) of the data by the subjects to the data collector; ii) the processing of the data; iii) the analysis; and/or iv) the publishing of (often a sanitized version of) the data or of findings based on them. In this paper we focus on the disclosure phase of privacy-sensitive data by the data owners. Our framework for Private Disclosure of Information (PDI) thus aims to prevent an adversary from inferring certain sensitive information about patients (typically their diagnosis) from the data disclosed during health telemonitoring communication with their doctors.

In traditional encryption approaches to maintaining privacy, it is often implicitly assumed that the data themselves are the private information. However, in some scenarios the data can be used to infer private information about the subjects to whom they pertain. For example, respiration rate by itself might not be considered private information. However, if collected respiration-rate data are used to infer whether an individual is a smoker, they become sensitive. One can argue that because smoking status is private, respiration-rate data become private by implication.
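To make the risk concrete, here is a toy sketch (not from the paper; all models and numbers are invented for illustration) of how an eavesdropper could recover a private attribute from seemingly innocuous readings using a simple likelihood comparison:

```python
# Toy illustration (not from the paper): an eavesdropper inferring a
# private attribute (smoker vs. non-smoker) from respiration-rate
# readings.  All models and numbers are invented for this sketch.
import math

# Hypothetical population models: (mean, stddev) of resting respiration
# rate in breaths per minute, conditioned on the private class.
CLASS_MODELS = {
    "non-smoker": (14.0, 2.0),
    "smoker": (18.0, 2.5),
}

def log_likelihood(readings, mean, std):
    """Log-likelihood of the readings under a Gaussian model."""
    return sum(
        -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)
        for x in readings
    )

def eavesdropper_guess(readings):
    """Pick the class whose model best explains the observed stream."""
    return max(CLASS_MODELS, key=lambda c: log_likelihood(readings, *CLASS_MODELS[c]))

# A week of transmitted readings: each one looks harmless in isolation.
transmitted = [17.8, 18.4, 19.1, 17.5, 18.9, 18.2, 17.9]
print(eavesdropper_guess(transmitted))  # -> smoker
```

Each individual reading looks harmless, but jointly the stream leaks the private attribute; this is exactly the kind of inference PDI is meant to frustrate.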

Under such circumstances, one should sanitize the transmitted data so that they reveal as little as possible about the private information to an adversary. In short, our objective is to encode the transmitted data in order to hide another, private piece of information. In the words of Sweeney: “Computer security is not privacy protection.” The converse also holds: privacy protection does not replace security. Our approach should therefore be viewed as complementary to classical security approaches; for example, data can first be sanitized and then encrypted, as sketched below.
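The following is a minimal sketch of that layering, assuming the third-party `cryptography` package for the encryption step and with `sanitize()` standing in as a hypothetical placeholder for a PDI-style encoding:

```python
# Minimal sketch of "sanitize, then encrypt".  sanitize() is a
# hypothetical placeholder for a PDI-style encoding; encryption uses
# the third-party `cryptography` package (pip install cryptography).
import json
from cryptography.fernet import Fernet

def sanitize(record: dict) -> dict:
    """Placeholder sanitizer: re-encode fields so they leak less about
    the private attribute (the real encoding is scheme-specific)."""
    out = dict(record)
    out["respiration_rate"] = round(out["respiration_rate"])  # coarsen
    return out

key = Fernet.generate_key()  # shared between Bob and Alice
cipher = Fernet(key)

record = {"respiration_rate": 18.4, "timestamp": "2015-04-27T10:00:00"}
payload = cipher.encrypt(json.dumps(sanitize(record)).encode())

# `payload` is what travels over the wire: encrypted, and even if
# decrypted by Eve, already sanitized.
print(json.loads(cipher.decrypt(payload)))  # Alice recovers the sanitized record
```

The point of the ordering is that even if Eve breaks or bypasses the encryption layer, she only recovers data that were already sanitized.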

Threat Model

[Figure 1: The telemonitoring threat model, with patient Bob, physician Alice, and passive eavesdropper Eve.]

The threat model we consider is depicted in Figure 1. In this setting, a patient (Bob) uses telemonitoring to update his physician (Alice) about his health status. Alice and Bob both know Bob’s diagnosis, but Eve, a passive eavesdropper, wants to learn Bob’s diagnosis from the transmitted data (which may well be encrypted). The goal is to mask the data in a way that:

  1. allows Alice to utilize the data for clinical purposes; and
  2. limits Eve’s ability to infer Bob’s diagnosis from the data transmitted by the telemonitoring system (a toy sketch of such masking follows this list).
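One very simple way to approximate such masking, offered purely as a toy sketch and not as the construction of Aranki and Bajcsy (2015), is to shift each reading by a class-dependent offset so that the mean of the encoded signal no longer depends on the diagnosis; Alice, who knows the diagnosis, inverts the shift exactly, while Eve does not know which offset was applied. The class means below are invented for illustration:

```python
# Toy class-conditional masking (not the paper's construction).
# Bob shifts each reading so its mean no longer depends on his
# diagnosis; Alice, knowing the diagnosis, undoes the shift.
import math

# Hypothetical class-conditional means of the monitored signal.
CLASS_MEAN = {"diagnosis_A": 14.0, "diagnosis_B": 18.0}
GLOBAL_MEAN = sum(CLASS_MEAN.values()) / len(CLASS_MEAN)

def encode(x: float, diagnosis: str) -> float:
    """Bob's side: align the class-conditional mean with the global mean."""
    return x - CLASS_MEAN[diagnosis] + GLOBAL_MEAN

def decode(y: float, diagnosis: str) -> float:
    """Alice's side: knowing the diagnosis, invert the shift."""
    return y + CLASS_MEAN[diagnosis] - GLOBAL_MEAN

reading = 18.4                         # Bob's raw measurement
sent = encode(reading, "diagnosis_B")  # the value Eve observes on the wire
assert math.isclose(decode(sent, "diagnosis_B"), reading)  # Alice recovers it
```

Aligning only the means is of course insufficient in general, since higher-order statistics can still leak the class; the referenced paper treats the general problem formally.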

Approach

(TODO: complete)

For more details, refer to Aranki and Bajcsy (2015).

References

Aranki, Daniel, and Ruzena Bajcsy. 2015. “Private Disclosure of Information in Health Tele-Monitoring.” arXiv:1504.07313 [cs, math], April. http://arxiv.org/abs/1504.07313.
Warner, Stanley L. 1965. “Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias.” Journal of the American Statistical Association 60 (309): 63–69. https://doi.org/10.1080/01621459.1965.10480775.