Training Machine Learning Models for Behavior Estimation from Smartwatch with Local Differential Privacy

Description

The increasing use of smartwatches for continuous health monitoring necessitates robust and privacy-preserving approaches. To protect user privacy, local differential privacy (LDP) approaches that estimate joint probability distributions (JPD) from noisy datasets have been proposed: Lopub, Locop, BR, and Castell. This paper focuses on training a machine learning model to recognize user activity (exercise). Training on the JPD rather than on raw records helps prevent adversaries from mounting training-data extraction attacks to recover individual training data, so the model can be safely shared with new users, who run it locally to predict their activity. We use PrivBayes, a central differential privacy approach, as a benchmark. Through comprehensive experiments on different smartwatch datasets, we demonstrate that the Castell approach significantly outperforms Lopub, Locop, and BR in terms of accuracy. This finding underscores Castell’s potential as a superior choice for privacy-preserving activity detection on wearable devices, balancing the trade-off between data privacy and model performance. Our results highlight the importance of selecting appropriate LDP mechanisms to enhance the reliability and privacy of machine learning models in real-world health monitoring applications.
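
As an illustration of the training pipeline the abstract describes, the minimal sketch below samples synthetic records from a joint probability distribution, such as one an LDP mechanism (Lopub, Locop, BR, or Castell) would estimate, and trains a classifier on them. The attribute names, bucket sizes, random placeholder JPD, and choice of classifier are illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumptions noted above): train an activity classifier from a
# JPD instead of from individual users' records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical discretized attribute domains: heart-rate bucket, step-count
# bucket, and the activity label to be predicted.
domains = {"heart_rate": 4, "steps": 4, "activity": 3}
shape = tuple(domains.values())

# Placeholder for the JPD an LDP mechanism would estimate from users'
# randomized reports; a random distribution keeps the sketch runnable.
jpd = rng.random(shape)
jpd /= jpd.sum()

# Draw synthetic training records in proportion to the JPD. No individual
# record is touched, which is what blocks training-data extraction attacks.
n_synth = 10_000
flat_idx = rng.choice(jpd.size, size=n_synth, p=jpd.ravel())
records = np.column_stack(np.unravel_index(flat_idx, shape))

X, y = records[:, :-1], records[:, -1]          # features vs. activity label
model = RandomForestClassifier(random_state=0).fit(X, y)

# The trained model can be shared with new users, who run it locally on their
# own discretized smartwatch readings.
print(model.predict([[2, 1]]))                  # hypothetical reading: buckets (2, 1)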
