Please use this identifier to cite or link to this item:
https://scidar.kg.ac.rs/handle/123456789/21169
Title: Privacy-Preserving in Machine Learning: Differential Privacy Case Study
Authors: Iričanin, Aleksa; Ristic, Olga; Milošević, Marjan
Journal: 10th International Scientific Conference Technics, Informatics and Education - TIE 2024
Issue Date: 2024
Abstract: The burgeoning field of Machine Learning (ML) has revolutionized various aspects of our lives. However, the reliance on vast amounts of data, often containing personal information, raises concerns about individual privacy. Striking a balance between effective ML model training and the protection of sensitive data is crucial for responsible development and ethical implementation. This paper explores the challenges and potential solutions for preserving privacy in ML training, focusing on differential privacy (DP). The advantages of implementing DP in ML training include robust protection of individual data, enabling meaningful insights from large datasets while maintaining privacy; this is essential for ethical and responsible data usage in machine learning applications. However, DP in ML training also presents challenges, including scalability issues and the trade-off between utility and privacy. The paper further covers the mathematical formulation of the Laplace and Gaussian mechanisms and how each adds noise, followed by a comparative analysis of their efficiency on the dataset.
URI: https://scidar.kg.ac.rs/handle/123456789/21169
Type: conferenceObject
DOI: 10.46793/TIE24.089I
Appears in Collections: Faculty of Technical Sciences, Čačak
Files in This Item:
File | Description | Size | Format
---|---|---|---
14 - I.12..pdf | | 574.57 kB | Adobe PDF
This item is licensed under a Creative Commons License
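The abstract above mentions the Laplace and Gaussian mechanisms and their noise addition. As an illustration only (not code from the paper), the following minimal Python sketch privatizes the mean of a bounded dataset with both mechanisms; the dataset, sensitivity, epsilon, and delta values are assumptions chosen for demonstration.

```python
# Minimal sketch of Laplace vs. Gaussian noise addition for differential privacy.
# Not taken from the paper; parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Laplace mechanism: noise scale b = sensitivity / epsilon gives pure eps-DP."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def gaussian_mechanism(true_value, sensitivity, epsilon, delta):
    """Gaussian mechanism: sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    satisfies (eps, delta)-DP for eps in (0, 1)."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return true_value + rng.normal(loc=0.0, scale=sigma)

# Example query: the mean of values clipped to [0, 1], so its sensitivity is 1/n.
data = rng.uniform(0.0, 1.0, size=1000)
true_mean = data.mean()
sens = 1.0 / len(data)

print("true mean                 :", true_mean)
print("Laplace  (eps=0.5)        :", laplace_mechanism(true_mean, sens, epsilon=0.5))
print("Gaussian (eps=0.5, 1e-5)  :", gaussian_mechanism(true_mean, sens, epsilon=0.5, delta=1e-5))
```

The two mechanisms reflect the utility/privacy trade-off the abstract refers to: the Laplace mechanism provides pure ε-DP with noise scaled to sensitivity/ε, while the Gaussian mechanism relaxes the guarantee to (ε, δ)-DP in exchange for noise whose tails are lighter, which can preserve more utility for the same nominal budget.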