Please use this identifier to cite or link to this item:
https://scidar.kg.ac.rs/handle/123456789/21169
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Iričanin, Aleksa | - |
dc.contributor.author | Ristic, Olga | - |
dc.contributor.author | Milošević, Marjan | - |
dc.date.accessioned | 2024-10-08T09:57:35Z | - |
dc.date.available | 2024-10-08T09:57:35Z | - |
dc.date.issued | 2024 | - |
dc.identifier.isbn | 9788677762766 | en_US |
dc.identifier.uri | https://scidar.kg.ac.rs/handle/123456789/21169 | - |
dc.description.abstract | The burgeoning field of Machine Learning (ML) has revolutionized various aspects of our lives. However, the reliance on vast amounts of data, often containing personal information, raises concerns about individual privacy. Striking a balance between effective ML model training and protecting sensitive data is crucial for responsible development and ethical implementation. This paper explores the challenges and potential solutions for preserving privacy in ML training, focusing on differential privacy (DP). The advantages of implementing DP in ML training include robust protection of individual data, enabling meaningful insights to be drawn from large datasets while maintaining privacy. This is essential for ethical and responsible data usage in machine learning applications. However, DP in ML training also presents challenges, including scalability issues and trade-offs between utility and privacy. The paper further covers the mathematical foundations of the Laplace and Gaussian mechanisms and their noise-addition schemes, followed by a comparative analysis of their efficiency on the dataset. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Faculty of Technical Sciences Čačak, University of Kragujevac | en_US |
dc.relation | MSTDI - 451-03-66/2024-03/200132 | en_US |
dc.relation.ispartof | 10th International Scientific Conference Technics, Informatics and Education - TIE 2024 | en_US |
dc.rights | CC0 1.0 Universal | * |
dc.rights.uri | http://creativecommons.org/publicdomain/zero/1.0/ | * |
dc.subject | ML | en_US |
dc.subject | Differential privacy | en_US |
dc.subject | Gaussian Mechanism | en_US |
dc.subject | Laplace Mechanism | en_US |
dc.subject | data privacy | en_US |
dc.title | Privacy-Preserving in Machine Learning: Differential Privacy Case Study | en_US |
dc.type | conferenceObject | en_US |
dc.description.version | Published | en_US |
dc.identifier.doi | 10.46793/TIE24.089I | en_US |
dc.type.version | PublishedVersion | en_US |
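The abstract above refers to the Laplace and Gaussian mechanisms for differentially private noise addition. As background for the record, here is a minimal, hypothetical sketch of the two mechanisms applied to a dataset mean; the function names, the `[0, 1]` clipping range, and the privacy parameters are illustrative assumptions, not taken from the paper's experiments.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    # Laplace mechanism: adding Laplace(0, sensitivity/epsilon) noise
    # gives pure epsilon-differential privacy.
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    # Gaussian mechanism: the classic calibration
    # sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    # gives (epsilon, delta)-differential privacy for epsilon < 1.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=1000)   # records assumed clipped to [0, 1]
true_mean = data.mean()

# Sensitivity of the mean of n values bounded in [0, 1] is 1/n:
# changing one record moves the mean by at most 1/n.
sensitivity = 1.0 / len(data)

dp_mean_laplace = laplace_mechanism(true_mean, sensitivity, epsilon=1.0, rng=rng)
dp_mean_gaussian = gaussian_mechanism(true_mean, sensitivity,
                                      epsilon=1.0, delta=1e-5, rng=rng)
```

At these parameters both noisy means stay close to the true mean, which reflects the utility/privacy trade-off the abstract discusses: tighter epsilon means larger noise scale and lower utility.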
Appears in Collections: | Faculty of Technical Sciences, Čačak |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
14 - I.12..pdf | | 574.57 kB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License