Please use this identifier to cite or link to this item: https://scidar.kg.ac.rs/handle/123456789/21169
Full metadata record
DC Field | Value | Language
dc.contributor.author | Iričanin, Aleksa | -
dc.contributor.author | Ristic, Olga | -
dc.contributor.author | Milošević, Marjan | -
dc.date.accessioned | 2024-10-08T09:57:35Z | -
dc.date.available | 2024-10-08T09:57:35Z | -
dc.date.issued | 2024 | -
dc.identifier.isbn | 9788677762766 | en_US
dc.identifier.uri | https://scidar.kg.ac.rs/handle/123456789/21169 | -
dc.description.abstract | The burgeoning field of Machine Learning (ML) has revolutionized various aspects of our lives. However, the reliance on vast amounts of data, often containing personal information, raises concerns about individual privacy. Striking a balance between effective ML model training and the protection of sensitive data is crucial for responsible development and ethical implementation. This paper explores the challenges and potential solutions for preserving privacy in ML training, focusing on differential privacy (DP). The advantages of implementing DP in ML training include robust protection of individual data, enabling meaningful insights from large datasets while maintaining privacy. This is essential for ethical and responsible data usage in machine learning applications. However, DP in ML training presents challenges, including scalability issues and trade-offs between utility and privacy. The paper also covers the Laplace and Gaussian mechanisms and their noise-addition procedures, followed by a comparative analysis of their efficiency on the dataset. | en_US
dc.language.iso | en | en_US
dc.publisher | Faculty of Technical Sciences Čačak, University of Kragujevac | en_US
dc.relation | MSTDI - 451-03-66/2024-03/200132 | en_US
dc.relation.ispartof | 10th International Scientific Conference Technics, Informatics and Education - TIE 2024 | en_US
dc.rights | CC0 1.0 Universal | *
dc.rights.uri | http://creativecommons.org/publicdomain/zero/1.0/ | *
dc.subject | ML | en_US
dc.subject | Differential privacy | en_US
dc.subject | Gaussian Mechanism | en_US
dc.subject | Laplace Mechanism | en_US
dc.subject | data privacy | en_US
dc.title | Privacy-Preserving in Machine Learning: Differential Privacy Case Study | en_US
dc.type | conferenceObject | en_US
dc.description.version | Published | en_US
dc.identifier.doi | 10.46793/TIE24.089I | en_US
dc.type.version | PublishedVersion | en_US
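The abstract describes adding calibrated Laplace or Gaussian noise to achieve differential privacy. A minimal sketch of both mechanisms for a scalar query of known sensitivity, assuming NumPy; the epsilon/delta values and the example count are illustrative and not taken from the paper:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Pure epsilon-DP: add Laplace noise with scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """(epsilon, delta)-DP: add Gaussian noise calibrated via the classic
    analytic bound sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return value + rng.normal(loc=0.0, scale=sigma)

rng = np.random.default_rng(0)
true_count = 128  # illustrative counting-query result (sensitivity 1)
noisy_lap = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
noisy_gau = gaussian_mechanism(true_count, sensitivity=1.0, epsilon=0.5,
                               delta=1e-5, rng=rng)
print(noisy_lap, noisy_gau)
```

For the same epsilon, the Gaussian mechanism typically needs a larger noise scale to cover its delta failure probability, which is one axis of the utility/privacy comparison the abstract mentions.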
Appears in Collections: Faculty of Technical Sciences, Čačak

Page view(s): 128

Download(s): 20

Files in This Item:
File | Size | Format
14 - I.12..pdf | 574.57 kB | Adobe PDF


This item is licensed under a Creative Commons License.