Search results


Results: 1

Abstract

Federated learning (FL) is an emerging approach widely used in distributed machine learning. It allows a large number of users to train a single machine learning model together while the training data remains on individual user devices, which reduces threats to data privacy. Based on iterative model averaging, our study proposes a practical technique for the federated learning of deep networks with improved security and privacy. We also undertake a thorough empirical evaluation that considers various FL frameworks and averaging algorithms. Secure multi-party computation, secure aggregation, and differential privacy are implemented to improve security and privacy in a federated learning environment. Despite these advancements, privacy concerns remain in FL, as the weights or parameters of a trained model may reveal private information about the data used for training. Our work demonstrates that FL can be prone to label-flipping attacks, and a novel method to prevent such attacks is proposed. We compare the standard federated model aggregation and optimization methods FedAvg and FedProx using benchmark data sets. Experiments are implemented in two different FL frameworks, Flower and PySyft, and the results are analyzed. Our experiments confirm that classification accuracy increases in the FL framework compared to a centralized model, and that model performance improves further after adding all the security and privacy algorithms. Our work shows that deep learning models perform well in FL while remaining secure.
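As a rough illustration of the iterative model averaging the abstract refers to, the sketch below shows FedAvg-style weighted parameter averaging with NumPy. It is a minimal sketch under stated assumptions, not the authors' implementation or the Flower/PySyft API; the names `fedavg`, `client_weights`, and `client_sizes` are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: weighted average of client model parameters.

    client_weights: list of per-client parameter lists (one np.ndarray per layer).
    client_sizes:   number of local training samples per client, used as weights.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        # Weight each client's layer parameters by its share of the total data.
        layer_avg = sum(
            (n / total) * w[layer] for w, n in zip(client_weights, client_sizes)
        )
        averaged.append(layer_avg)
    return averaged

# Toy example: three clients, each holding a single 2x2 weight matrix.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [100, 200, 300]
print(fedavg(clients, sizes)[0])  # weighted mean, dominated by the larger clients
```

In a full FL round, the server would broadcast the averaged parameters back to the clients for the next round of local training; secure aggregation or differential privacy would be layered on top of this step.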

Authors and Affiliations

R Anusuya 1
D Karthika Renuka 1

  1. Department of Information Technology, PSG College of Technology, Coimbatore, TN 641004, India
