SECURE DETECTION OF MALICIOUS CLIENTS IN FEDERATED LEARNING USING ROBUST AGGREGATION TECHNIQUES
DOI: https://doi.org/10.64751/
ABSTRACT
Federated Learning (FL) has emerged as a powerful distributed machine learning paradigm that enables multiple clients to collaboratively train a global model without sharing their raw data, thereby preserving data privacy. However, the decentralized nature of federated learning makes it vulnerable to security threats: malicious or compromised clients may inject poisoned updates to degrade the performance of the global model. Such attacks, including model poisoning and data poisoning, can significantly undermine the reliability and trustworthiness of federated learning systems. This research proposes a secure detection framework for identifying malicious clients in federated learning using robust aggregation techniques. The proposed system analyzes client model updates and employs robust aggregation strategies to detect abnormal or adversarial contributions during training. By comparing gradient updates and evaluating statistical deviations among participating clients, the system can effectively identify suspicious clients and reduce their influence on the global model. The framework integrates robust aggregation algorithms and anomaly detection mechanisms to ensure secure model updates while maintaining the privacy-preserving characteristics of federated learning. Experimental results demonstrate that the proposed approach improves the resilience of federated learning against adversarial attacks, enhances model accuracy, and maintains system reliability even in the presence of malicious participants. This method contributes to building trustworthy and secure federated learning environments for applications such as healthcare, finance, and distributed IoT systems.
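The detection idea described above, flagging clients whose updates deviate statistically from the rest and aggregating only the trusted contributions, can be illustrated with a minimal sketch. This is not the paper's exact algorithm; it assumes flattened model updates, uses the coordinate-wise median as a robust reference point, and applies a z-score threshold (here the hypothetical parameter z_thresh) on each client's distance from that reference:

```python
import math
import statistics

def robust_aggregate(updates, z_thresh=2.5):
    """Flag statistically deviant client updates and average the rest.

    updates: list of per-client model updates, each a flat list of floats.
    Returns (aggregated_update, indices_of_flagged_clients).
    Illustrative sketch only; thresholds and distance metric are assumptions.
    """
    n_params = len(updates[0])
    # Coordinate-wise median is robust to a minority of poisoned values.
    median_update = [statistics.median(u[i] for u in updates)
                     for i in range(n_params)]
    # Euclidean distance of each client's update from the median reference.
    dists = [math.sqrt(sum((u[i] - median_update[i]) ** 2
                           for i in range(n_params)))
             for u in updates]
    mu = statistics.mean(dists)
    sigma = statistics.pstdev(dists) or 1e-12  # avoid divide-by-zero
    # Clients whose deviation z-score exceeds the threshold are suspicious.
    flagged = [j for j, d in enumerate(dists) if (d - mu) / sigma > z_thresh]
    # Aggregate (simple mean) over trusted clients only.
    trusted = [u for j, u in enumerate(updates) if j not in flagged]
    agg = [statistics.mean(u[i] for u in trusted) for i in range(n_params)]
    return agg, flagged

# Usage: nine honest clients plus one poisoned update.
honest = [[1.0, 1.0, 1.0] for _ in range(9)]
poisoned = [[10.0, 10.0, 10.0]]
agg, flagged = robust_aggregate(honest + poisoned)
```

In this toy run the poisoned client's distance from the median is far above the honest clients', so it is excluded before averaging, which is the "reduce their influence on the global model" step the abstract refers to.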







