Federated and Decentralized Machine Learning

Traditionally, machine learning models are trained in centralized high-performance clusters, which requires collecting large amounts of data, a process that is prohibitively expensive in an IoT environment. Centralized training also brings an additional challenge: ensuring that the training data matches the real distribution that IoT devices will observe. If this distribution changes over time, the system must detect the change, collect new data, and retrain the models.
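As an illustration of the last point, the following is a minimal sketch, not taken from the original text, of how a centralized pipeline might detect distribution shift and trigger retraining. It applies a two-sample Kolmogorov-Smirnov test per feature; the significance threshold and the helper names in the comments are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, incoming: np.ndarray,
                   alpha: float = 0.01) -> bool:
    """Flag a distribution shift between the data the model was trained on
    and fresh observations collected from devices, using a two-sample
    Kolmogorov-Smirnov test on each feature column."""
    for feature in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, feature], incoming[:, feature])
        if p_value < alpha:  # the two distributions differ significantly
            return True
    return False

# Illustrative centralized loop; `collect_batch` and `retrain` are
# hypothetical placeholders for the data-collection and training steps.
# if drift_detected(train_data, collect_batch()):
#     model = retrain()
```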

Several techniques have been proposed to perform training on edge devices, even though such resource-limited devices have traditionally been used only for inference. Some techniques, such as Federated Learning, require centralized coordination, while others, such as Gossip Learning, are fully decentralized. All of these techniques can be applied to different types of machine learning models, yet they share common open questions: how to ensure that data stored on devices cannot be inferred from their communications, how to optimize bandwidth usage when training large models, and how to limit the impact of malicious devices. Compared to Federated Learning, decentralized techniques can offer advantages in terms of scalability, communication efficiency, and privacy, but they must also overcome additional problems, especially when deployed in sparse networks with heterogeneous devices.
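To make the contrast between the two coordination patterns concrete, below is a minimal sketch in Python with NumPy of a server-side Federated Averaging (FedAvg) step and a peer-to-peer Gossip Learning merge. The model is reduced to a flat parameter vector, and all function names and sizes are illustrative assumptions; real systems add subsampling, compression, secure aggregation, and failure handling.

```python
import numpy as np

def fedavg_aggregate(client_weights: list[np.ndarray],
                     client_sizes: list[int]) -> np.ndarray:
    """Federated Averaging: a central server combines client models,
    weighting each one by the size of its local dataset."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def gossip_merge(local: np.ndarray, received: np.ndarray) -> np.ndarray:
    """Gossip Learning: a node averages its model with one received from
    a randomly chosen peer; no central coordinator is involved."""
    return (local + received) / 2.0

# Toy usage with three "clients" holding 10-parameter models.
rng = np.random.default_rng(0)
models = [rng.normal(size=10) for _ in range(3)]
sizes = [100, 250, 50]

global_model = fedavg_aggregate(models, sizes)   # one FedAvg round
node_model = gossip_merge(models[0], models[1])  # one gossip exchange
```

Weighting each client by its local dataset size mirrors the original FedAvg formulation, while the uniform pairwise average is the simplest form of gossip-based model averaging; either step would be repeated over many rounds until the models converge.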