Federated Learning: some clients with 0% accuracy

Suppose I am running a Federated Learning experiment on MNIST, which, as you know, has 10 classes. Federated Learning is especially useful in settings like hospital collaborations, because one hospital may hold samples from different classes than another. I want to reproduce this non-IID-ness. Suppose I have 2 clients: the first client takes the first 5 digits of MNIST (0, 1, 2, 3 and 4) and the second client takes the last digits (5, …
Category: Data Science
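A minimal sketch of the non-IID split described above, assuming the labels are available as a NumPy array (the `split_non_iid` helper and the toy label array are hypothetical, not part of any FL framework):

```python
import numpy as np

def split_non_iid(labels, client_classes):
    """Partition sample indices so each client only sees its assigned classes.

    labels: 1-D array of integer class labels (e.g. MNIST digits 0-9).
    client_classes: one set of classes per client.
    """
    return [np.flatnonzero(np.isin(labels, list(classes)))
            for classes in client_classes]

# Toy labels standing in for the MNIST targets (hypothetical data).
labels = np.array([0, 5, 1, 7, 3, 9, 4, 2, 8, 6])
parts = split_non_iid(labels, [{0, 1, 2, 3, 4}, {5, 6, 7, 8, 9}])
# parts[0] indexes only digits 0-4, parts[1] only digits 5-9.
```

With real MNIST you would apply the same index partition to the training set (e.g. via a `Subset` of a torchvision dataset) before handing each shard to its client.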

How to compute the mean of weights of multiple models?

Hi, I'm a student working on a Federated Learning problem, but before doing it with proper tools like OpenFL or Flower, I started a small local experiment to try out the technique. I managed to train multiple models on IID data; now I'm struggling with the local_update() function, which should collect the models so that I can take all of their weights and compute the mean. I read some documentation of …
Category: Data Science
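The averaging step the question asks about can be sketched as an element-wise mean over the models' parameter dictionaries (plain FedAvg with equal client weights). This is a sketch using NumPy arrays; `average_weights` and the toy models are hypothetical, though the same pattern applies to exported PyTorch `state_dict`s:

```python
import numpy as np

def average_weights(state_dicts):
    """Element-wise mean of several models' parameters (unweighted FedAvg).

    state_dicts: list of dicts mapping parameter name -> array,
    all with identical keys and shapes.
    """
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}

# Two toy "models" with one weight matrix and one bias each (hypothetical).
m1 = {"w": np.array([[1.0, 2.0]]), "b": np.array([0.0])}
m2 = {"w": np.array([[3.0, 4.0]]), "b": np.array([2.0])}
avg = average_weights([m1, m2])  # w -> [[2.0, 3.0]], b -> [1.0]
```

If clients hold different amounts of data, FedAvg weights each model by its client's sample count instead of using a plain mean.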

What are the differences between meta, semi-supervised, self-supervised, active, federated and few-shot learning?

What are the differences between meta learning, semi-supervised learning, self-supervised learning, active learning, federated learning and few-shot learning, both in definition and in application? What are the pros and cons of each?
Category: Data Science

Accuracy over 100%

I am using OpenFL, the Intel framework for Federated Learning. If I run their tutorial example, the loss decreases and the accuracy stays in the range 0-100%, like this:

[16:21:05] METRIC Round 4, collaborator env_one train result train_loss: 3.083468 experiment.py:112
[16:21:29] METRIC Round 4, collaborator env_one localy_tuned_model_validate result acc: 0.640100 experiment.py:112
[16:21:53] METRIC Round 4, collaborator env_one aggregated_model_validate result acc: 0.632200 experiment.py:112
METRIC Round 4, collaborator Aggregator localy_tuned_model_validate result acc: 0.640100 experiment.py:112
METRIC Round 4, collaborator Aggregator aggregated_model_validate result acc: …
Category: Data Science
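An accuracy above 100% usually means a value that is already a fraction (like the 0.6401 in the log) gets multiplied by 100 more than once along the metric pipeline. A minimal sketch of scaling exactly once (the `accuracy` helper is hypothetical, not an OpenFL API):

```python
def accuracy(correct, total, as_percent=False):
    """Return accuracy as a fraction in [0, 1], or optionally as a percent.

    Scale by 100 in exactly one place; a reported value like 6401.0
    typically means a fraction was converted to a percent twice.
    """
    frac = correct / total
    return frac * 100 if as_percent else frac

acc = accuracy(6401, 10000)  # fraction, consistent with the log format above
```

Checking each place the metric is touched (model validation code, aggregator, logger) for an extra `* 100` is usually enough to find the bug.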

Dividing a dataset to parallelize machine learning training on the cloud

I'm very new to machine learning. I am doing a project for a subject called parallel and distributed computing, in which we have to speed up a heavy computation using parallelism or distributed computing. My idea was to divide a dataset into equal parts and train a separate neural network on each subset, each on its own machine in the cloud. Once the models are trained, they would be returned to me and somehow combined into …
Category: Data Science
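The first step described, splitting the dataset into equal parts for the worker machines, can be sketched with shuffled index shards (the `make_shards` helper is hypothetical; the later "combine" step would then average or ensemble the returned models):

```python
import numpy as np

def make_shards(n_samples, n_workers, seed=0):
    """Shuffle sample indices and split them into near-equal shards,
    one shard per worker machine."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, n_workers)

shards = make_shards(10, 3)  # 3 shards that together cover all 10 indices
```

Each worker would then train on `dataset[shard]`; note that simply averaging independently trained networks often works poorly unless training is coordinated (as in federated averaging), so ensembling the models' predictions is a common alternative.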

About

Geeks Mental is a community that publishes articles and tutorials about Web, Android, Data Science, new techniques and Linux security.