Suppose I am doing a Federated Learning experiment on MNIST. As you know, MNIST has 10 classes. Federated Learning is especially useful in settings like hospital collaborations, because one hospital may hold samples from different classes than another hospital. I want to reproduce this non-IID-ness. Suppose I have 2 clients: the first client takes the first 5 digits of MNIST (0, 1, 2, 3 and 4) and the second client takes the last digits (5, …
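To make the described split concrete, here is a minimal sketch of the label-based partition, assuming PyTorch/torchvision (the data path and variable names are illustrative, not from any specific FL framework):

```python
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Download MNIST once; both clients index into the same underlying dataset.
mnist = datasets.MNIST(root="./data", train=True, download=True,
                       transform=transforms.ToTensor())

# mnist.targets is a tensor holding the label of every sample.
client1_idx = torch.where(mnist.targets < 5)[0].tolist()   # digits 0-4
client2_idx = torch.where(mnist.targets >= 5)[0].tolist()  # digits 5-9

client1_loader = DataLoader(Subset(mnist, client1_idx), batch_size=64, shuffle=True)
client2_loader = DataLoader(Subset(mnist, client2_idx), batch_size=64, shuffle=True)
```

Each loader then plays the role of one client's private data during local training.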
Hi, I'm a student working on a Federated Learning problem. Before tackling it with the proper tools like OpenFL or Flower, I started a small local experiment to try the technique. I managed to train multiple models on IID data; now I'm struggling with the local_update() function, which should collect the models so that I can take all of their weights and compute their mean. I read some documentation of …
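In case it helps to see the aggregation step concretely, here is a minimal FedAvg-style sketch, assuming PyTorch (average_weights and client_models are illustrative names, not part of OpenFL or Flower):

```python
import copy
import torch

def average_weights(state_dicts):
    """Element-wise mean of a list of model state_dicts (equal client weighting)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        if avg[key].is_floating_point():  # skip integer buffers, e.g. BatchNorm counters
            avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# usage:
# global_model.load_state_dict(average_weights([m.state_dict() for m in client_models]))
```

Note that the original FedAvg algorithm weights each client's contribution by its number of samples; the unweighted mean above matches that only when the client shards are equal-sized.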
What are the differences between meta-learning, semi-supervised learning, self-supervised learning, active learning, federated learning, and few-shot learning, both in definition and in application? What are their pros and cons?
What are the differences between zero-shot, one-shot, and few-shot learning? How do they differ in usage and application, and in which fields are they applied? How do their pros and cons compare?
I am using OpenFL, the Intel framework for Federated Learning. If I run their tutorial example, the loss decreases and the accuracy is in the range 0-100%, like this:

[16:21:05] METRIC Round 4, collaborator env_one train result train_loss: 3.083468 experiment.py:112
[16:21:29] METRIC Round 4, collaborator env_one localy_tuned_model_validate result acc: 0.640100 experiment.py:112
[16:21:53] METRIC Round 4, collaborator env_one aggregated_model_validate result acc: 0.632200 experiment.py:112
METRIC Round 4, collaborator Aggregator localy_tuned_model_validate result acc: 0.640100 experiment.py:112
METRIC Round 4, collaborator Aggregator aggregated_model_validate result acc: …
I'm very new to machine learning. I am doing a project for a course called Parallel and Distributed Computing, in which we have to speed up a heavy computation using parallelism or distributed computing. My idea is to divide a dataset into equal parts and, for each subset, train a neural network on a separate machine in the cloud. Once the models are trained, they would be returned to me and somehow combined into …
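A minimal sketch of the partitioning step described here, assuming PyTorch (partition_equally is an illustrative name); the trained models are then typically combined by averaging their weights, in the FedAvg style shown earlier on this page:

```python
from torch.utils.data import random_split

def partition_equally(dataset, n_parts):
    """Split a dataset into n_parts random shards of (near-)equal size."""
    base = len(dataset) // n_parts
    lengths = [base] * n_parts
    lengths[-1] += len(dataset) - base * n_parts  # last shard absorbs the remainder
    return random_split(dataset, lengths)

# Each shard would be shipped to one machine for training; the returned models'
# state_dicts can then be averaged element-wise to form the combined model.
```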
I see most methods using SGD-based optimizers. Since more advanced optimizers such as Adam are common in centralized learning, why are they not as commonly used in federated learning?
I am working on a TNN, and I found that it does not work like other neural networks, which have layers and weights. My question is: can a TNN be used with federated learning, in which we train the model on clients and only send the model weights to the server?
Could someone list the pros and cons of using federated learning with the following packages: TensorFlow Federated and PySyft? Are there certain tasks that are specific to either, or is one clearly better than the other? Is there any other module that is better than these? If so, could you please link them below?