Authors: Aashiq Muhamed, Kevin Kuo
Affiliation: Carnegie Mellon University
Description: We reduce the number of trainable parameters using LoRA, train on client 0 for 16 epochs, and then quantize the uploaded parameters.
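The recipe above can be summarized in a short sketch: attach LoRA adapters so only a small set of parameters is trainable, train locally on a single client, and quantize the adapter weights before uploading them. This is a minimal illustration, not the authors' released code; the backbone model, target modules, and the 8-bit uniform quantizer are assumptions made for the example.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForQuestionAnswering

# Placeholder backbone; the actual competition model differs.
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
lora_cfg = LoraConfig(r=6, lora_alpha=12, target_modules=["query", "value"])
model = get_peft_model(model, lora_cfg)  # only LoRA parameters stay trainable

# ... local training on client 0 for 16 epochs goes here ...

def quantize_update(tensor: torch.Tensor, n_bits: int = 8):
    """Uniform symmetric quantization of a LoRA weight tensor (illustrative)."""
    scale = tensor.abs().max().clamp(min=1e-12) / (2 ** (n_bits - 1) - 1)
    q = torch.clamp(torch.round(tensor / scale),
                    -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return q.to(torch.int8), scale  # upload int8 payload plus one fp32 scale

# Only the adapter weights leave the client, and only in quantized form.
payload = {
    name: quantize_update(p.detach().cpu())
    for name, p in model.named_parameters()
    if "lora_" in name
}
```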
Communication cost is reported as the total data transferred in GB (Total GB) across all FL rounds; question-answering performance is reported as ANLS and Accuracy.

| Date | Method | Total GB | FL Rounds | ANLS | Accuracy | OT |
|---|---|---|---|---|---|---|
| 2024-06-01 | LoRA (rank=6) / 1 round / 1 client (id 0) / 16 epochs / Quantized Communication | 0.0036 | 1 | 0.8577 | 75.6831 | T |
| 2024-06-01 | LoRA (rank=6) / 2 rounds / 1 client / 16 epochs / Quantized Communication | 0.0072 | 2 | 0.8673 | 76.9150 | T |
| 2024-06-01 | LoRA (rank=6) / 2 rounds / 1 client / 16 epochs | 0.0509 | 2 | 0.8683 | 76.7246 | T |
| 2023-10-26 | Communication-Tuned Low-Rank Adaptation of Document Encoder | 0.3797 | 7 | 0.8566 | 76.2199 | T |
| 2023-10-23 | LoRA baseline | 5.5272 | 7 | 0.8566 | 76.2199 | T |
| 2023-10-27 | FedShampoo | 10.0174 | 3 | 0.8891 | 79.4751 | T |
| 2023-10-26 | FedAvg baseline | 44.6561 | 10 | 0.8873 | 79.3054 | T |
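A back-of-the-envelope check of the table, under the assumption (not confirmed by the leaderboard) that Total GB counts the parameters uploaded per round, so cost scales linearly with rounds and with bits per parameter:

```python
def total_upload_gb(n_params: int, bits_per_param: float, n_rounds: int) -> float:
    """Uploaded volume in GB (1 GB = 1e9 bytes), ignoring scales and metadata."""
    return n_params * bits_per_param / 8 * n_rounds / 1e9

# Comparing the two 2-round LoRA entries, quantization alone accounts for
# roughly a 7x reduction in communication at near-identical ANLS.
print(0.0509 / 0.0072)  # ~7.1
```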