---
title: README
emoji: πŸ‘€
colorFrom: yellow
colorTo: indigo
sdk: static
pinned: false
---

# Welcome to the Lots-of-LoRAs Collection!

## Introduction
We are proud to host the world's largest open collection of LoRAs (Low-Rank Adaptation). This initiative is part of a broader effort to advance research and development in LoRA and PEFT (Parameter-Efficient Fine-Tuning).

## Project Description
Our collection currently includes over 500 LoRAs, making it the most extensive open repository of its kind. This project aims to support and accelerate research by providing a comprehensive set of LoRA adapters for analysis, experimentation, and application development.

By making these resources available to the community, we hope to encourage collaboration and innovation, helping to push the boundaries of what's possible in PEFT.

We also include the exact data that these LoRAs were trained on, with the corresponding training, validation, and test splits.
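
For example, assuming the splits are published as standard Hugging Face datasets, a task's data could be loaded with the `datasets` library. This is a minimal sketch; the repo id below is a hypothetical placeholder, so browse the collection for the actual dataset names.

```python
# A minimal sketch of loading one task's splits with the `datasets` library.
# The repo id below is a hypothetical placeholder; browse the collection
# for the actual dataset names.
from datasets import load_dataset

splits = load_dataset("Lots-of-LoRAs/some-task-data")  # hypothetical repo id
train = splits["train"]
valid = splits["validation"]
test = splits["test"]
print(len(train), len(valid), len(test))
```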

Details can be found in the paper: https://www.arxiv.org/abs/2407.00066
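
Likewise, a LoRA from the collection can be attached to its base model with the `peft` library. The sketch below is illustrative only: both repo ids are placeholders, and each adapter's model card should be checked for the base model it was actually trained on.

```python
# A minimal sketch of attaching a LoRA from the collection to its base model
# with the `peft` library. Both repo ids are illustrative placeholders;
# check each adapter's model card for the base model it was trained on.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed base model
adapter_id = "Lots-of-LoRAs/some-task-adapter"  # hypothetical adapter repo id

base_model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the low-rank adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)
```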

## How to Contribute
We are always looking for new contributions and collaborators:
- **Contribute Data**: If you have LoRAs that you would like to share, please feel free to submit them to our collection.

## Contact Us
Interested in contributing or have questions? Reach out to us at [email](mailto:[email protected]).

## Thank You!
We appreciate your interest in our project and hope you will join us in this exciting venture.