nicholasKluge committed
Commit 58d1504 · 1 parent: ee71e72
Update README.md
README.md CHANGED
@@ -65,8 +65,6 @@ This repository has the [source code](https://github.com/Nkluge-correa/TeenyTiny
 - [FlashAttention](https://github.com/Dao-AILab/flash-attention)
 - [Codecarbon](https://github.com/mlco2/codecarbon)
 
-Check out the training logs in [Weights and Biases](https://api.wandb.ai/links/nkluge-correa/vws4g032).
-
 ## Intended Uses
 
 The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use follows the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
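Since the "Intended Uses" text in the diff context invites further fine-tuning of the pre-trained checkpoints, here is a minimal sketch (not part of the commit) of loading one with the Transformers library. The checkpoint id `nkluge-correa/TeenyTinyLlama-160m` and the Portuguese prompt are assumptions for illustration; substitute the checkpoint you intend to adapt.

```python
# Minimal sketch: load a pre-trained TeenyTinyLlama checkpoint as a
# starting point for fine-tuning or evaluation.
# NOTE: the model id below is an assumption, not taken from the commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nkluge-correa/TeenyTinyLlama-160m"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Quick sanity check: generate a short Portuguese continuation.
inputs = tokenizer("A capital do Brasil é", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Per the license note above, any model fine-tuned from such a checkpoint should ship with its own risk and bias assessment.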