Update README.md
## Model Details
We have developed and released the [llama3-s](https://huggingface.co/collections/homebrew-research/llama3-s-669df2139f0576abc6eb7405) model family, which natively understands audio and text input.
We continue to extend [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with sound understanding capabilities by leveraging the 700M-token [Instruction Speech v1](https://huggingface.co/datasets/jan-hq/instruction-speech-v1) dataset.
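The dataset pairs discretized speech with text responses. As a rough, hypothetical sketch of how such a pair could be serialized into a single training prompt (the placeholder sound tokens and the chat template below are illustrative assumptions, not the dataset's actual schema):

```python
# Hypothetical sketch: serializing a speech-instruction pair for instruction
# tuning. Token names and template are assumptions, not the dataset schema.
def format_sample(sound_token_ids, answer):
    # Discretized audio appears inline with text as placeholder tokens.
    audio = "".join(f"<|sound_{t}|>" for t in sound_token_ids)
    return f"<|user|>\n{audio}\n<|assistant|>\n{answer}"

print(format_sample([101, 102], "The speaker asks about the weather."))
```

The key idea this illustrates is that audio, once quantized into discrete units, can be treated as ordinary vocabulary tokens interleaved with text.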
**Intended Use Cases** This family is primarily intended for research applications. This version aims to further improve the LLM's sound understanding capabilities.
**Out-of-scope** The use of llama3-s in any manner that violates applicable laws or regulations is strictly prohibited.
## How to Get Started with the Model
## Training process
**Training Metrics Image**: Below is a snapshot of the visualized training loss curve.
![train_loss_curve](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/9bv-kpnqrTxaBhiYrVHN7.png)
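For readers reproducing a curve like this from their own training logs, a common generic technique (a sketch of standard practice, not the authors' plotting code) is to smooth the raw loss values with an exponential moving average before plotting:

```python
# Generic sketch: exponential moving average to smooth a noisy loss curve
# before plotting. alpha closer to 1 gives heavier smoothing.
def ema(values, alpha=0.9):
    smoothed, prev = [], None
    for v in values:
        prev = v if prev is None else alpha * prev + (1 - alpha) * v
        smoothed.append(prev)
    return smoothed

print(ema([2.0, 1.8, 1.9, 1.5, 1.4]))
```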
**BibTeX:**
|
245 |
|
246 |
```
@article{llama3s2024,
  title={llama3-s: Sound Instruction Language Model},
  author={Homebrew Research},
  year={2024},
  month={July},
  url={https://huggingface.co/homebrew-research/llama3-s-0708}
}
```
## Acknowledgement