| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-27 18:27:08 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-27 18:22:57 |
| card | string | length 11 to 1.01M |

Each row below gives the first nine columns on a single pipe-delimited line (modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |), followed by the full contents of its `card` column.
facebook/mms-tts-mam-dialect_northern | facebook | 2023-09-01T10:18:30Z | 107 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:18:08Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Mam Text-to-Speech
This repository contains the **Mam (mam-dialect_northern)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-mam-dialect_northern")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-mam-dialect_northern")
text = "some example text in the Mam language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
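Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```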
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-stn | facebook | 2023-09-01T10:18:27Z | 109 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:18:08Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Owa Text-to-Speech
This repository contains the **Owa (stn)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-stn")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-stn")
text = "some example text in the Owa language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
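Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```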
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-prk | facebook | 2023-09-01T10:18:07Z | 105 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:17:52Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Wa, Parauk Text-to-Speech
This repository contains the **Wa, Parauk (prk)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-prk")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-prk")
text = "some example text in the Wa, Parauk language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
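Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```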
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-klu | facebook | 2023-09-01T10:17:59Z | 108 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:17:28Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Klao Text-to-Speech
This repository contains the **Klao (klu)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-klu")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-klu")
text = "some example text in the Klao language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
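Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```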
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-ukr | facebook | 2023-09-01T10:17:48Z | 379 | 3 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:17:28Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Ukrainian Text-to-Speech
This repository contains the **Ukrainian (ukr)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-ukr")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-ukr")
text = "some example text in the Ukrainian language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
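Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```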
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-heb | facebook | 2023-09-01T10:17:33Z | 1,748 | 5 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:16:56Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Hebrew Text-to-Speech
This repository contains the **Hebrew (heb)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-heb")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-heb")
text = "some example text in the Hebrew language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
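Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```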
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-mam-dialect_central | facebook | 2023-09-01T10:17:21Z | 109 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:16:56Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Mam Text-to-Speech
This repository contains the **Mam (mam-dialect_central)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-mam-dialect_central")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-mam-dialect_central")
text = "some example text in the Mam language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
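Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```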
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-srx | facebook | 2023-09-01T10:17:21Z | 106 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:16:56Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Sirmauri Text-to-Speech
This repository contains the **Sirmauri (srx)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-srx")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-srx")
text = "some example text in the Sirmauri language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
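Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```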
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-acn | facebook | 2023-09-01T10:16:49Z | 106 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:16:16Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Achang Text-to-Speech
This repository contains the **Achang (acn)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-acn")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-acn")
text = "some example text in the Achang language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
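Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```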
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-kle | facebook | 2023-09-01T10:16:33Z | 105 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:16:16Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Kulung Text-to-Speech
This repository contains the **Kulung (kle)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-kle")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-kle")
text = "some example text in the Kulung language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
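Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```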
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
sosuneko/ppo-PyramidsRND | sosuneko | 2023-09-01T10:16:23Z | 3 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us"] | reinforcement-learning | 2023-09-01T10:16:15Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
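For example, with a hypothetical configuration file and run ID (both are placeholders, not values taken from this repository):
```bash
mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id=PyramidsRND --resume
```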
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sosuneko/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
facebook/mms-tts-uig-script_cyrillic | facebook | 2023-09-01T10:16:08Z | 127 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:15:51Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Uyghur Text-to-Speech
This repository contains the **Uyghur (uig-script_cyrillic)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-uig-script_cyrillic")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-uig-script_cyrillic")
text = "some example text in the Uyghur language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
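Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```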
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-srn | facebook | 2023-09-01T10:15:56Z | 105 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:15:35Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Sranan Tongo Text-to-Speech
This repository contains the **Sranan Tongo (srn)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-srn")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-srn")
text = "some example text in the Sranan Tongo language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
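Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```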
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-ach | facebook | 2023-09-01T10:15:21Z | 105 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2023-09-01T10:14:55Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Acholi Text-to-Speech
This repository contains the **Acholi (ach)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-ach")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-ach")
text = "some example text in the Acholi language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
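Because the duration predictor is stochastic, repeated runs on the same text produce slightly different waveforms. If you need reproducible output, you can fix the random seed before calling the model; a minimal sketch using the `set_seed` helper from 🤗 Transformers (the seed value itself is arbitrary):
```python
from transformers import set_seed

set_seed(555)  # seeds Python, NumPy and PyTorch RNGs so repeated runs give the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```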
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# The model returns a (1, num_samples) torch tensor; convert it to a 1-D NumPy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-bno
|
facebook
| 2023-09-01T10:15:13Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:14:47Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Bantoanon Text-to-Speech
This repository contains the **Bantoanon (bno)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-bno")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-bno")
text = "some example text in the Bantoanon language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-srm
|
facebook
| 2023-09-01T10:14:48Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:14:31Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Saramaccan Text-to-Speech
This repository contains the **Saramaccan (srm)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-srm")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-srm")
text = "some example text in the Saramaccan language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-ppk
|
facebook
| 2023-09-01T10:14:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:14:00Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Uma Text-to-Speech
This repository contains the **Uma (ppk)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-ppk")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-ppk")
text = "some example text in the Uma language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-mza
|
facebook
| 2023-09-01T10:14:16Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:13:59Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Mixtec, Santa María Zacatepec Text-to-Speech
This repository contains the **Mixtec, Santa María Zacatepec (mza)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-mza")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-mza")
text = "some example text in the Mixtec, Santa María Zacatepec language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-acf
|
facebook
| 2023-09-01T10:13:52Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:13:34Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Lesser Antillean French Creole Text-to-Speech
This repository contains the **Lesser Antillean French Creole (acf)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-acf")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-acf")
text = "some example text in the Lesser Antillean French Creole language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-udu
|
facebook
| 2023-09-01T10:13:52Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:13:36Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Uduk Text-to-Speech
This repository contains the **Uduk (udu)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-udu")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-udu")
text = "some example text in the Uduk language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-sri
|
facebook
| 2023-09-01T10:13:47Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:13:27Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Siriano Text-to-Speech
This repository contains the **Siriano (sri)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-sri")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-sri")
text = "some example text in the Siriano language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-maj
|
facebook
| 2023-09-01T10:13:14Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:47Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Mazatec, Jalapa de Díaz Text-to-Speech
This repository contains the **Mazatec, Jalapa de Díaz (maj)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-maj")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-maj")
text = "some example text in the Mazatec, Jalapa de Díaz language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-myy
|
facebook
| 2023-09-01T10:13:13Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:40Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Macuna Text-to-Speech
This repository contains the **Macuna (myy)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-myy")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-myy")
text = "some example text in the Macuna language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-hat
|
facebook
| 2023-09-01T10:13:11Z | 628 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:47Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Haitian Creole Text-to-Speech
This repository contains the **Haitian Creole (hat)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-hat")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-hat")
text = "some example text in the Haitian Creole language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
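Several sentences can also be synthesised in a single batched forward pass by padding the tokenised inputs to a common length. The sketch below is illustrative only: the example strings are placeholders rather than genuine Haitian Creole, and since a single tensor is returned, shorter items come back padded to the length of the longest waveform:

```python
import torch

# Placeholder sentences; substitute real Haitian Creole text.
texts = ["some example text", "a second, slightly longer example text"]

# padding=True aligns both sequences so they can be processed as one batch
inputs = tokenizer(texts, return_tensors="pt", padding=True)

with torch.no_grad():
    waveforms = model(**inputs).waveform  # shape: (batch_size, num_samples)

print(waveforms.shape)
```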
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-poy
|
facebook
| 2023-09-01T10:13:07Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:47Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Pogolo Text-to-Speech
This repository contains the **Pogolo (poy)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-poy")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-poy")
text = "some example text in the Pogolo language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
ccore/reversed-test-125m
|
ccore
| 2023-09-01T10:12:57Z | 156 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"license:bsd-3-clause-clear",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-01T10:05:43Z |
---
license: bsd-3-clause-clear
---
test of logic
[INSTRUCTION] what color is the sky?
[RESPONSE] the color of the sky is blue
[REVERSED-PROMPT] what color is the sky?
|
facebook/mms-tts-udm
|
facebook
| 2023-09-01T10:12:39Z | 111 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:22Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Udmurt Text-to-Speech
This repository contains the **Udmurt (udm)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-udm")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-udm")
text = "some example text in the Udmurt language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-kjh
|
facebook
| 2023-09-01T10:12:37Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:14Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Khakas Text-to-Speech
This repository contains the **Khakas (kjh)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-kjh")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-kjh")
text = "some example text in the Khakas language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a 1-D (mono) numpy array, so squeeze the batch dimension and convert from torch
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().float().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-sqi
|
facebook
| 2023-09-01T10:12:36Z | 528 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:07Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Albanian Text-to-Speech
This repository contains the **Albanian (sqi)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-sqi")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-sqi")
text = "some example text in the Albanian language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-ace
|
facebook
| 2023-09-01T10:12:32Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:12:07Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Aceh Text-to-Speech
This repository contains the **Aceh (ace)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-ace")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-ace")
text = "some example text in the Aceh language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-por
|
facebook
| 2023-09-01T10:11:51Z | 3,670 | 14 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:11:35Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Portuguese Text-to-Speech
This repository contains the **Portuguese (por)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-por")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-por")
text = "some example text in the Portuguese language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-myx
|
facebook
| 2023-09-01T10:11:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:11:22Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Masaaba Text-to-Speech
This repository contains the **Masaaba (myx)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-myx")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-myx")
text = "some example text in the Masaaba language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-kjg
|
facebook
| 2023-09-01T10:11:23Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:10:47Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Khmu Text-to-Speech
This repository contains the **Khmu (kjg)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-kjg")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-kjg")
text = "some example text in the Khmu language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
facebook/mms-tts-acd
|
facebook
| 2023-09-01T10:11:12Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:10:47Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Gikyode Text-to-Speech
This repository contains the **Gikyode (acd)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-acd")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-acd")
text = "some example text in the Gikyode language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
ProomptEngineer/shocked-face-meme-one-piece
|
ProomptEngineer
| 2023-09-01T10:11:03Z | 3 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-01T10:10:58Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEOPShockedFace
widget:
- text: PEOPShockedFace
---
# Shocked Face [Meme] [One Piece]

<p>give your characters the funny shocked face from One Piece...</p><p>weights 0.-1</p><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a> </h2>
## Image examples for the model:









|
facebook/mms-tts-bmu
|
facebook
| 2023-09-01T10:10:56Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:10:32Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Somba-Siawari Text-to-Speech
This repository contains the **Somba-Siawari (bmu)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-bmu")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-bmu")
text = "some example text in the Somba-Siawari language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
ProomptEngineer/pe-balloon-diffusion-style
|
ProomptEngineer
| 2023-09-01T10:10:19Z | 74 | 12 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-01T10:10:15Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEBalloonStyle
widget:
- text: PEBalloonStyle
---
# PE Balloon Diffusion [Style]

<h2 id="heading-5">Wondered what things would look like if they were made of balloons? Then try this one!</h2><h2 id="heading-6">Weights 0.8-1</h2><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><h2 id="heading-10">Add "Ballon Sculpture" if effect is not strong enough</h2><p></p>
## Image examples for the model:









|
facebook/mms-tts-bmr
|
facebook
| 2023-09-01T10:10:14Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T10:09:48Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Muinane Text-to-Speech
This repository contains the **Muinane (bmr)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-bmr")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-bmr")
text = "some example text in the Muinane language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
rjindal/rohit-bloom-finetuned_SMALL
|
rjindal
| 2023-09-01T10:09:07Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T10:09:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
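For reference, a minimal sketch of rebuilding this quantization config at load time and attaching the adapter from this repository (the base model id below is only an assumption; the actual one is recorded in the adapter's `adapter_config.json`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
# Mirror the 8-bit bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
base_id = "bigscience/bloom-560m" # hypothetical base model; check adapter_config.json for the real one
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
# Attach the PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, "rjindal/rohit-bloom-finetuned_SMALL")
```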
### Framework versions
- PEFT 0.6.0.dev0
|
ProomptEngineer/pe-neon-uv-diffusion-style
|
ProomptEngineer
| 2023-09-01T10:08:11Z | 44 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-01T10:08:07Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PENeonUV
widget:
- text: PENeonUV
---
# PE Neon UV Diffusion [Style]

<p>Neon UV Style inspired by rave makeup and outfits...</p><p>weights 0.8-1</p><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
77xiaoyuanzi8/code_reviewer_demo
|
77xiaoyuanzi8
| 2023-09-01T10:03:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T08:17:47Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
EmirhanExecute/Pixelcopter-t2
|
EmirhanExecute
| 2023-09-01T09:59:37Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-01T09:59:34Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-t2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.10 +/- 19.55
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
s3nh/NousResearch-Yarn-Llama-2-7b-128k-GGUF
|
s3nh
| 2023-09-01T09:55:07Z | 2 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-01T09:14:51Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
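While the section above is still marked TODO, here is a minimal sketch of running a GGUF file with `llama-cpp-python` (the file name and parameters are assumptions; any GGUF-compatible runtime such as llama.cpp also works):
```python
from llama_cpp import Llama
# File name is a placeholder; use the actual .gguf file shipped in this repository.
llm = Llama(model_path="Yarn-Llama-2-7b-128k.Q4_K_M.gguf", n_ctx=4096)
out = llm("Summarise GGUF in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```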
# Original model card
|
FredericProtat/poca-SoccerTwos
|
FredericProtat
| 2023-09-01T09:49:22Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-01T09:48:48Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: FredericProtat/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Yoshimitsujhi/1-09-falcon7b-health
|
Yoshimitsujhi
| 2023-09-01T09:42:04Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-09-01T07:49:53Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
model-index:
- name: 1-09-falcon7b-health
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1-09-falcon7b-health
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
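As a rough sketch, the list above corresponds approximately to the following `TrainingArguments` (the output directory is a placeholder):
```python
from transformers import TrainingArguments
# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="1-09-falcon7b-health",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # 4 x 4 = total train batch size of 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=320,
    seed=42,
)
```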
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
larabe/tester
|
larabe
| 2023-09-01T09:39:20Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-31T23:02:39Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: tester
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tester
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
hetpatel-7/ppo-LunarLander-v2
|
hetpatel-7
| 2023-09-01T09:24:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-01T09:24:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.20 +/- 16.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
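A minimal sketch of what that code could look like (the checkpoint filename is an assumption; check the repository's file list for the actual `.zip`):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the trained checkpoint from the Hub and load it back into SB3.
checkpoint = load_from_hub(repo_id="hetpatel-7/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```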
|
PawanKrGunjan/whisper-tiny-finetuned-gtzan
|
PawanKrGunjan
| 2023-09-01T09:20:27Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-01T02:52:07Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-tiny-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.53
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3365
- Accuracy: 0.53
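A minimal inference sketch with the 🤗 Transformers pipeline (the audio path is a placeholder):
```python
from transformers import pipeline
# Load the fine-tuned checkpoint as an audio-classification pipeline.
classifier = pipeline("audio-classification", model="PawanKrGunjan/whisper-tiny-finetuned-gtzan")
predictions = classifier("path/to/song.wav")  # placeholder path to a music clip
print(predictions)  # list of {"label": ..., "score": ...} dicts, one per genre
```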
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3484 | 1.0 | 113 | 1.8521 | 0.26 |
| 1.9419 | 2.0 | 226 | 1.9107 | 0.3 |
| 1.8627 | 3.0 | 339 | 1.5300 | 0.49 |
| 1.8178 | 4.0 | 452 | 1.5152 | 0.41 |
| 1.5341 | 5.0 | 565 | 1.3365 | 0.53 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jbilcke-hf/sdxl-zelda64
|
jbilcke-hf
| 2023-09-01T09:19:44Z | 535 | 10 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-31T16:58:10Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <s0><s1>
inference: false
---
# sdxl-zelda64 LoRA by Julian BILCKE (HF: [jbilcke-hf](https://huggingface.co/jbilcke-hf), Replicate: [jbilcke](https://replicate.com/jbilcke))
### A SDXL LoRA inspired by Zelda games on Nintendo 64

## Inference with Replicate API
Grab your replicate token [here](https://replicate.com/account)
```bash
pip install replicate
export REPLICATE_API_TOKEN=r8_*************************************
```
```py
import replicate
output = replicate.run(
"sdxl-zelda64@sha256:435913219645a80ee6743ca500940ab8708889172ca5c4c71bbb701309bb4a60",
input={"prompt": "Link working as a pizza delivery driver, on a scooter, in new york, in the style of TOK"}
)
print(output)
```
You may also do inference via the API with Node.js or curl, and locally with COG and Docker, [check out the Replicate API page for this model](https://replicate.com/jbilcke/sdxl-zelda64/api)
## Inference with 🧨 diffusers
Replicate SDXL LoRAs are trained with Pivotal Tuning, which combines training a concept via Dreambooth LoRA with training a new token with Textual Inversion.
As `diffusers` doesn't yet support textual inversion for SDXL, we will use cog-sdxl `TokenEmbeddingsHandler` class.
The trigger tokens for your prompt will be `<s0><s1>`
```shell
pip install diffusers transformers accelerate safetensors huggingface_hub
git clone https://github.com/replicate/cog-sdxl cog_sdxl
```
```py
import torch
from huggingface_hub import hf_hub_download
from diffusers import DiffusionPipeline
from cog_sdxl.dataset_and_utils import TokenEmbeddingsHandler
from diffusers.models import AutoencoderKL
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
variant="fp16",
).to("cuda")
pipe.load_lora_weights("jbilcke-hf/sdxl-zelda64", weight_name="lora.safetensors")
text_encoders = [pipe.text_encoder, pipe.text_encoder_2]
tokenizers = [pipe.tokenizer, pipe.tokenizer_2]
embedding_path = hf_hub_download(repo_id="jbilcke-hf/sdxl-zelda64", filename="embeddings.pti", repo_type="model")
embhandler = TokenEmbeddingsHandler(text_encoders, tokenizers)
embhandler.load_embeddings(embedding_path)
prompt="Link working as a pizza delivery driver, on a scooter, in new york, in the style of <s0><s1>"
images = pipe(
prompt,
cross_attention_kwargs={"scale": 0.8},
).images
#your output image
images[0]
```
|
nightdude/config_80091
|
nightdude
| 2023-09-01T09:17:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T09:16:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
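Equivalently, a minimal sketch of this config expressed in code (how it is then passed to the base model depends on the task, which this card does not specify):
```python
import torch
from transformers import BitsAndBytesConfig
# Mirror the 4-bit NF4 bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```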
### Framework versions
- PEFT 0.5.0.dev0
|
chrisluo5311/falcon-7b-sharded-bf16-english-quote-qlora
|
chrisluo5311
| 2023-09-01T09:09:54Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T04:03:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
bongo2112/sdxl-db-richtilebati
|
bongo2112
| 2023-09-01T09:09:19Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-01T09:09:18Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of richtilebati roof sheet
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
vikasvmane/myfirstDreamboothModel
|
vikasvmane
| 2023-09-01T08:51:13Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-01T03:09:05Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of VM
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
lynguyenminh/test-base
|
lynguyenminh
| 2023-09-01T08:51:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-31T07:12:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: test-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-base
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1445.1470
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6920.0094 | 1.0 | 50 | 8437.0029 | 1.0711 |
| 4915.5925 | 2.0 | 100 | 3042.6738 | 1.0 |
| 2169.443 | 3.0 | 150 | 1845.2919 | 1.0 |
| 1729.4778 | 4.0 | 200 | 1625.1453 | 1.0 |
| 1533.998 | 5.0 | 250 | 1445.1470 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Bazaar/cv_forest_pest_detection
|
Bazaar
| 2023-09-01T08:50:43Z | 198 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-01T08:41:58Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cv_forest_pest_detection
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8042704463005066
---
# cv_forest_pest_detection
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### ActiasDubernardiOberthur

#### ActiasSeleneNingpoanaFelder

#### AgriusConvolvuli

#### AmsactaLactinea

#### AnoplophoraChinensisForster

#### AnoplophoraGlabripennisMotschulsky

#### AprionaGermari

#### AprionaSwainsoni

#### ArnpelophagaRubiginosaBremerEtGrey

#### AromiaBungiiFald

#### AtaturaIlia

#### BatoceraHorsfieldiHope

#### ByasaAlcinousKlug

#### CalospilosSuspectaWarren

#### CamptolomaInteriorata

#### CarposinaNiponensisWalsingham

#### CatharsiusMolossusLinnaeus

#### CeruraMencianaMoore

#### ChalcophoraJaponica

#### CicadellaViridis

#### ClanisBilineata

#### CletusPunctigerDallas

#### ClosteraAnachoreta

#### ClosteraAnastomosis

#### CnidocampaFlavescens

#### ConogethesPunctiferalis

#### CorythuchaCiliata

#### CreatonotusTransiens

#### CryptotympanaAtrataFabricius

#### CyclidiaSubstigmariaSubstigmaria

#### CyclopeltaObscura

#### CystidiaCouaggariaGuenee

#### DanausChrysippusLinnaeus

#### DanausGenutia

#### DasychiraGroteiMoore

#### DendrolimusPunctatusWalker

#### DiaphaniaPerspectalis

#### DicranocephalusWallichi

#### DictyopharaSinica

#### DorcusTitanusPlatymelus

#### DrosichaCorpulenta

#### EligmaNarcissus

#### EnmonodiaVespertiliFabricius

#### ErthesinaFullo

#### EuricaniaClara

#### EurostusValidusDallas

#### EurydemaDominulus

#### GeishaDistinctissima

#### GraphiumSarpedonLinnaeue

#### GraphosomaRubrolineata

#### HalyomorphaPicusFabricius

#### HestinaAssimilis

#### HistiaRhodopeCramer

#### HyphantriaCunea

#### JacobiascaFormosana

#### LatoriaConsociaWalker

#### LethocerusDeyrolliVuillefroy

#### LocastraMuscosalisWalker

#### LycormaDelicatula

#### MegopisSinicaSinicaWhite

#### MeimunaMongolica

#### MicromelalophaTroglodyta

#### MiltochristaStriata

#### MonochamusAlternatusHope

#### Ophthalmitisirrorataria

#### OrthagaAchatina

#### PapilioBianorCramer

#### PapilioMachaonLinnaeus

#### PapilioPolytesLinnaeus

#### PapilioProtenorCramer

#### PapilioXuthusLinnaeus

#### ParocneriaFurva

#### PergesaElpenorlewisi

#### PidorusAtratusButter

#### PierisRapae

#### PlagioderaVersicolora

#### PlatypleuraKaempferi

#### PlinachtusBicoloripesScott

#### PlinachtusDissimilis

#### PolygoniaCaureum

#### PolyuraNarcaeaHewitson

#### PorthesiaSimilis

#### ProdeniaLitura

#### ProtaetiaBrevitarsisLewis

#### PsilogrammaMenephron

#### RicaniaSublimata

#### RiptortusPedestris

#### SemanotusBifasciatusBifasciatus

#### SericinusMontelusGrey

#### SinnaExtrema

#### SmerinthusPlanusWalker

#### SpeiredoniaRetorta

#### SpilarctiaRobusta

#### SpilarctiaSubcarnea

#### StilprotiaSalicis

#### TheretraJaponica

#### ThoseaSinensisWalker

#### UropyiaMeticulodina

#### VanessaIndicaHerbst

|
Toflamus/GPT-2_para3M_2epoch_256
|
Toflamus
| 2023-09-01T08:42:02Z | 154 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-01T00:27:15Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: GPT-2_para3M_512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT-2_para3M_2epoch_256
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
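For reference, a hedged `transformers.TrainingArguments` reconstruction of the settings above might look like this; the output directory and evaluation settings are assumptions, not taken from the card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GPT-2_para3M_2epoch_256",  # assumed output directory name
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,          # 8 * 8 = total train batch size of 64
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=2,
    seed=42,
    evaluation_strategy="steps",            # assumption; the table below logs eval every 500 steps
    eval_steps=500,
)
```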
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1873 | 0.01 | 500 | 4.0187 |
| 3.5461 | 0.02 | 1000 | 3.4287 |
| 3.2706 | 0.04 | 1500 | 3.1495 |
| 3.105 | 0.05 | 2000 | 2.9773 |
| 2.9885 | 0.06 | 2500 | 2.8566 |
| 2.8931 | 0.07 | 3000 | 2.7720 |
| 2.8307 | 0.08 | 3500 | 2.7016 |
| 2.7912 | 0.09 | 4000 | 2.6474 |
| 2.7295 | 0.11 | 4500 | 2.5972 |
| 2.6927 | 0.12 | 5000 | 2.5641 |
| 2.6756 | 0.13 | 5500 | 2.5248 |
| 2.6536 | 0.14 | 6000 | 2.4972 |
| 2.6186 | 0.15 | 6500 | 2.4730 |
| 2.5947 | 0.17 | 7000 | 2.4492 |
| 2.591 | 0.18 | 7500 | 2.4313 |
| 2.5706 | 0.19 | 8000 | 2.4172 |
| 2.5441 | 0.2 | 8500 | 2.3991 |
| 2.5266 | 0.21 | 9000 | 2.3838 |
| 2.5259 | 0.22 | 9500 | 2.3740 |
| 2.5173 | 0.24 | 10000 | 2.3629 |
| 2.5122 | 0.25 | 10500 | 2.3549 |
| 2.5004 | 0.26 | 11000 | 2.3409 |
| 2.4902 | 0.27 | 11500 | 2.3364 |
| 2.4735 | 0.28 | 12000 | 2.3242 |
| 2.4784 | 0.29 | 12500 | 2.3193 |
| 2.4754 | 0.31 | 13000 | 2.3126 |
| 2.4587 | 0.32 | 13500 | 2.3077 |
| 2.4613 | 0.33 | 14000 | 2.3050 |
| 2.4562 | 0.34 | 14500 | 2.2968 |
| 2.4422 | 0.35 | 15000 | 2.2913 |
| 2.4307 | 0.37 | 15500 | 2.2870 |
| 2.4339 | 0.38 | 16000 | 2.2814 |
| 2.445 | 0.39 | 16500 | 2.2801 |
| 2.4257 | 0.4 | 17000 | 2.2747 |
| 2.425 | 0.41 | 17500 | 2.2709 |
| 2.4095 | 0.42 | 18000 | 2.2672 |
| 2.4137 | 0.44 | 18500 | 2.2632 |
| 2.4284 | 0.45 | 19000 | 2.2601 |
| 2.419 | 0.46 | 19500 | 2.2569 |
| 2.4221 | 0.47 | 20000 | 2.2504 |
| 2.3951 | 0.48 | 20500 | 2.2507 |
| 2.4054 | 0.5 | 21000 | 2.2515 |
| 2.3977 | 0.51 | 21500 | 2.2442 |
| 2.4009 | 0.52 | 22000 | 2.2422 |
| 2.3941 | 0.53 | 22500 | 2.2388 |
| 2.3909 | 0.54 | 23000 | 2.2349 |
| 2.4016 | 0.55 | 23500 | 2.2380 |
| 2.389 | 0.57 | 24000 | 2.2326 |
| 2.3864 | 0.58 | 24500 | 2.2287 |
| 2.3795 | 0.59 | 25000 | 2.2285 |
| 2.3817 | 0.6 | 25500 | 2.2266 |
| 2.3789 | 0.61 | 26000 | 2.2256 |
| 2.3801 | 0.62 | 26500 | 2.2210 |
| 2.3687 | 0.64 | 27000 | 2.2189 |
| 2.378 | 0.65 | 27500 | 2.2194 |
| 2.3735 | 0.66 | 28000 | 2.2157 |
| 2.3758 | 0.67 | 28500 | 2.2142 |
| 2.3616 | 0.68 | 29000 | 2.2133 |
| 2.3731 | 0.7 | 29500 | 2.2085 |
| 2.3606 | 0.71 | 30000 | 2.2115 |
| 2.3516 | 0.72 | 30500 | 2.2072 |
| 2.3551 | 0.73 | 31000 | 2.2067 |
| 2.3626 | 0.74 | 31500 | 2.2033 |
| 2.3516 | 0.75 | 32000 | 2.2031 |
| 2.3658 | 0.77 | 32500 | 2.2008 |
| 2.3554 | 0.78 | 33000 | 2.1992 |
| 2.3524 | 0.79 | 33500 | 2.1988 |
| 2.3509 | 0.8 | 34000 | 2.1996 |
| 2.3474 | 0.81 | 34500 | 2.1949 |
| 2.3431 | 0.83 | 35000 | 2.1943 |
| 2.3413 | 0.84 | 35500 | 2.1907 |
| 2.3592 | 0.85 | 36000 | 2.1917 |
| 2.3636 | 0.86 | 36500 | 2.1919 |
| 2.3529 | 0.87 | 37000 | 2.1881 |
| 2.3371 | 0.88 | 37500 | 2.1875 |
| 2.3413 | 0.9 | 38000 | 2.1856 |
| 2.3463 | 0.91 | 38500 | 2.1839 |
| 2.3303 | 0.92 | 39000 | 2.1859 |
| 2.3432 | 0.93 | 39500 | 2.1790 |
| 2.3455 | 0.94 | 40000 | 2.1801 |
| 2.344 | 0.95 | 40500 | 2.1761 |
| 2.3442 | 0.97 | 41000 | 2.1759 |
| 2.3331 | 0.98 | 41500 | 2.1760 |
| 2.3391 | 0.99 | 42000 | 2.1748 |
| 2.3275 | 1.0 | 42500 | 2.1760 |
| 2.3308 | 1.01 | 43000 | 2.1712 |
| 2.3191 | 1.03 | 43500 | 2.1727 |
| 2.3182 | 1.04 | 44000 | 2.1682 |
| 2.3184 | 1.05 | 44500 | 2.1683 |
| 2.3177 | 1.06 | 45000 | 2.1668 |
| 2.3163 | 1.07 | 45500 | 2.1643 |
| 2.321 | 1.08 | 46000 | 2.1631 |
| 2.3164 | 1.1 | 46500 | 2.1655 |
| 2.3231 | 1.11 | 47000 | 2.1631 |
| 2.3139 | 1.12 | 47500 | 2.1591 |
| 2.3223 | 1.13 | 48000 | 2.1588 |
| 2.3133 | 1.14 | 48500 | 2.1588 |
| 2.2995 | 1.16 | 49000 | 2.1569 |
| 2.308 | 1.17 | 49500 | 2.1578 |
| 2.3062 | 1.18 | 50000 | 2.1539 |
| 2.3203 | 1.19 | 50500 | 2.1538 |
| 2.3116 | 1.2 | 51000 | 2.1526 |
| 2.294 | 1.21 | 51500 | 2.1520 |
| 2.2941 | 1.23 | 52000 | 2.1499 |
| 2.3053 | 1.24 | 52500 | 2.1502 |
| 2.3154 | 1.25 | 53000 | 2.1507 |
| 2.3057 | 1.26 | 53500 | 2.1485 |
| 2.3106 | 1.27 | 54000 | 2.1464 |
| 2.3035 | 1.28 | 54500 | 2.1457 |
| 2.304 | 1.3 | 55000 | 2.1445 |
| 2.2985 | 1.31 | 55500 | 2.1439 |
| 2.296 | 1.32 | 56000 | 2.1421 |
| 2.2917 | 1.33 | 56500 | 2.1411 |
| 2.2936 | 1.34 | 57000 | 2.1406 |
| 2.2866 | 1.36 | 57500 | 2.1383 |
| 2.2973 | 1.37 | 58000 | 2.1396 |
| 2.2865 | 1.38 | 58500 | 2.1378 |
| 2.2929 | 1.39 | 59000 | 2.1370 |
| 2.2858 | 1.4 | 59500 | 2.1351 |
| 2.2857 | 1.41 | 60000 | 2.1350 |
| 2.3019 | 1.43 | 60500 | 2.1338 |
| 2.289 | 1.44 | 61000 | 2.1330 |
| 2.2874 | 1.45 | 61500 | 2.1318 |
| 2.2858 | 1.46 | 62000 | 2.1305 |
| 2.2875 | 1.47 | 62500 | 2.1298 |
| 2.2859 | 1.49 | 63000 | 2.1294 |
| 2.28 | 1.5 | 63500 | 2.1275 |
| 2.2866 | 1.51 | 64000 | 2.1277 |
| 2.2851 | 1.52 | 64500 | 2.1281 |
| 2.2806 | 1.53 | 65000 | 2.1258 |
| 2.2889 | 1.54 | 65500 | 2.1245 |
| 2.2745 | 1.56 | 66000 | 2.1249 |
| 2.2739 | 1.57 | 66500 | 2.1230 |
| 2.2853 | 1.58 | 67000 | 2.1226 |
| 2.2773 | 1.59 | 67500 | 2.1228 |
| 2.2742 | 1.6 | 68000 | 2.1214 |
| 2.2656 | 1.61 | 68500 | 2.1200 |
| 2.2756 | 1.63 | 69000 | 2.1194 |
| 2.2806 | 1.64 | 69500 | 2.1193 |
| 2.271 | 1.65 | 70000 | 2.1186 |
| 2.2671 | 1.66 | 70500 | 2.1185 |
| 2.2718 | 1.67 | 71000 | 2.1168 |
| 2.2781 | 1.69 | 71500 | 2.1172 |
| 2.2744 | 1.7 | 72000 | 2.1164 |
| 2.2744 | 1.71 | 72500 | 2.1156 |
| 2.2603 | 1.72 | 73000 | 2.1154 |
| 2.2703 | 1.73 | 73500 | 2.1141 |
| 2.267 | 1.74 | 74000 | 2.1141 |
| 2.2614 | 1.76 | 74500 | 2.1141 |
| 2.263 | 1.77 | 75000 | 2.1133 |
| 2.2668 | 1.78 | 75500 | 2.1128 |
| 2.2642 | 1.79 | 76000 | 2.1128 |
| 2.2637 | 1.8 | 76500 | 2.1128 |
| 2.2692 | 1.82 | 77000 | 2.1118 |
| 2.2631 | 1.83 | 77500 | 2.1117 |
| 2.2567 | 1.84 | 78000 | 2.1116 |
| 2.2707 | 1.85 | 78500 | 2.1112 |
| 2.2707 | 1.86 | 79000 | 2.1109 |
| 2.2664 | 1.87 | 79500 | 2.1114 |
| 2.266 | 1.89 | 80000 | 2.1113 |
| 2.2645 | 1.9 | 80500 | 2.1108 |
| 2.2767 | 1.91 | 81000 | 2.1106 |
| 2.274 | 1.92 | 81500 | 2.1102 |
| 2.2587 | 1.93 | 82000 | 2.1102 |
| 2.2736 | 1.94 | 82500 | 2.1100 |
| 2.2633 | 1.96 | 83000 | 2.1102 |
| 2.2652 | 1.97 | 83500 | 2.1100 |
| 2.2655 | 1.98 | 84000 | 2.1101 |
| 2.2683 | 1.99 | 84500 | 2.1100 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.2
|
Toflamus/GPT-2_para3M
|
Toflamus
| 2023-09-01T08:39:46Z | 165 | 3 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-26T06:26:19Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: GPT-2_para3M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT-2_para3M
This model is a version of [gpt2](https://huggingface.co/gpt2) pretrained on the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3207
## Model description
More information needed
## Intended uses & limitations
The limitations of this model are mainly two:
* The model has only around 3.6 million parameters, which is not large; as a result, it cannot generate text well across all domains.
* The dataset is composed only of stories, which greatly limits the model; only stories can be generated (see the generation sketch below).
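A minimal generation sketch, assuming the checkpoint loads with the standard GPT-2 classes and ships with its tokenizer; the prompt and sampling settings are illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Toflamus/GPT-2_para3M")
model = AutoModelForCausalLM.from_pretrained("Toflamus/GPT-2_para3M")

# The model is trained only on TinyStories, so story-style prompts work best.
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```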
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 9.6976 | 0.01 | 100 | 7.7754 |
| 6.488 | 0.02 | 200 | 5.7795 |
| 5.3705 | 0.03 | 300 | 4.8609 |
| 4.5632 | 0.04 | 400 | 4.2544 |
| 4.141 | 0.05 | 500 | 3.9425 |
| 3.902 | 0.06 | 600 | 3.7189 |
| 3.7074 | 0.07 | 700 | 3.5514 |
| 3.5716 | 0.08 | 800 | 3.4291 |
| 3.4695 | 0.08 | 900 | 3.3253 |
| 3.3847 | 0.09 | 1000 | 3.2311 |
| 3.2974 | 0.1 | 1100 | 3.1595 |
| 3.2318 | 0.11 | 1200 | 3.0909 |
| 3.1698 | 0.12 | 1300 | 3.0329 |
| 3.1258 | 0.13 | 1400 | 2.9879 |
| 3.0802 | 0.14 | 1500 | 2.9396 |
| 3.046 | 0.15 | 1600 | 2.9017 |
| 3.0047 | 0.16 | 1700 | 2.8652 |
| 2.9701 | 0.17 | 1800 | 2.8320 |
| 2.9425 | 0.18 | 1900 | 2.8048 |
| 2.9141 | 0.19 | 2000 | 2.7757 |
| 2.8896 | 0.2 | 2100 | 2.7515 |
| 2.8667 | 0.21 | 2200 | 2.7263 |
| 2.8443 | 0.22 | 2300 | 2.7066 |
| 2.8288 | 0.23 | 2400 | 2.6815 |
| 2.8044 | 0.24 | 2500 | 2.6620 |
| 2.7886 | 0.25 | 2600 | 2.6471 |
| 2.7732 | 0.25 | 2700 | 2.6283 |
| 2.7576 | 0.26 | 2800 | 2.6101 |
| 2.7479 | 0.27 | 2900 | 2.5978 |
| 2.7256 | 0.28 | 3000 | 2.5819 |
| 2.7179 | 0.29 | 3100 | 2.5688 |
| 2.707 | 0.3 | 3200 | 2.5595 |
| 2.6921 | 0.31 | 3300 | 2.5471 |
| 2.6809 | 0.32 | 3400 | 2.5329 |
| 2.6779 | 0.33 | 3500 | 2.5232 |
| 2.663 | 0.34 | 3600 | 2.5154 |
| 2.6554 | 0.35 | 3700 | 2.5030 |
| 2.6437 | 0.36 | 3800 | 2.4967 |
| 2.6346 | 0.37 | 3900 | 2.4859 |
| 2.6293 | 0.38 | 4000 | 2.4768 |
| 2.6221 | 0.39 | 4100 | 2.4709 |
| 2.6178 | 0.4 | 4200 | 2.4623 |
| 2.6076 | 0.41 | 4300 | 2.4586 |
| 2.6025 | 0.41 | 4400 | 2.4492 |
| 2.5907 | 0.42 | 4500 | 2.4409 |
| 2.5896 | 0.43 | 4600 | 2.4369 |
| 2.5816 | 0.44 | 4700 | 2.4316 |
| 2.5783 | 0.45 | 4800 | 2.4256 |
| 2.577 | 0.46 | 4900 | 2.4204 |
| 2.5685 | 0.47 | 5000 | 2.4150 |
| 2.567 | 0.48 | 5100 | 2.4093 |
| 2.5564 | 0.49 | 5200 | 2.4059 |
| 2.5556 | 0.5 | 5300 | 2.4012 |
| 2.5496 | 0.51 | 5400 | 2.3997 |
| 2.545 | 0.52 | 5500 | 2.3956 |
| 2.5473 | 0.53 | 5600 | 2.3905 |
| 2.5389 | 0.54 | 5700 | 2.3856 |
| 2.5373 | 0.55 | 5800 | 2.3818 |
| 2.5318 | 0.56 | 5900 | 2.3787 |
| 2.5313 | 0.57 | 6000 | 2.3751 |
| 2.5285 | 0.58 | 6100 | 2.3722 |
| 2.5318 | 0.58 | 6200 | 2.3687 |
| 2.5229 | 0.59 | 6300 | 2.3666 |
| 2.5194 | 0.6 | 6400 | 2.3632 |
| 2.5174 | 0.61 | 6500 | 2.3598 |
| 2.5169 | 0.62 | 6600 | 2.3567 |
| 2.511 | 0.63 | 6700 | 2.3552 |
| 2.5093 | 0.64 | 6800 | 2.3546 |
| 2.5114 | 0.65 | 6900 | 2.3528 |
| 2.5064 | 0.66 | 7000 | 2.3492 |
| 2.507 | 0.67 | 7100 | 2.3483 |
| 2.502 | 0.68 | 7200 | 2.3445 |
| 2.4964 | 0.69 | 7300 | 2.3448 |
| 2.4999 | 0.7 | 7400 | 2.3423 |
| 2.4961 | 0.71 | 7500 | 2.3407 |
| 2.489 | 0.72 | 7600 | 2.3386 |
| 2.4926 | 0.73 | 7700 | 2.3384 |
| 2.4919 | 0.74 | 7800 | 2.3365 |
| 2.491 | 0.74 | 7900 | 2.3349 |
| 2.4893 | 0.75 | 8000 | 2.3333 |
| 2.4909 | 0.76 | 8100 | 2.3318 |
| 2.4862 | 0.77 | 8200 | 2.3305 |
| 2.4884 | 0.78 | 8300 | 2.3299 |
| 2.49 | 0.79 | 8400 | 2.3280 |
| 2.4788 | 0.8 | 8500 | 2.3286 |
| 2.4865 | 0.81 | 8600 | 2.3272 |
| 2.4823 | 0.82 | 8700 | 2.3263 |
| 2.4844 | 0.83 | 8800 | 2.3255 |
| 2.4826 | 0.84 | 8900 | 2.3251 |
| 2.4844 | 0.85 | 9000 | 2.3243 |
| 2.4798 | 0.86 | 9100 | 2.3231 |
| 2.4864 | 0.87 | 9200 | 2.3231 |
| 2.4755 | 0.88 | 9300 | 2.3228 |
| 2.4735 | 0.89 | 9400 | 2.3228 |
| 2.4786 | 0.9 | 9500 | 2.3224 |
| 2.4791 | 0.91 | 9600 | 2.3222 |
| 2.4809 | 0.91 | 9700 | 2.3214 |
| 2.4778 | 0.92 | 9800 | 2.3213 |
| 2.4777 | 0.93 | 9900 | 2.3211 |
| 2.4798 | 0.94 | 10000 | 2.3209 |
| 2.4768 | 0.95 | 10100 | 2.3212 |
| 2.4808 | 0.96 | 10200 | 2.3209 |
| 2.4762 | 0.97 | 10300 | 2.3208 |
| 2.4778 | 0.98 | 10400 | 2.3208 |
| 2.4816 | 0.99 | 10500 | 2.3207 |
| 2.4728 | 1.0 | 10600 | 2.3207 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.2
|
WizardLMTeam/WizardMath-7B-V1.0
|
WizardLMTeam
| 2023-09-01T08:18:09Z | 3,710 | 52 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-11T04:32:31Z |
---
license: llama2
---
## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
</font>
**Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath
**Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128
**Discord**: https://discord.gg/VZjjHtWrKs
## Comparing WizardMath-V1.0 with Other LLMs.
🔥 The following figure shows that our **WizardMath-70B-V1.0 attains the fifth position in this benchmark**, surpassing ChatGPT (81.6 vs. 80.8), Claude Instant (81.6 vs. 80.9), and PaLM 2 540B (81.6 vs. 80.7).
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardMath/images/wizardmath_gsm8k.png" alt="WizardMath" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
❗<b>Note on model system prompt usage:</b>
Please strictly use **the same system prompts** as we do; we do not guarantee the accuracy of the **quantized versions**.
**Default version:**
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
**CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
## Inference WizardMath Demo Script
We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
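As a rough illustration only (not the official demo), a hedged 🤗 Transformers sketch using the default prompt template above might look like this; the model id follows the links above, and the generation settings and example question are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardMath-7B-V1.0"  # as linked above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

instruction = "James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```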
❗<b>To address common concerns about the dataset:</b>
Recently, there have been clear changes in our organization's overall open-source policy and regulations regarding code, data, and models.
Despite this, we have worked hard to release the model weights first; the data requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to release it publicly without authorization.
Thank you for your understanding.
## Citation
Please cite the repo if you use the data, method or code in this repo.
```
@article{luo2023wizardmath,
title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
journal={arXiv preprint arXiv:2308.09583},
year={2023}
}
```
|
Devis2awe/llama2-qlora-finetunined-query-resolver-mk2
|
Devis2awe
| 2023-09-01T08:14:00Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-31T08:14:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
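A sketch of the equivalent `BitsAndBytesConfig`, for illustration only; the base model id is an assumption (the adapter's tags suggest a Llama-family base):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Assumed base checkpoint; replace with the actual base model used for this adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```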
### Framework versions
- PEFT 0.6.0.dev0
|
mdance/bert-finetuned-ner
|
mdance
| 2023-09-01T08:11:38Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-01T04:05:31Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9348652669862787
- name: Recall
type: recall
value: 0.9516997643890945
- name: F1
type: f1
value: 0.9432074055541656
- name: Accuracy
type: accuracy
value: 0.986504385706717
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
- Precision: 0.9349
- Recall: 0.9517
- F1: 0.9432
- Accuracy: 0.9865
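A minimal usage sketch with the 🤗 Transformers pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mdance/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Hugging Face is a company based in New York City."))
```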
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0764 | 1.0 | 1756 | 0.0867 | 0.9092 | 0.9303 | 0.9196 | 0.9794 |
| 0.032 | 2.0 | 3512 | 0.0603 | 0.9266 | 0.9453 | 0.9359 | 0.9856 |
| 0.0181 | 3.0 | 5268 | 0.0615 | 0.9349 | 0.9517 | 0.9432 | 0.9865 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
kyungmin011029/category_last
|
kyungmin011029
| 2023-09-01T08:10:57Z | 62 | 1 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T08:09:52Z |
---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_keras_callback
model-index:
- name: category_last
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# category_last
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
kyungmin011029/code_last
|
kyungmin011029
| 2023-09-01T08:10:34Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T08:09:52Z |
---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_keras_callback
model-index:
- name: code_last
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# code_last
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
s3nh/Sentdex-WSB-GPT-13B-GGUF
|
s3nh
| 2023-09-01T08:09:40Z | 42 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-01T07:51:55Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Sentdex/WSB-GPT-13B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
* Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
```python
from ctransformers import AutoModelForCausalLM

# `output_dir` and `gguf_file` are placeholders for the local directory and
# GGUF filename of the downloaded model files.
llm = AutoModelForCausalLM.from_pretrained(output_dir, model_file=gguf_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input,
          max_new_tokens=256,
          temperature=0.9,
          top_p=0.7))
```
# Original model card
|
nadcy/bloomz-1b7_MONA_LORA
|
nadcy
| 2023-09-01T08:02:26Z | 6 | 1 |
peft
|
[
"peft",
"arxiv:2203.00148",
"region:us"
] | null | 2023-08-01T07:47:41Z |
---
library_name: peft
---
# BLOOMZ Character LoRA (Mona)
BLOOMZ is a family of cross-lingual models trained by the open-source community that can follow human instructions in dozens of languages. After fine-tuning, these models generalize across languages to unseen tasks.
In this project, we fine-tune the model to generate the character's dialogue from three pieces of environmental information (weather, special day, and time of day) plus a schedule reminder given as a string.
For example, given the input:
human:天气:晴天,日期:双休日,时间:凌晨,提示我去做:买咖啡assistant:
(weather: sunny, date: weekend, time: early morning, remind me to: buy coffee)
the model outputs the character's schedule reminder:
双休日的凌晨,阳光照耀着大地,就像星辰在闪烁。你打算去买咖啡,记得带上你的咖啡杯。
(In the early morning of a weekend, sunlight shines over the earth like twinkling stars. You plan to buy coffee; remember to bring your coffee cup.)
## Prompt format
Specifically:
"Weather" (天气) describes conditions such as sunny, cloudy, rainy, snowy, foggy, or hazy.
"Date" (日期) covers holidays and special days such as Spring Festival, New Year's Day, spring break, or summer vacation.
"Time" (时间) gives the part of the day, such as morning, noon, or evening.
"Schedule reminder" (日程提示) describes the item to be reminded of, e.g. "need to catch a flight at 9 o'clock" or "there is an important meeting at 3 pm".
During the instruction fine-tuning stage, the training data follows a strict format:
human:天气:[...],日期:[...],时间:[...],提示我去做:[...]assistant:[...]
We therefore recommend using the __same prompt format__ at inference time. We also found that instruction tuning on this single task helps improve other conversational abilities; see the __Model generalization__ section.
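For reference, a minimal inference sketch in the documented prompt format; it assumes the adapter applies to the `bigscience/bloomz-1b7` base model (inferred from the repository name) and uses int8 loading as described in the quantization section:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model: bigscience/bloomz-1b7 (inferred from the adapter name).
base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloomz-1b7", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nadcy/bloomz-1b7_MONA_LORA")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-1b7")

# Prompt in the strict format documented above.
prompt = "human:天气:晴天,日期:双休日,时间:凌晨,提示我去做:买咖啡assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```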
## Training data
The training procedure is inspired by the paper [LIMA: Less Is More for Alignment](https://arxiv.org/abs/2203.00148). Like LIMA, BLOOMZ-LORA focuses on the instruction fine-tuning stage of large language model training and tries to accomplish the task with as few high-quality instructions as possible.
We used GPT-4 to obtain high-quality instruction fine-tuning data: we showed it several reference samples and used a prompt template to randomly generate a series of inputs, from which GPT-4 generated 1,000 instructions. A data sample is shown below:
{"text":"human:今天天气:雷暴,日期:中秋节,时间:深夜,提示我去做:'去科技展览会'assistant:在深夜雷鸣电闪的中秋节里,你计划去科技展览会。请牢记,正是因为无法更改,无可违逆,只能接受,命运才会被称之为命运。end"}
(The sample reads: "human: today's weather: thunderstorm, date: Mid-Autumn Festival, time: late at night, remind me to: 'go to the technology exhibition' assistant: On this Mid-Autumn Festival night of thunder and lightning, you plan to go to the technology exhibition. Remember: it is precisely because it cannot be changed or defied, and can only be accepted, that fate is called fate. end")
The character we chose to learn is Mona from the game Genshin Impact; the character's initial corpus can be found on the official wiki.
## Quantization, performance, and hardware requirements
The LoRA model can be deployed on edge devices with only 4 GB of memory, providing low-latency, personalized service directly on the user's device without a high-bandwidth internet connection or high-performance servers.
In testing, BLOOMZ-LORA shows strong performance: the generated dialogue closely matches the style and tone of the character "Mona" from Genshin Impact. It learns to follow the specific response format from a small number of examples in the training data and generalizes well to unseen tasks that do not appear in the training data.
We use the bitsandbytes int8 quantization option during both training and testing (total token length < 200, so we do not discuss the cost of the transformer's quadratic complexity on long sequences).
Running the model requires < 4 GB of memory; inference has been tested on t5, Jetson Nano, and A100.
## Model generalization
We find that training even on the very narrow task above also substantially improves performance on other tasks, for example:
Base model output:
human:今天天气很好,我应该去做什么assistant: I should go to work
(prompt: "The weather is nice today, what should I do?")
After fine-tuning:
human:今天天气很好,我应该去做什么assistant:今天是个好天气,去外面走走吧,去外面走走,去享受阳光吧。
(response: "It's a nice day today, go out for a walk and enjoy the sunshine.")
Fine-tuning helps mitigate the base model's tendency to answer in the wrong language, and it produces richer content.
## Conclusion
1. BLOOMZ-LORA shows that almost all of a large language model's knowledge is learned during pre-training, and only a small amount of instruction-tuning data is needed to teach the model to produce high-quality output. It lays a solid foundation for building AI personal assistants based on popular fictional characters.
2. Distilling a large model for fine-tuning is extremely cheap: the GPT-4 annotation cost was under 10 US dollars, and training the LoRA model on a single A100 finishes within 10 minutes. Moreover, with more careful planning of the data, the model can acquire richer abilities, building on the strong prior of the pre-trained model.
3. We will later add test results on how performance changes with model size, along with training details, and will try redesigning the distribution of the 1,000 instructions to study how task design affects model performance.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
## Online demo link; tested on t5 with standard memory, runs in about 5 minutes (Google Colab, requires access to the external internet)
https://colab.research.google.com/drive/12zKnvIAEqGCt2Qi_IS99GfTBbrzdMX8L?usp=sharing

|
Hamzaabbas77/FINAL2-GPT2
|
Hamzaabbas77
| 2023-09-01T07:56:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T07:56:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
tomjam/my_awesome_peft_model
|
tomjam
| 2023-09-01T07:42:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T07:41:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
dkqjrm/20230901120149
|
dkqjrm
| 2023-09-01T07:40:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T03:02:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230901120149'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230901120149
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1576
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.1594 | 0.5 |
| 0.1863 | 2.0 | 680 | 0.1639 | 0.5 |
| 0.1705 | 3.0 | 1020 | 0.1604 | 0.5 |
| 0.1705 | 4.0 | 1360 | 0.1572 | 0.5 |
| 0.1659 | 5.0 | 1700 | 0.1604 | 0.5 |
| 0.1635 | 6.0 | 2040 | 0.1674 | 0.5 |
| 0.1635 | 7.0 | 2380 | 0.1568 | 0.5 |
| 0.1633 | 8.0 | 2720 | 0.1633 | 0.5 |
| 0.1599 | 9.0 | 3060 | 0.1611 | 0.5 |
| 0.1599 | 10.0 | 3400 | 0.1636 | 0.5 |
| 0.1615 | 11.0 | 3740 | 0.1574 | 0.5 |
| 0.1606 | 12.0 | 4080 | 0.1632 | 0.5 |
| 0.1606 | 13.0 | 4420 | 0.1579 | 0.5 |
| 0.1594 | 14.0 | 4760 | 0.1623 | 0.5 |
| 0.1698 | 15.0 | 5100 | 0.1623 | 0.5 |
| 0.1698 | 16.0 | 5440 | 0.1614 | 0.5 |
| 0.168 | 17.0 | 5780 | 0.1579 | 0.5 |
| 0.1626 | 18.0 | 6120 | 0.1586 | 0.5 |
| 0.1626 | 19.0 | 6460 | 0.1565 | 0.5 |
| 0.1604 | 20.0 | 6800 | 0.1574 | 0.5 |
| 0.1595 | 21.0 | 7140 | 0.1601 | 0.5 |
| 0.1595 | 22.0 | 7480 | 0.1675 | 0.5 |
| 0.1615 | 23.0 | 7820 | 0.1602 | 0.5 |
| 0.1669 | 24.0 | 8160 | 0.1604 | 0.5 |
| 0.1677 | 25.0 | 8500 | 0.1635 | 0.5 |
| 0.1677 | 26.0 | 8840 | 0.1603 | 0.5 |
| 0.1666 | 27.0 | 9180 | 0.1614 | 0.5 |
| 0.1656 | 28.0 | 9520 | 0.1609 | 0.5 |
| 0.1656 | 29.0 | 9860 | 0.1625 | 0.5 |
| 0.1668 | 30.0 | 10200 | 0.1624 | 0.5 |
| 0.1658 | 31.0 | 10540 | 0.1702 | 0.5 |
| 0.1658 | 32.0 | 10880 | 0.1606 | 0.5 |
| 0.166 | 33.0 | 11220 | 0.1657 | 0.5 |
| 0.1674 | 34.0 | 11560 | 0.1619 | 0.5 |
| 0.1674 | 35.0 | 11900 | 0.1585 | 0.5 |
| 0.1636 | 36.0 | 12240 | 0.1592 | 0.5 |
| 0.1612 | 37.0 | 12580 | 0.1568 | 0.5 |
| 0.1612 | 38.0 | 12920 | 0.1607 | 0.5 |
| 0.159 | 39.0 | 13260 | 0.1577 | 0.5 |
| 0.1586 | 40.0 | 13600 | 0.1566 | 0.5 |
| 0.1586 | 41.0 | 13940 | 0.1584 | 0.5 |
| 0.1587 | 42.0 | 14280 | 0.1620 | 0.5 |
| 0.1577 | 43.0 | 14620 | 0.1571 | 0.5 |
| 0.1577 | 44.0 | 14960 | 0.1610 | 0.5 |
| 0.1587 | 45.0 | 15300 | 0.1576 | 0.5 |
| 0.1578 | 46.0 | 15640 | 0.1577 | 0.5 |
| 0.1578 | 47.0 | 15980 | 0.1570 | 0.5 |
| 0.1592 | 48.0 | 16320 | 0.1578 | 0.5 |
| 0.1578 | 49.0 | 16660 | 0.1565 | 0.5 |
| 0.1582 | 50.0 | 17000 | 0.1581 | 0.5 |
| 0.1582 | 51.0 | 17340 | 0.1571 | 0.5 |
| 0.1569 | 52.0 | 17680 | 0.1585 | 0.5 |
| 0.1586 | 53.0 | 18020 | 0.1566 | 0.5 |
| 0.1586 | 54.0 | 18360 | 0.1579 | 0.5 |
| 0.1576 | 55.0 | 18700 | 0.1578 | 0.5 |
| 0.1577 | 56.0 | 19040 | 0.1581 | 0.5 |
| 0.1577 | 57.0 | 19380 | 0.1566 | 0.5 |
| 0.1571 | 58.0 | 19720 | 0.1572 | 0.5 |
| 0.1578 | 59.0 | 20060 | 0.1562 | 0.5 |
| 0.1578 | 60.0 | 20400 | 0.1579 | 0.5 |
| 0.157 | 61.0 | 20740 | 0.1578 | 0.5 |
| 0.157 | 62.0 | 21080 | 0.1566 | 0.5 |
| 0.157 | 63.0 | 21420 | 0.1572 | 0.5 |
| 0.1562 | 64.0 | 21760 | 0.1594 | 0.5 |
| 0.1584 | 65.0 | 22100 | 0.1582 | 0.5 |
| 0.1584 | 66.0 | 22440 | 0.1566 | 0.5 |
| 0.1549 | 67.0 | 22780 | 0.1579 | 0.5 |
| 0.1582 | 68.0 | 23120 | 0.1587 | 0.5 |
| 0.1582 | 69.0 | 23460 | 0.1580 | 0.5 |
| 0.157 | 70.0 | 23800 | 0.1580 | 0.5 |
| 0.1563 | 71.0 | 24140 | 0.1585 | 0.5 |
| 0.1563 | 72.0 | 24480 | 0.1576 | 0.5 |
| 0.1562 | 73.0 | 24820 | 0.1570 | 0.5 |
| 0.1566 | 74.0 | 25160 | 0.1576 | 0.5 |
| 0.156 | 75.0 | 25500 | 0.1570 | 0.5 |
| 0.156 | 76.0 | 25840 | 0.1575 | 0.5 |
| 0.1566 | 77.0 | 26180 | 0.1584 | 0.5 |
| 0.1561 | 78.0 | 26520 | 0.1572 | 0.5 |
| 0.1561 | 79.0 | 26860 | 0.1580 | 0.5 |
| 0.1561 | 80.0 | 27200 | 0.1576 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Yoshimitsujhi/finetuned
|
Yoshimitsujhi
| 2023-09-01T07:40:06Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-08-31T12:31:41Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
model-index:
- name: finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sosuneko/ppo-SnowballTarget
|
sosuneko
| 2023-09-01T07:38:53Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-01T07:38:46Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sosuneko/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Hamzaabbas77/FINAL-GPT2
|
Hamzaabbas77
| 2023-09-01T07:35:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T07:35:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
yrajm1997/medical-qa-fine-tuned-gpt2
|
yrajm1997
| 2023-09-01T07:31:20Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T07:03:51Z |
---
license: mit
language:
- en
library_name: transformers
---
|
sosuneko/Reinforce-Pixelcopter-PLE-v0
|
sosuneko
| 2023-09-01T07:27:02Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-01T07:26:57Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.70 +/- 23.29
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dg845/diffusers-cd_imagenet64_lpips
|
dg845
| 2023-09-01T07:23:28Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"generative model",
"unconditional image generation",
"arxiv:2303.01469",
"arxiv:2206.00364",
"arxiv:1506.03365",
"arxiv:1512.00567",
"license:mit",
"diffusers:ConsistencyModelPipeline",
"region:us"
] | null | 2023-06-21T10:57:25Z |
---
license: mit
tags:
- generative model
- unconditional image generation
---
Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
From the paper abstract:
> Diffusion models have significantly advanced the fields of image, audio, and video generation, but
they depend on an iterative sampling process that causes slow generation. To overcome this limitation,
we propose consistency models, a new family of models that generate high quality samples by directly
mapping noise to data. They support fast one-step generation by design, while still allowing multistep
sampling to trade compute for sample quality. They also support zero-shot data editing, such as image
inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone
generative models altogether. Through extensive experiments, we demonstrate that they outperform
existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new
state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When
trained in isolation, consistency models become a new family of generative models that can outperform
existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet
64 x 64 and LSUN 256 x 256.
Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model.
Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.
More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep.
The authors call this procedure "consistency distillation (CD)".
Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
This model is a `diffusers`-compatible version of the [cd_imagenet64_lpips.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the ImageNet 64x64 dataset, using [LPIPS](https://richzhang.github.io/PerceptualSimilarity/) as the measure of closeness.
See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
## Download
The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).
The `diffusers` pipeline for the `cd-imagenet64-lpips` model can be downloaded as follows:
```python
from diffusers import ConsistencyModelPipeline
pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-cd_imagenet64_lpips")
```
## Usage
The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
Here is an example of using the `cd_imagenet64_lpips` checkpoint with `diffusers`:
```python
import torch
from diffusers import ConsistencyModelPipeline
device = "cuda"
# Load the cd_imagenet64_lpips checkpoint.
model_id_or_path = "dg845/diffusers-cd_imagenet64_lpips"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("cd_imagenet64_lpips_onestep_sample.png")
# Onestep sampling, class-conditional image generation
# ImageNet-64 class label 145 corresponds to king penguins
image = pipe(num_inference_steps=1, class_labels=145).images[0]
image.save("cd_imagenet64_lpips_onestep_sample_penguin.png")
# Multistep sampling, class-conditional image generation
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L74
image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0]
image.save("cd_imagenet64_lpips_multistep_sample_penguin.png")
```
## Model Details
- **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model
- **Dataset:** ImageNet 64x64
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)
## Datasets
_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.
The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:
**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).
**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.
## Performance
_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.
These models are intended to generate samples consistent with their training distributions.
This has been measured in terms of FID, Inception Score, Precision, and Recall.
These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
## Intended Use
_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.
These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.
## Limitations
_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.
These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
This may stem from ImageNet's emphasis on non-human objects.
In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.
Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
|
Yntec/Reddit
|
Yntec
| 2023-09-01T07:17:35Z | 686 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"nutbutter",
"acheong08",
"license:creativeml-openrail-m",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-26T11:20:49Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- nutbutter
- acheong08
inference: false
---
Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW content.
# Reddit
A mix of RedditAlpha and REV 1.0, with the Color101VAE baked in.
Sample and prompt:

cute pretty girl, sitting, detailed chibi eyes, holding super soaker, beautiful detailed legs, cowgirl, gorgeous detailed hair, cowboy hat, magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes by kyoani and clay mann
Original page:
https://civitai.com/models/5216?modelVersionId=6048
# RedditOmega
A model made by mistake, using Weighted Sum 0.3 instead of 0.7, but it's still a nice model.

# RedditAlpha
A mix of F222 with subreddit-v3 (many attempts were made to incorporate subreddit-v4 through v6, but all of them failed). This is an unsafe model and should only be used for research purposes.
# Recipes
Weighted Sum 0.5 F222 + subreddit-v3 = RedditBeta
Add Difference 1.0 sd-1.5 + (RedditBeta - sd-1.4) = RedditAlpha
Weighted Sum 0.3 REV + RedditAlpha = RedditOmega
Weighted Sum 0.7 REV + RedditAlpha = RedditZeta
Bake VAE Color 101 = Reddit
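A minimal diffusers sketch, assuming the checkpoint loads with the standard StableDiffusionPipeline (as the repository tags suggest); the shortened prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Reddit", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "cute pretty girl, sitting, detailed chibi eyes, cowgirl, cowboy hat, magazine ad, 1943"
image = pipe(prompt, negative_prompt="nude, naked").images[0]
image.save("reddit_sample.png")
```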
|
satchel/videomae-base-finetuned-ucf101-subset
|
satchel
| 2023-09-01T07:11:32Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-09-01T06:47:10Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4748
- Accuracy: 0.7857
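A minimal usage sketch, assuming the video-classification pipeline (decoding the clip additionally requires the `decord` package; the file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "video-classification",
    model="satchel/videomae-base-finetuned-ucf101-subset",
)
print(classifier("path/to/clip.mp4"))  # placeholder video path
```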
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1473 | 0.26 | 38 | 1.8958 | 0.4143 |
| 0.9542 | 1.26 | 76 | 0.9257 | 0.6714 |
| 0.4408 | 2.26 | 114 | 0.5608 | 0.8 |
| 0.2181 | 3.23 | 148 | 0.4748 | 0.7857 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yrajm1997/gpt_model
|
yrajm1997
| 2023-09-01T07:10:11Z | 156 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-01T07:08:30Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
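As a minimal generation sketch (the prompt and decoding settings below are illustrative only):
```python
from transformers import pipeline

# Sketch only: prompt and max_new_tokens are arbitrary choices.
generator = pipeline("text-generation", model="yrajm1997/gpt_model")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```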
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
dreamboat26/Behind_the_pipeline_Tensorflow
|
dreamboat26
| 2023-09-01T07:08:44Z | 0 | 0 | null |
[
"license:afl-3.0",
"region:us"
] | null | 2023-09-01T07:02:31Z |
---
license: afl-3.0
---
The pipeline groups together three steps: preprocessing, passing the inputs through the model, and postprocessing.
# Preprocessing with tokenizer
Like other neural networks, Transformer models can’t process raw text directly, so the first step of our pipeline is to convert the text inputs into numbers that the model can make sense of. To do this we use a tokenizer, which will be responsible for:
- Splitting the input into words, subwords, or symbols (like punctuation) that are called tokens
- Mapping each token to an integer
- Adding additional inputs that may be useful to the model
# Going through the model
We can download our pretrained model the same way we did with our tokenizer. 🤗 Transformers provides a TFAutoModel class, which also has a from_pretrained method.
This architecture contains only the base Transformer module: given some inputs, it outputs what we’ll call hidden states, also known as features. For each model input, we’ll retrieve a high-dimensional vector representing the contextual understanding of that input by the Transformer model.
# Model heads: Making sense out of numbers
The model heads take the high-dimensional vector of hidden states as input and project it onto a different dimension. They are usually composed of one or a few linear layers.
The output of the Transformer model is sent directly to the model head to be processed.
For our example, we will need a model with a sequence classification head (to be able to classify the sentences as positive or negative), which is TFAutoModelForSequenceClassification.
# Postprocessing the output
The outputs are not probabilities but logits, the raw, unnormalized scores output by the last layer of the model. To be converted to probabilities, they need to go through a softmax layer.
We have successfully reproduced the three steps of the pipeline: preprocessing with tokenizers, passing the inputs through the model, and postprocessing!
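Putting the three steps together, a minimal sketch could look like the following; the checkpoint name is an assumption, and any sequence-classification checkpoint works:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumed checkpoint -- swap in any sequence-classification model.
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

raw_inputs = ["I've been waiting for this course my whole life.", "I hate this so much!"]
# Step 1: preprocessing -- split into tokens, map to ids, pad/truncate, return TF tensors.
inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors="tf")
# Step 2: pass the inputs through the model with its sequence classification head.
outputs = model(inputs)
# Step 3: postprocessing -- turn logits into probabilities with a softmax.
predictions = tf.math.softmax(outputs.logits, axis=-1)
print(predictions)
print(model.config.id2label)
```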
|
nightdude/config_80090
|
nightdude
| 2023-09-01T06:54:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T06:54:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
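The base model is not named here, so the following is only a sketch of how the quantization config above maps onto `transformers`' `BitsAndBytesConfig` when loading a base model for this adapter; "base-model-id" is a placeholder:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization config listed above; "base-model-id" is a placeholder.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, "nightdude/config_80090")
```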
### Framework versions
- PEFT 0.5.0.dev0
|
fnlp/SpeechTokenizer
|
fnlp
| 2023-09-01T06:52:14Z | 0 | 10 | null |
[
"arxiv:2308.16692",
"region:us"
] | null | 2023-09-01T04:52:32Z |
# SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
<a href='https://github.com/ZhangXInFD/SpeechTokenizer'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2308.16692'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
## Introduction
This is the code for the SpeechTokenizer presented in [SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models](https://arxiv.org/abs/2308.16692). SpeechTokenizer is a unified speech tokenizer for speech large language models that adopts an encoder-decoder architecture with residual vector quantization (RVQ). By unifying semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of speech information hierarchically across the RVQ layers. Specifically, the code indices output by the first RVQ quantizer can be considered semantic tokens, and the outputs of the remaining quantizers can be regarded as acoustic tokens, which supplement the information lost by the first quantizer. We provide our models:
* A model operating at 16 kHz on monophonic speech, trained on LibriSpeech with the average representation across all HuBERT layers as the semantic teacher.
<br>
<p align="center">
<img src="images/overview.png" width="95%"> <br>
Overview
</p>
<p align="center">
<img src="images/speechtokenizer_framework.jpg" width="95%"> <br>
The SpeechTokenizer framework.
</p>
<br>
Welcome to try our [SLMTokBench](https://github.com/0nutation/SLMTokBench)
and we will also open source our [USLM](https://github.com/0nutation/USLM) !!
## Samples
Samples are provided on [our demo page](https://0nutation.github.io/SpeechTokenizer.github.io/).
## Installation
SpeechTokenizer requires Python>=3.8 and a reasonably recent version of PyTorch.
To install SpeechTokenizer, you can run from this repository:
```bash
pip install -U speechtokenizer
# or you can clone the repo and install locally
git clone https://github.com/ZhangXInFD/SpeechTokenizer.git
cd SpeechTokenizer
pip install .
```
## Usage
### Model storage
| Model | Description |
|:----|:----|
|[speechtokenizer_hubert_avg](https://huggingface.co/fnlp/SpeechTokenizer/tree/main/speechtokenizer_hubert_avg)|Adopts the average representation across all HuBERT layers as the semantic teacher|
### Load model
```python
from speechtokenizer import SpeechTokenizer
config_path = '/path/config.json'
ckpt_path = '/path/SpeechTokenizer.pt'
model = SpeechTokenizer.load_from_checkpoint(config_path, ckpt_path)
model.eval()
```
### Extracting discrete representations
```python
import torchaudio
import torch
# Load and pre-process speech waveform
wav, sr = torchaudio.load('<SPEECH_FILE_PATH>')
if sr != model.sample_rate:
wav = torchaudio.functional.resample(wav, sr, model.sample_rate)
wav = wav.unsqueeze(0)
# Extract discrete codes from SpeechTokenizer
with torch.no_grad():
codes = model.encode(wav) # codes: (n_q, B, T)
semantic_tokens = codes[0, :, :]
acoustic_tokens = codes[1:, :, :]
```
### Decoding discrete representations
```python
# Decoding from the first quantizer up to the i-th quantizer
wav = model.decode(codes[:(i + 1)]) # wav: (B, 1, T)
# Decoding from the i-th quantizer up to the j-th quantizer
wav = model.decode(codes[i: (j + 1)], st=i)
# Concatenating semantic tokens and acoustic tokens and then decoding
semantic_tokens = ... # (..., B, T)
acoustic_tokens = ... # (..., B, T)
wav = model.decode(torch.cat([semantic_tokens, acoustic_tokens], axis=0))
```
## Citation
If you use this code or result in your paper, please cite our work as:
```tex
@misc{zhang2023speechtokenizer,
title={SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models},
author={Xin Zhang and Dong Zhang and Shimin Li and Yaqian Zhou and Xipeng Qiu},
year={2023},
eprint={2308.16692},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The code in this repository is released under the Apache 2.0 license as found in the
[LICENSE](LICENSE) file.
|
ryanyip7777/pmc_vit_l_14
|
ryanyip7777
| 2023-09-01T06:45:52Z | 47 | 3 |
open_clip
|
[
"open_clip",
"safetensors",
"clip",
"biology",
"chemistry",
"medical",
"text-to-image",
"en",
"dataset:axiong/pmc_oa_beta",
"region:us"
] |
text-to-image
| 2023-07-23T23:56:43Z |
---
datasets:
- axiong/pmc_oa_beta
language:
- en
library_name: open_clip
pipeline_tag: text-to-image
tags:
- biology
- chemistry
- medical
---
### Model Description
The model is fine-tuned from OpenAI's ViT-L-14 on the PMC_OA_beta and ROCO datasets, using the [open_clip](https://github.com/mlfoundations/open_clip) tool.
### Training
```python
python -m training.main \
--save-frequency 2 \
--zeroshot-frequency 1 \
--report-to tensorboard \
--train-data="/home/data1/ryanyip/huggingface-models/pmc_oa_beta/train.csv" \
--val-data="/home/data1/ryanyip/huggingface-models/pmc_oa_beta/sample_valid.csv" \
--csv-separator "," \
--csv-img-key image \
--csv-caption-key caption \
--warmup 10000 \
--batch-size=128 \
--lr=1e-5 \
--wd=0.2 \
--epochs=30 \
--workers=8 \
--model "ViT-L-14" \
--name "pmc_vit_l_14" \
--pretrained "ViT-L-14_state_dict.pt" \
--save-most-recent
```
*ViT-L-14_state_dict.pt is the pretrained weight from openai/ViT-L-14*
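As a rough zero-shot inference sketch with open_clip, assuming the fine-tuned weights have been downloaded to a local file (the file names and text prompts below are placeholders):
```python
import torch
import open_clip
from PIL import Image

# Sketch only: "pmc_vit_l_14.pt" and "image.png" are placeholder paths.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-L-14", pretrained="pmc_vit_l_14.pt")
tokenizer = open_clip.get_tokenizer("ViT-L-14")

image = preprocess(Image.open("image.png")).unsqueeze(0)
text = tokenizer(["chest x-ray", "brain MRI", "histology slide"])
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```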
|
vesteinn/vit-mae-inat21
|
vesteinn
| 2023-09-01T06:35:20Z | 225 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-31T13:10:25Z |
Note that this model does not work directly with HF: a modification that does mean pooling before the layernorm and classification head is needed, as shown below.
```python
from transformers import (
ViTForImageClassification,
pipeline,
AutoImageProcessor,
ViTConfig,
ViTModel,
)
from transformers.modeling_outputs import (
ImageClassifierOutput,
BaseModelOutputWithPooling,
)
from PIL import Image
import torch
from torch import nn
from typing import Optional, Union, Tuple
class CustomViTModel(ViTModel):
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutputWithPooling]:
r"""
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
"""
output_attentions = (
output_attentions
if output_attentions is not None
else self.config.output_attentions
)
output_hidden_states = (
output_hidden_states
if output_hidden_states is not None
else self.config.output_hidden_states
)
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
if pixel_values is None:
raise ValueError("You have to specify pixel_values")
# Prepare head mask if needed
# 1.0 in head_mask indicate we keep the head
# attention_probs has shape bsz x n_heads x N x N
# input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
# and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
# TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?)
expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype
if pixel_values.dtype != expected_dtype:
pixel_values = pixel_values.to(expected_dtype)
embedding_output = self.embeddings(
pixel_values,
bool_masked_pos=bool_masked_pos,
interpolate_pos_encoding=interpolate_pos_encoding,
)
encoder_outputs = self.encoder(
embedding_output,
head_mask=head_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
sequence_output = encoder_outputs[0]
sequence_output = sequence_output[:, 1:, :].mean(dim=1)
sequence_output = self.layernorm(sequence_output)
pooled_output = (
self.pooler(sequence_output) if self.pooler is not None else None
)
if not return_dict:
head_outputs = (
(sequence_output, pooled_output)
if pooled_output is not None
else (sequence_output,)
)
return head_outputs + encoder_outputs[1:]
return BaseModelOutputWithPooling(
last_hidden_state=sequence_output,
pooler_output=pooled_output,
hidden_states=encoder_outputs.hidden_states,
attentions=encoder_outputs.attentions,
)
class CustomViTForImageClassification(ViTForImageClassification):
def __init__(self, config: ViTConfig) -> None:
super().__init__(config)
self.num_labels = config.num_labels
self.vit = CustomViTModel(config, add_pooling_layer=False)
# Classifier head
self.classifier = (
nn.Linear(config.hidden_size, config.num_labels)
if config.num_labels > 0
else nn.Identity()
)
# Initialize weights and apply final processing
self.post_init()
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[tuple, ImageClassifierOutput]:
r"""
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
"""
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
outputs = self.vit(
pixel_values,
head_mask=head_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
interpolate_pos_encoding=interpolate_pos_encoding,
return_dict=return_dict,
)
sequence_output = outputs[0]
logits = self.classifier(sequence_output)
loss = None
return ImageClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
if __name__ == "__main__":
model = CustomViTForImageClassification.from_pretrained("vesteinn/vit-mae-inat21")
image_processor = AutoImageProcessor.from_pretrained("vesteinn/vit-mae-inat21")
classifier = pipeline(
"image-classification", model=model, image_processor=image_processor
)
```
|
uukuguy/speechless-llama2-luban-orca-platypus-13b
|
uukuguy
| 2023-09-01T06:28:52Z | 1,410 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-01T02:43:40Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<p><h1> speechless-llama2-orca-platypus-13b </h1></p>
speechless-llama2-orca-platypus-13b is a merge of AIDC-ai-business/Luban-13B and Open-Orca/OpenOrca-Platypus2-13B.
| Metric | Value |
| --- | --- |
| ARC | 62.54 |
| HellaSwag | 82.76 |
| MMLU | 59.23 |
| TruthfulQA | 54.66 |
| Average | 64.80 |
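A minimal `transformers` loading sketch (the dtype, device settings, and prompt are illustrative only):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-llama2-luban-orca-platypus-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The three laws of thermodynamics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```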
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
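For reference, a hedged sketch of that single-turn prompt layout (the linked `chat_completion` reference code is authoritative):
```python
# Sketch of the single-turn chat prompt layout described above; the tokenizer
# adds the BOS token, and the linked chat_completion code is authoritative.
system_prompt = "You are a helpful assistant."
user_msg = "Write a haiku about the sea."
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_msg} [/INST]"
```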
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
nikcheerla/amd-power-dialer-v1
|
nikcheerla
| 2023-09-01T06:28:43Z | 2,976 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-25T19:04:32Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nikcheerla/amd-power-dialer-v1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nikcheerla/amd-power-dialer-v1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
emotibot-inc/Moli-Pro
|
emotibot-inc
| 2023-09-01T06:25:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-01T04:05:07Z |
# README
# Moli-Pro
[Hugging Face](https://huggingface.co/emotibot-inc/Moli-Pro) | [GitHub](https://github.com/emotibot-inc/Moli-Pro) | [Model Scope](https://modelscope.cn/models/emotibotinc/Moli-Pro/summary) | [Emotibrain](https://brain.emotibot.com/?source=molipro_huggingface)
# **Model Introduction**
Moli-Pro is a foundation model trained by Emotibot (竹间智能) on a base corpus of more than 200 million tokens. It has the following features:
1. Context length: The Moli model has strong contextual understanding, with a context length of up to 4096 tokens. This means it can process and understand longer passages of text, giving more accurate results when generating or translating long articles.
2. Training data: The Moli model was trained on more than one million human annotations, which lets it better understand and generate human language and improves its performance across a variety of tasks.
3. Model optimization: Compared with the llama model, the Moli model uses an optimized autoregressive Transformer, which makes it more efficient on complex tasks.
4. Data cleaning and mixture updates: To further improve performance, the Moli model applies more thorough data cleaning and an updated data mixture. Both improvements help the model better understand and process its input, producing more accurate, higher-quality output.
# Model **benchmark**
## **Chinese Evaluation** - **CMMLU**
### Result
| Model 5-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 65.23 | 72.11 | 72.06 | 74.79 | 66.12 | 70.95 |
| [ChatGPT](https://openai.com/chatgpt) | 47.81 | 55.68 | 56.50 | 62.66 | 50.69 | 55.51 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 33.33 | 43.46 | 44.28 | 44.75 | 39.46 | 41.45 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 34.47 | 40.24 | 41.55 | 42.88 | 37.00 | 39.80 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 30.56 | 39.10 | 38.59 | 40.32 | 37.15 | 37.04 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 27.52 | 32.47 | 32.27 | 35.77 | 31.56 | 31.88 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 40.30 | 48.08 | 46.72 | 47.41 | 45.51 | 45.60 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.39 | 61.57 | 60.48 | 58.57 | 55.68 | 55.74 |
| [Moli-7B](https://github.com/emotibot-inc/Moli-7B) | 28.44 | 29.45 | 31.28 | 32.54 | 28.65 | 30.07 |
| [Moli-Pro](https://github.com/emotibot-inc/Moli-Pro) | 30.2 | 37.5 | 36.22 | 39.71 | 33.55 | 35.44 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.38 | 61.61 | 60.44 | 59.26 | 56.62 | 55.82 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 42.55 | 50.98 | 50.99 | 50.80 | 48.37 | 48.80 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 35.25 | 48.07 | 47.88 | 46.61 | 44.14 | 44.43 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.35 | 39.22 | 39.65 | 38.62 | 37.70 | 37.48 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 34.96 | 35.45 | 36.31 | 42.14 | 37.89 | 37.16 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 27.12 | 33.18 | 34.87 | 35.10 | 32.97 | 32.63 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 27.23 | 30.41 | 28.84 | 32.56 | 28.68 | 29.57 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.49 | 27.05 | 27.42 | 29.21 | 28.05 | 27.26 |
| Random | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 |
| Model 0-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 63.16 | 69.19 | 70.26 | 73.16 | 63.47 | 68.9 |
| [ChatGPT](https://openai.com/chatgpt) | 44.8 | 53.61 | 54.22 | 59.95 | 49.74 | 53.22 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 33.03 | 45.74 | 45.74 | 46.25 | 41.58 | 42.8 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 31.11 | 41.3 | 40.87 | 40.61 | 36.05 | 38.5 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 31.09 | 34.45 | 36.05 | 37.94 | 32.89 | 34.88 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 26.46 | 29.36 | 31.81 | 31.55 | 29.17 | 30.06 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 42.51 | 48.91 | 48.85 | 50.25 | 47.57 | 47.62 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.37 | 60.97 | 59.71 | 56.35 | 54.81 | 54.84 |
| [Moli-7B](https://github.com/emotibot-inc/Moli-7B) | 28.48 | 32.53 | 33.45 | 35.8 | 31.09 | 32.27 |
| [Moli-Pro](https://github.com/emotibot-inc/Moli-Pro) | 30.46 | 36.05 | 37.07 | 38.72 | 32.62 | 34.98 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.04 | 60.49 | 59.55 | 56.6 | 55.72 | 54.63 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 41.28 | 52.85 | 53.37 | 52.24 | 50.58 | 49.95 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 32.79 | 44.43 | 46.78 | 44.79 | 43.11 | 42.33 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.22 | 42.91 | 44.81 | 42.6 | 41.93 | 40.79 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 33.72 | 36.53 | 38.07 | 46.94 | 38.32 | 38.51 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 26.76 | 26.57 | 27.42 | 28.33 | 26.73 | 27.34 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 25.68 | 26.35 | 27.21 | 27.92 | 26.7 | 26.88 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.57 | 25.01 | 26.33 | 25.94 | 25.81 | 25.8 |
| Random | 25 | 25 | 25 | 25 | 25 | 25 |
# **Inference and Chat**
You can register for and log in to [Emotibrain](https://brain.emotibot.com/?source=molipro_huggingface), the large-model product released by Emotibot, and select **CoPilot** (**KKBot**) for online testing; it is available immediately after registration.

# **Model Training**
You can register for and log in to [Emotibrain](https://brain.emotibot.com/?source=molipro_huggingface) and select Fine-tune for **zero-code fine-tuning**; it is available immediately after registration.
For the detailed training workflow, see this document: [Emotibrain Quick Start](https://brain.emotibot.com/supports/model-factory/dash-into.html) (about 5 minutes).


# **More Information**
To learn more about the large-model training platform, please visit the [Emotibrain website](https://brain.emotibot.com/?source=molipro_huggingface).
|
dkqjrm/20230901103238
|
dkqjrm
| 2023-09-01T06:14:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T01:32:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230901103238'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230901103238
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1604
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.1601 | 0.5 |
| 0.1929 | 2.0 | 680 | 0.1664 | 0.5 |
| 0.1697 | 3.0 | 1020 | 0.1642 | 0.5 |
| 0.1697 | 4.0 | 1360 | 0.1567 | 0.5 |
| 0.1668 | 5.0 | 1700 | 0.1592 | 0.5 |
| 0.1636 | 6.0 | 2040 | 0.1606 | 0.5 |
| 0.1636 | 7.0 | 2380 | 0.1562 | 0.5 |
| 0.1636 | 8.0 | 2720 | 0.1562 | 0.5 |
| 0.1598 | 9.0 | 3060 | 0.1607 | 0.5 |
| 0.1598 | 10.0 | 3400 | 0.1642 | 0.5 |
| 0.1643 | 11.0 | 3740 | 0.1606 | 0.5 |
| 0.1677 | 12.0 | 4080 | 0.1649 | 0.5 |
| 0.1677 | 13.0 | 4420 | 0.1603 | 0.5 |
| 0.1651 | 14.0 | 4760 | 0.1602 | 0.5 |
| 0.1672 | 15.0 | 5100 | 0.1600 | 0.5 |
| 0.1672 | 16.0 | 5440 | 0.1602 | 0.5 |
| 0.1669 | 17.0 | 5780 | 0.1603 | 0.5 |
| 0.1642 | 18.0 | 6120 | 0.1600 | 0.5 |
| 0.1642 | 19.0 | 6460 | 0.1601 | 0.5 |
| 0.1666 | 20.0 | 6800 | 0.1615 | 0.5 |
| 0.1655 | 21.0 | 7140 | 0.1600 | 0.5 |
| 0.1655 | 22.0 | 7480 | 0.1601 | 0.5 |
| 0.1664 | 23.0 | 7820 | 0.1602 | 0.5 |
| 0.1655 | 24.0 | 8160 | 0.1608 | 0.5 |
| 0.1667 | 25.0 | 8500 | 0.1624 | 0.5 |
| 0.1667 | 26.0 | 8840 | 0.1606 | 0.5 |
| 0.1656 | 27.0 | 9180 | 0.1642 | 0.5 |
| 0.1647 | 28.0 | 9520 | 0.1600 | 0.5 |
| 0.1647 | 29.0 | 9860 | 0.1645 | 0.5 |
| 0.1665 | 30.0 | 10200 | 0.1618 | 0.5 |
| 0.1655 | 31.0 | 10540 | 0.1601 | 0.5 |
| 0.1655 | 32.0 | 10880 | 0.1606 | 0.5 |
| 0.1653 | 33.0 | 11220 | 0.1631 | 0.5 |
| 0.1655 | 34.0 | 11560 | 0.1623 | 0.5 |
| 0.1655 | 35.0 | 11900 | 0.1632 | 0.5 |
| 0.1655 | 36.0 | 12240 | 0.1609 | 0.5 |
| 0.1652 | 37.0 | 12580 | 0.1600 | 0.5 |
| 0.1652 | 38.0 | 12920 | 0.1601 | 0.5 |
| 0.1643 | 39.0 | 13260 | 0.1615 | 0.5 |
| 0.1652 | 40.0 | 13600 | 0.1634 | 0.5 |
| 0.1652 | 41.0 | 13940 | 0.1603 | 0.5 |
| 0.1655 | 42.0 | 14280 | 0.1600 | 0.5 |
| 0.1644 | 43.0 | 14620 | 0.1605 | 0.5 |
| 0.1644 | 44.0 | 14960 | 0.1612 | 0.5 |
| 0.166 | 45.0 | 15300 | 0.1609 | 0.5 |
| 0.1646 | 46.0 | 15640 | 0.1612 | 0.5 |
| 0.1646 | 47.0 | 15980 | 0.1631 | 0.5 |
| 0.1659 | 48.0 | 16320 | 0.1603 | 0.5 |
| 0.1648 | 49.0 | 16660 | 0.1606 | 0.5 |
| 0.1651 | 50.0 | 17000 | 0.1604 | 0.5 |
| 0.1651 | 51.0 | 17340 | 0.1605 | 0.5 |
| 0.1643 | 52.0 | 17680 | 0.1602 | 0.5 |
| 0.1658 | 53.0 | 18020 | 0.1643 | 0.5 |
| 0.1658 | 54.0 | 18360 | 0.1609 | 0.5 |
| 0.1648 | 55.0 | 18700 | 0.1607 | 0.5 |
| 0.1649 | 56.0 | 19040 | 0.1601 | 0.5 |
| 0.1649 | 57.0 | 19380 | 0.1618 | 0.5 |
| 0.1642 | 58.0 | 19720 | 0.1601 | 0.5 |
| 0.1654 | 59.0 | 20060 | 0.1667 | 0.5 |
| 0.1654 | 60.0 | 20400 | 0.1609 | 0.5 |
| 0.1644 | 61.0 | 20740 | 0.1603 | 0.5 |
| 0.1643 | 62.0 | 21080 | 0.1621 | 0.5 |
| 0.1643 | 63.0 | 21420 | 0.1600 | 0.5 |
| 0.1638 | 64.0 | 21760 | 0.1600 | 0.5 |
| 0.1661 | 65.0 | 22100 | 0.1601 | 0.5 |
| 0.1661 | 66.0 | 22440 | 0.1616 | 0.5 |
| 0.1626 | 67.0 | 22780 | 0.1600 | 0.5 |
| 0.166 | 68.0 | 23120 | 0.1601 | 0.5 |
| 0.166 | 69.0 | 23460 | 0.1600 | 0.5 |
| 0.1645 | 70.0 | 23800 | 0.1600 | 0.5 |
| 0.1644 | 71.0 | 24140 | 0.1601 | 0.5 |
| 0.1644 | 72.0 | 24480 | 0.1604 | 0.5 |
| 0.1638 | 73.0 | 24820 | 0.1612 | 0.5 |
| 0.1646 | 74.0 | 25160 | 0.1604 | 0.5 |
| 0.164 | 75.0 | 25500 | 0.1607 | 0.5 |
| 0.164 | 76.0 | 25840 | 0.1602 | 0.5 |
| 0.1644 | 77.0 | 26180 | 0.1603 | 0.5 |
| 0.1644 | 78.0 | 26520 | 0.1608 | 0.5 |
| 0.1644 | 79.0 | 26860 | 0.1603 | 0.5 |
| 0.1643 | 80.0 | 27200 | 0.1604 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230901101200
|
dkqjrm
| 2023-09-01T05:54:51Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T01:12:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230901101200'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230901101200
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1593
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.1696 | 0.5 |
| 0.1874 | 2.0 | 680 | 0.1654 | 0.5 |
| 0.1712 | 3.0 | 1020 | 0.1626 | 0.5 |
| 0.1712 | 4.0 | 1360 | 0.1604 | 0.5 |
| 0.1706 | 5.0 | 1700 | 0.1658 | 0.5 |
| 0.1677 | 6.0 | 2040 | 0.1600 | 0.5 |
| 0.1677 | 7.0 | 2380 | 0.1608 | 0.5 |
| 0.1695 | 8.0 | 2720 | 0.1604 | 0.5 |
| 0.1669 | 9.0 | 3060 | 0.1605 | 0.5 |
| 0.1669 | 10.0 | 3400 | 0.1694 | 0.5 |
| 0.168 | 11.0 | 3740 | 0.1618 | 0.5 |
| 0.168 | 12.0 | 4080 | 0.1641 | 0.5 |
| 0.168 | 13.0 | 4420 | 0.1601 | 0.5 |
| 0.1667 | 14.0 | 4760 | 0.1601 | 0.5 |
| 0.1679 | 15.0 | 5100 | 0.1640 | 0.5 |
| 0.1679 | 16.0 | 5440 | 0.1638 | 0.5 |
| 0.1681 | 17.0 | 5780 | 0.1636 | 0.5 |
| 0.1655 | 18.0 | 6120 | 0.1645 | 0.5 |
| 0.1655 | 19.0 | 6460 | 0.1627 | 0.5 |
| 0.1672 | 20.0 | 6800 | 0.1601 | 0.5 |
| 0.1672 | 21.0 | 7140 | 0.1618 | 0.5 |
| 0.1672 | 22.0 | 7480 | 0.1668 | 0.5 |
| 0.1675 | 23.0 | 7820 | 0.1599 | 0.5 |
| 0.1663 | 24.0 | 8160 | 0.1608 | 0.5 |
| 0.168 | 25.0 | 8500 | 0.1617 | 0.5 |
| 0.168 | 26.0 | 8840 | 0.1601 | 0.5 |
| 0.1667 | 27.0 | 9180 | 0.1604 | 0.5 |
| 0.1655 | 28.0 | 9520 | 0.1643 | 0.5 |
| 0.1655 | 29.0 | 9860 | 0.1605 | 0.5 |
| 0.1675 | 30.0 | 10200 | 0.1603 | 0.5 |
| 0.1664 | 31.0 | 10540 | 0.1602 | 0.5 |
| 0.1664 | 32.0 | 10880 | 0.1631 | 0.5 |
| 0.1666 | 33.0 | 11220 | 0.1611 | 0.5 |
| 0.167 | 34.0 | 11560 | 0.1616 | 0.5 |
| 0.167 | 35.0 | 11900 | 0.1613 | 0.5 |
| 0.1667 | 36.0 | 12240 | 0.1600 | 0.5 |
| 0.1662 | 37.0 | 12580 | 0.1600 | 0.5 |
| 0.1662 | 38.0 | 12920 | 0.1702 | 0.5 |
| 0.1652 | 39.0 | 13260 | 0.1599 | 0.5 |
| 0.1659 | 40.0 | 13600 | 0.1600 | 0.5 |
| 0.1659 | 41.0 | 13940 | 0.1605 | 0.5 |
| 0.1661 | 42.0 | 14280 | 0.1601 | 0.5 |
| 0.165 | 43.0 | 14620 | 0.1622 | 0.5 |
| 0.165 | 44.0 | 14960 | 0.1607 | 0.5 |
| 0.1664 | 45.0 | 15300 | 0.1621 | 0.5 |
| 0.1654 | 46.0 | 15640 | 0.1600 | 0.5 |
| 0.1654 | 47.0 | 15980 | 0.1606 | 0.5 |
| 0.1666 | 48.0 | 16320 | 0.1612 | 0.5 |
| 0.1652 | 49.0 | 16660 | 0.1600 | 0.5 |
| 0.1658 | 50.0 | 17000 | 0.1605 | 0.5 |
| 0.1658 | 51.0 | 17340 | 0.1604 | 0.5 |
| 0.1647 | 52.0 | 17680 | 0.1606 | 0.5 |
| 0.1657 | 53.0 | 18020 | 0.1641 | 0.5 |
| 0.1657 | 54.0 | 18360 | 0.1613 | 0.5 |
| 0.1644 | 55.0 | 18700 | 0.1605 | 0.5 |
| 0.1643 | 56.0 | 19040 | 0.1592 | 0.5 |
| 0.1643 | 57.0 | 19380 | 0.1600 | 0.5 |
| 0.1632 | 58.0 | 19720 | 0.1633 | 0.5 |
| 0.1643 | 59.0 | 20060 | 0.1612 | 0.5 |
| 0.1643 | 60.0 | 20400 | 0.1604 | 0.5 |
| 0.163 | 61.0 | 20740 | 0.1616 | 0.5 |
| 0.1623 | 62.0 | 21080 | 0.1598 | 0.5 |
| 0.1623 | 63.0 | 21420 | 0.1597 | 0.5 |
| 0.1616 | 64.0 | 21760 | 0.1655 | 0.5 |
| 0.1636 | 65.0 | 22100 | 0.1595 | 0.5 |
| 0.1636 | 66.0 | 22440 | 0.1599 | 0.5 |
| 0.1599 | 67.0 | 22780 | 0.1598 | 0.5 |
| 0.163 | 68.0 | 23120 | 0.1602 | 0.5 |
| 0.163 | 69.0 | 23460 | 0.1587 | 0.5 |
| 0.1613 | 70.0 | 23800 | 0.1604 | 0.5 |
| 0.1608 | 71.0 | 24140 | 0.1599 | 0.5 |
| 0.1608 | 72.0 | 24480 | 0.1587 | 0.5 |
| 0.1604 | 73.0 | 24820 | 0.1610 | 0.5 |
| 0.1606 | 74.0 | 25160 | 0.1592 | 0.5 |
| 0.1599 | 75.0 | 25500 | 0.1587 | 0.5 |
| 0.1599 | 76.0 | 25840 | 0.1593 | 0.5 |
| 0.1604 | 77.0 | 26180 | 0.1589 | 0.5 |
| 0.16 | 78.0 | 26520 | 0.1602 | 0.5 |
| 0.16 | 79.0 | 26860 | 0.1596 | 0.5 |
| 0.1599 | 80.0 | 27200 | 0.1593 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jalaluddin94/baseline_nli_bert-large
|
jalaluddin94
| 2023-09-01T05:46:15Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T05:45:15Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: baseline_nli_bert-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_nli_bert-large
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9293
- Accuracy: 0.6163
- Precision: 0.6163
- Recall: 0.6163
- F1 Score: 0.6185
## Model description
More information needed
## Intended uses & limitations
More information needed
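As a minimal inference sketch (the premise/hypothesis pair is illustrative, and the label mapping depends on training data that is not documented here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jalaluddin94/baseline_nli_bert-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative premise/hypothesis pair; label names come from the model config.
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs, model.config.id2label)
```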
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| 1.0447 | 1.0 | 2583 | 0.9867 | 0.4602 | 0.4602 | 0.4602 | 0.4166 |
| 0.9632 | 2.0 | 5166 | 0.9132 | 0.5926 | 0.5926 | 0.5926 | 0.5965 |
| 0.9063 | 3.0 | 7749 | 0.8976 | 0.6076 | 0.6076 | 0.6076 | 0.6116 |
| 0.846 | 4.0 | 10332 | 0.8826 | 0.6218 | 0.6218 | 0.6218 | 0.6212 |
| 0.7975 | 5.0 | 12915 | 0.9189 | 0.6136 | 0.6136 | 0.6136 | 0.6169 |
| 0.7605 | 6.0 | 15498 | 0.9293 | 0.6163 | 0.6163 | 0.6163 | 0.6185 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Korkkork/seungyeonkara
|
Korkkork
| 2023-09-01T05:42:25Z | 0 | 0 | null |
[
"kara",
"Kpop",
"license:openrail",
"region:us"
] | null | 2023-09-01T05:36:15Z |
---
license: openrail
tags:
- kara
- Kpop
---
|
Korkkork/Hyejeong
|
Korkkork
| 2023-09-01T05:41:44Z | 0 | 0 | null |
[
"aoa",
"Kpop",
"license:openrail",
"region:us"
] | null | 2023-08-31T04:40:15Z |
---
license: openrail
tags:
- aoa
- Kpop
---
|
Korkkork/yunaaoa
|
Korkkork
| 2023-09-01T05:40:32Z | 0 | 0 | null |
[
"aoa",
"Kpop",
"license:openrail",
"region:us"
] | null | 2023-08-31T17:58:39Z |
---
license: openrail
tags:
- aoa
- Kpop
---
|
mitchyAI/haerinlora
|
mitchyAI
| 2023-09-01T05:08:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-01T05:06:42Z |
---
license: creativeml-openrail-m
---
|
tMako/sd-class-butterflies-32
|
tMako
| 2023-09-01T04:21:03Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-09-01T04:20:09Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('tMako/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
EllaHong/query_change_0831_v0.2
|
EllaHong
| 2023-09-01T04:14:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T04:14:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
emotibot-inc/Moli-7B
|
emotibot-inc
| 2023-09-01T03:59:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-01T02:55:54Z |
# README
# Moli-7B
[Hugging Face](https://huggingface.co/emotibot-inc/Moli-7B) | [GitHub](https://github.com/emotibot-inc/Moli-7B) | [Model Scope](https://modelscope.cn/models/emotibotinc/Moli-7B/summary) | [Emotibrain](https://brain.emotibot.com/?source=moli7b_huggingface)
# **Model Introduction**
Moli-7B is a foundation model trained by Emotibot (竹间智能) on a base corpus of more than 150 million tokens. It has the following features:
1. Context length: The Moli model has strong contextual understanding, with a context length of up to 4096 tokens. This means it can process and understand longer passages of text, giving more accurate results when generating or translating long articles.
2. Model optimization: Compared with the llama model, the Moli model uses an optimized autoregressive Transformer, which makes it more efficient on complex tasks.
3. Data cleaning and mixture updates: To further improve performance, the Moli model applies more thorough data cleaning and an updated data mixture. Both improvements help the model better understand and process its input, producing more accurate, higher-quality output.
4. Higher efficiency: Moli-7B has an efficient inference architecture, with inference speed improved by 60% over the previous generation.
# Model **benchmark**
## **Chinese Evaluation** - **CMMLU**
### Result
| Model 5-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 65.23 | 72.11 | 72.06 | 74.79 | 66.12 | 70.95 |
| [ChatGPT](https://openai.com/chatgpt) | 47.81 | 55.68 | 56.50 | 62.66 | 50.69 | 55.51 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 33.33 | 43.46 | 44.28 | 44.75 | 39.46 | 41.45 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 34.47 | 40.24 | 41.55 | 42.88 | 37.00 | 39.80 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 30.56 | 39.10 | 38.59 | 40.32 | 37.15 | 37.04 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 27.52 | 32.47 | 32.27 | 35.77 | 31.56 | 31.88 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 40.30 | 48.08 | 46.72 | 47.41 | 45.51 | 45.60 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.39 | 61.57 | 60.48 | 58.57 | 55.68 | 55.74 |
| [Moli-7B](https://github.com/emotibot-inc/Moli-7B) | 28.44 | 29.45 | 31.28 | 32.54 | 28.65 | 30.07 |
| [Moli-Pro](https://github.com/emotibot-inc/Moli-Pro) | 30.2 | 37.5 | 36.22 | 39.71 | 33.55 | 35.44 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.38 | 61.61 | 60.44 | 59.26 | 56.62 | 55.82 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 42.55 | 50.98 | 50.99 | 50.80 | 48.37 | 48.80 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 35.25 | 48.07 | 47.88 | 46.61 | 44.14 | 44.43 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.35 | 39.22 | 39.65 | 38.62 | 37.70 | 37.48 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 34.96 | 35.45 | 36.31 | 42.14 | 37.89 | 37.16 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 27.12 | 33.18 | 34.87 | 35.10 | 32.97 | 32.63 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 27.23 | 30.41 | 28.84 | 32.56 | 28.68 | 29.57 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.49 | 27.05 | 27.42 | 29.21 | 28.05 | 27.26 |
| Random | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 |
| Model 0-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 63.16 | 69.19 | 70.26 | 73.16 | 63.47 | 68.9 |
| [ChatGPT](https://openai.com/chatgpt) | 44.8 | 53.61 | 54.22 | 59.95 | 49.74 | 53.22 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 33.03 | 45.74 | 45.74 | 46.25 | 41.58 | 42.8 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 31.11 | 41.3 | 40.87 | 40.61 | 36.05 | 38.5 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 31.09 | 34.45 | 36.05 | 37.94 | 32.89 | 34.88 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 26.46 | 29.36 | 31.81 | 31.55 | 29.17 | 30.06 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 42.51 | 48.91 | 48.85 | 50.25 | 47.57 | 47.62 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.37 | 60.97 | 59.71 | 56.35 | 54.81 | 54.84 |
| [Moli-7B](https://github.com/emotibot-inc/Moli-7B) | 28.48 | 32.53 | 33.45 | 35.8 | 31.09 | 32.27 |
| [Moli-Pro](https://github.com/emotibot-inc/Moli-Pro) | 30.46 | 36.05 | 37.07 | 38.72 | 32.62 | 34.98 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.04 | 60.49 | 59.55 | 56.6 | 55.72 | 54.63 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 41.28 | 52.85 | 53.37 | 52.24 | 50.58 | 49.95 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 32.79 | 44.43 | 46.78 | 44.79 | 43.11 | 42.33 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.22 | 42.91 | 44.81 | 42.6 | 41.93 | 40.79 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 33.72 | 36.53 | 38.07 | 46.94 | 38.32 | 38.51 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 26.76 | 26.57 | 27.42 | 28.33 | 26.73 | 27.34 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 25.68 | 26.35 | 27.21 | 27.92 | 26.7 | 26.88 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.57 | 25.01 | 26.33 | 25.94 | 25.81 | 25.8 |
| Random | 25 | 25 | 25 | 25 | 25 | 25 |
# **Inference and Chat**
You can register for and log in to [Emotibrain](https://brain.emotibot.com/?source=moli7b_huggingface), the large-model product released by Emotibot, and select **CoPilot** (**KKBot**) for online testing; it is available immediately after registration.

# **Model Training**
You can register for and log in to [Emotibrain](https://brain.emotibot.com/?source=moli7b_huggingface) and select Fine-tune for **zero-code fine-tuning**; it is available immediately after registration.
For the detailed training workflow, see this document: [Emotibrain Quick Start](https://brain.emotibot.com/supports/model-factory/dash-into.html) (about 5 minutes).


# **More Information**
To learn more about the large-model training platform, please visit the [Emotibrain website](https://brain.emotibot.com/?source=moli7b_huggingface).
|
foxxy-hm/wav2vec2-base-finetune-vi
|
foxxy-hm
| 2023-09-01T03:59:04Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"vi",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-29T06:55:37Z |
---
language: vi
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
widget:
- example_title: SOICT 2023 - SLU public test 1
src: https://huggingface.co/foxxy-hm/wav2vec2-base-finetune-vi/raw/main/audio-test/055R7BruAa333g9teFfamQH.wav
- example_title: SOICT 2023 - SLU public test 2
src: https://huggingface.co/foxxy-hm/wav2vec2-base-finetune-vi/raw/main/audio-test/0BLHhoJexE8THB8BrsZxWbh.wav
- example_title: SOICT 2023 - SLU public test 3
src: https://huggingface.co/foxxy-hm/wav2vec2-base-finetune-vi/raw/main/audio-test/1ArUTGWJQ9YALH2xaNhU6GV.wav
---
|
germla/satoken
|
germla
| 2023-09-01T03:55:12Z | 3 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"en",
"fr",
"ko",
"zh",
"ja",
"pt",
"ru",
"dataset:imdb",
"doi:10.57967/hf/0905",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-classification
| 2023-07-19T16:09:11Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
- accuracy
- f1
- precision
- recall
language:
- en
- fr
- ko
- zh
- ja
- pt
- ru
datasets:
- imdb
model-index:
- name: germla/satoken
results:
- task:
type: text-classification
name: sentiment-analysis
dataset:
type: imdb
name: imdb
split: test
metrics:
- type: accuracy
value: 73.976
name: Accuracy
- type: f1
value: 73.1667079105832
name: F1
- type: precision
value: 75.51506895964584
name: Precision
- type: recall
value: 70.96
name: Recall
- task:
type: text-classification
name: sentiment-analysis
dataset:
type: sepidmnorozy/Russian_sentiment
name: sepidmnorozy/Russian_sentiment
split: train
metrics:
- type: accuracy
value: 75.66371681415929
name: Accuracy
- type: f1
value: 83.64218714253031
name: F1
- type: precision
value: 75.25730753396459
name: Precision
- type: recall
value: 94.129763130793
name: Recall
---
# Satoken
This is a [SetFit model](https://github.com/huggingface/setfit) trained on multilingual datasets (mentioned below) for Sentiment classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
It is utilized by [Germla](https://github.com/germla) for its feedback analysis tool (specifically the sentiment analysis feature).
For other language-specific models, check [here](https://github.com/germla/satoken#available-models)
# Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("germla/satoken")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
# Training Details
## Training Data
- [IMDB](https://huggingface.co/datasets/imdb)
- [RuReviews](https://github.com/sismetanin/rureviews)
- [chABSA](https://github.com/chakki-works/chABSA-dataset)
- [Glyph](https://github.com/zhangxiangxiao/glyph)
- [nsmc](https://github.com/e9t/nsmc)
- [Allocine](https://huggingface.co/datasets/allocine)
- [Portuguese Tweets for Sentiment Analysis](https://www.kaggle.com/datasets/augustop/portuguese-tweets-for-sentiment-analysis)
## Training Procedure
We made sure the dataset was balanced.
The model was trained on only 35% (50% for Chinese) of the train split of each dataset.
### Preprocessing
- Basic cleaning (removal of duplicates, links, mentions, hashtags, etc.)
- Removal of stopwords using [nltk](https://www.nltk.org/) (a rough sketch of these steps follows this list)
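The cleaning steps above could look roughly like this sketch; the regular expressions and the English stopword list are illustrative assumptions, since the exact rules (and per-language stopword handling) are not documented.
```python
# Rough preprocessing sketch (illustrative; the exact cleaning rules are not documented).
import re
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
STOPWORDS = set(stopwords.words("english"))     # per-language lists would be swapped in as needed

def clean(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text)   # links
    text = re.sub(r"[@#]\w+", " ", text)        # mentions and hashtags
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace
    return " ".join(w for w in text.split() if w.lower() not in STOPWORDS)

texts = ["Loved it! https://example.com #great @someone",
         "Loved it! https://example.com #great @someone"]
deduped = list(dict.fromkeys(texts))            # drop exact duplicates, keep order
print([clean(t) for t in deduped])
```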
### Speeds, Sizes, Times
Training took 6 hours on an NVIDIA T4 GPU.
## Evaluation
### Testing Data, Factors & Metrics
- [IMDB test split](https://huggingface.co/datasets/imdb)
# Environmental Impact
- Hardware Type: NVIDIA T4 GPU
- Hours used: 6
- Cloud Provider: Amazon Web Services
- Compute Region: ap-south-1 (Mumbai)
- Carbon Emitted: 0.39 [kg co2 eq.](https://mlco2.github.io/impact/#co2eq)
|
Serotina/ppo-SnowballTarget1
|
Serotina
| 2023-09-01T03:31:41Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-01T03:31:32Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Serotina/ppo-SnowballTarget1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
batman555/layer_1_classifier_google
|
batman555
| 2023-09-01T03:21:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T03:42:56Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: layer_1_classifier_google
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layer_1_classifier_google
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4568
- Accuracy: 0.9022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.5103 | 1.0 |
| No log | 2.0 | 4 | 0.4784 | 1.0 |
| No log | 3.0 | 6 | 0.4533 | 1.0 |
| No log | 4.0 | 8 | 0.4340 | 1.0 |
| No log | 5.0 | 10 | 0.4168 | 1.0 |
| No log | 6.0 | 12 | 0.4040 | 1.0 |
| No log | 7.0 | 14 | 0.3956 | 1.0 |
| No log | 8.0 | 16 | 0.3921 | 1.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mingkom/distilbert-base-uncased-finetuned-emotion
|
mingkom
| 2023-09-01T03:09:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T02:45:20Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.925605036699702
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2157
- Accuracy: 0.9255
- F1: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of the corresponding `TrainingArguments` setup follows this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
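These settings map roughly onto the 🤗 Transformers `Trainer` API as in the sketch below; the dataset/model loading and the metric function are assumptions reconstructed from the card metadata, not the actual training script.
```python
# Hedged reconstruction of the fine-tuning setup implied by the hyperparameters above
# (the actual training script is not part of this card).
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=6)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": (preds == labels).mean()}

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    evaluation_strategy="epoch",   # matches the per-epoch validation results reported below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,           # enables dynamic padding via the default data collator
    compute_metrics=compute_metrics,
)
trainer.train()
```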
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8152 | 1.0 | 250 | 0.3179 | 0.908 | 0.9057 |
| 0.2525 | 2.0 | 500 | 0.2157 | 0.9255 | 0.9256 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230901065829
|
dkqjrm
| 2023-09-01T03:01:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T21:58:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230901065829'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230901065829
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1567
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.1589 | 0.5 |
| 0.1842 | 2.0 | 680 | 0.1722 | 0.5 |
| 0.1701 | 3.0 | 1020 | 0.1582 | 0.5 |
| 0.1701 | 4.0 | 1360 | 0.1563 | 0.5 |
| 0.1657 | 5.0 | 1700 | 0.1575 | 0.5 |
| 0.163 | 6.0 | 2040 | 0.1586 | 0.5 |
| 0.163 | 7.0 | 2380 | 0.1568 | 0.5 |
| 0.1627 | 8.0 | 2720 | 0.1596 | 0.5 |
| 0.1588 | 9.0 | 3060 | 0.1578 | 0.5 |
| 0.1588 | 10.0 | 3400 | 0.1597 | 0.5 |
| 0.1598 | 11.0 | 3740 | 0.1567 | 0.5 |
| 0.159 | 12.0 | 4080 | 0.1583 | 0.5 |
| 0.159 | 13.0 | 4420 | 0.1567 | 0.5 |
| 0.157 | 14.0 | 4760 | 0.1640 | 0.5 |
| 0.1588 | 15.0 | 5100 | 0.1564 | 0.5 |
| 0.1588 | 16.0 | 5440 | 0.1555 | 0.5 |
| 0.1595 | 17.0 | 5780 | 0.1556 | 0.5 |
| 0.1566 | 18.0 | 6120 | 0.1562 | 0.5 |
| 0.1566 | 19.0 | 6460 | 0.1562 | 0.5 |
| 0.1578 | 20.0 | 6800 | 0.1559 | 0.5 |
| 0.1573 | 21.0 | 7140 | 0.1605 | 0.5 |
| 0.1573 | 22.0 | 7480 | 0.1802 | 0.5 |
| 0.1629 | 23.0 | 7820 | 0.1601 | 0.5 |
| 0.1669 | 24.0 | 8160 | 0.1598 | 0.5 |
| 0.1678 | 25.0 | 8500 | 0.1600 | 0.5 |
| 0.1678 | 26.0 | 8840 | 0.1604 | 0.5 |
| 0.1659 | 27.0 | 9180 | 0.1600 | 0.5 |
| 0.1653 | 28.0 | 9520 | 0.1565 | 0.5 |
| 0.1653 | 29.0 | 9860 | 0.1561 | 0.5 |
| 0.1593 | 30.0 | 10200 | 0.1555 | 0.5 |
| 0.1573 | 31.0 | 10540 | 0.1601 | 0.5 |
| 0.1573 | 32.0 | 10880 | 0.1568 | 0.5 |
| 0.157 | 33.0 | 11220 | 0.1621 | 0.5 |
| 0.1569 | 34.0 | 11560 | 0.1580 | 0.5 |
| 0.1569 | 35.0 | 11900 | 0.1565 | 0.5 |
| 0.1575 | 36.0 | 12240 | 0.1565 | 0.5 |
| 0.1566 | 37.0 | 12580 | 0.1592 | 0.5 |
| 0.1566 | 38.0 | 12920 | 0.1584 | 0.5 |
| 0.1557 | 39.0 | 13260 | 0.1572 | 0.5 |
| 0.156 | 40.0 | 13600 | 0.1580 | 0.5 |
| 0.156 | 41.0 | 13940 | 0.1587 | 0.5 |
| 0.1566 | 42.0 | 14280 | 0.1573 | 0.5 |
| 0.1553 | 43.0 | 14620 | 0.1565 | 0.5 |
| 0.1553 | 44.0 | 14960 | 0.1621 | 0.5 |
| 0.1567 | 45.0 | 15300 | 0.1576 | 0.5 |
| 0.1557 | 46.0 | 15640 | 0.1574 | 0.5 |
| 0.1557 | 47.0 | 15980 | 0.1558 | 0.5 |
| 0.1571 | 48.0 | 16320 | 0.1557 | 0.5 |
| 0.1558 | 49.0 | 16660 | 0.1556 | 0.5 |
| 0.1559 | 50.0 | 17000 | 0.1569 | 0.5 |
| 0.1559 | 51.0 | 17340 | 0.1558 | 0.5 |
| 0.1549 | 52.0 | 17680 | 0.1561 | 0.5 |
| 0.1566 | 53.0 | 18020 | 0.1557 | 0.5 |
| 0.1566 | 54.0 | 18360 | 0.1563 | 0.5 |
| 0.1557 | 55.0 | 18700 | 0.1562 | 0.5 |
| 0.1557 | 56.0 | 19040 | 0.1568 | 0.5 |
| 0.1557 | 57.0 | 19380 | 0.1558 | 0.5 |
| 0.1553 | 58.0 | 19720 | 0.1557 | 0.5 |
| 0.1561 | 59.0 | 20060 | 0.1551 | 0.5 |
| 0.1561 | 60.0 | 20400 | 0.1575 | 0.5 |
| 0.1551 | 61.0 | 20740 | 0.1570 | 0.5 |
| 0.155 | 62.0 | 21080 | 0.1559 | 0.5 |
| 0.155 | 63.0 | 21420 | 0.1558 | 0.5 |
| 0.1544 | 64.0 | 21760 | 0.1577 | 0.5 |
| 0.1566 | 65.0 | 22100 | 0.1565 | 0.5 |
| 0.1566 | 66.0 | 22440 | 0.1554 | 0.5 |
| 0.153 | 67.0 | 22780 | 0.1561 | 0.5 |
| 0.1565 | 68.0 | 23120 | 0.1574 | 0.5 |
| 0.1565 | 69.0 | 23460 | 0.1574 | 0.5 |
| 0.1552 | 70.0 | 23800 | 0.1571 | 0.5 |
| 0.1548 | 71.0 | 24140 | 0.1572 | 0.5 |
| 0.1548 | 72.0 | 24480 | 0.1563 | 0.5 |
| 0.1546 | 73.0 | 24820 | 0.1563 | 0.5 |
| 0.1547 | 74.0 | 25160 | 0.1570 | 0.5 |
| 0.1542 | 75.0 | 25500 | 0.1563 | 0.5 |
| 0.1542 | 76.0 | 25840 | 0.1571 | 0.5 |
| 0.155 | 77.0 | 26180 | 0.1571 | 0.5 |
| 0.1545 | 78.0 | 26520 | 0.1561 | 0.5 |
| 0.1545 | 79.0 | 26860 | 0.1570 | 0.5 |
| 0.1544 | 80.0 | 27200 | 0.1567 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|