| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5-139 |
| author | string | length 2-42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-28 00:48:09 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (classes) | 534 values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (classes) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-28 00:47:12 |
| card | string | length 11 to 1.01M |
facebook/mms-tts-cya
facebook
2023-09-01T10:12:30Z
107
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:11:59Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Chatino, Nopala Text-to-Speech This repository contains the **Chatino, Nopala (cya)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-cya") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-cya") text = "some example text in the Chatino, Nopala language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
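The card above notes that the stochastic duration predictor makes generation non-deterministic and that a fixed seed is needed to reproduce a waveform. A minimal sketch of how that could look, reusing the `model` and `inputs` objects from the usage snippet above (the seed value itself is arbitrary):

```python
import torch
from transformers import set_seed

set_seed(555)  # pins the Python, NumPy and torch RNGs to a fixed value
with torch.no_grad():
    waveform_a = model(**inputs).waveform

set_seed(555)  # re-seed before the second call
with torch.no_grad():
    waveform_b = model(**inputs).waveform

# with the same seed, the two runs produce the same waveform
assert torch.allclose(waveform_a, waveform_b)
```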
facebook/mms-tts-bmv
facebook
2023-09-01T10:12:00Z
105
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:11:35Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Bum Text-to-Speech This repository contains the **Bum (bmv)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-bmv") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-bmv") text = "some example text in the Bum language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-mai
facebook
2023-09-01T10:11:34Z
197
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:11:09Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Maithili Text-to-Speech This repository contains the **Maithili (mai)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-mai") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-mai") text = "some example text in the Maithili language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
ProomptEngineer/pe-caricature-style
ProomptEngineer
2023-09-01T10:11:30Z
9
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2023-09-01T10:11:27Z
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: PECaricature widget: - text: PECaricature --- # PE Caricature [Style] ![Image 0](1888525.jpeg) ## If you want to donate: https://ko-fi.com/proomptengineer ## The model creates a cartoonish, realistic caricature. ## Recommended weights 0.8-1 ## It sometimes creates a random person for no apparent reason. ## Image examples for the model: ![Image 1](1888514.jpeg) ![Image 2](1888516.jpeg) ![Image 3](1888517.jpeg) ![Image 4](1888515.jpeg) ![Image 5](1888518.jpeg) ![Image 6](1888520.jpeg) ![Image 7](1888523.jpeg) ![Image 8](1888519.jpeg) ![Image 9](1888521.jpeg)
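As a rough illustration of how this LoRA could be applied with `diffusers`, here is a minimal sketch. The prompt text is invented, whether the default weight filename in the repository resolves via `load_lora_weights` is an assumption, and the 0.9 scale is simply a value inside the recommended 0.8-1 range from the card:

```python
import torch
from diffusers import DiffusionPipeline

# SDXL base model named in the card's front matter
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# attach the caricature LoRA from this repository (default weight file assumed)
pipe.load_lora_weights("ProomptEngineer/pe-caricature-style")

# "PECaricature" is the trigger word; scale 0.9 sits in the recommended 0.8-1 range
image = pipe(
    "PECaricature, portrait of a smiling chef in a kitchen",
    cross_attention_kwargs={"scale": 0.9},
).images[0]
image.save("caricature.png")
```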
facebook/mms-tts-acd
facebook
2023-09-01T10:11:12Z
107
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:10:47Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Gikyode Text-to-Speech This repository contains the **Gikyode (acd)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-acd") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-acd") text = "some example text in the Gikyode language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
ProomptEngineer/shocked-face-meme-one-piece
ProomptEngineer
2023-09-01T10:11:03Z
3
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2023-09-01T10:10:58Z
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: PEOPShockedFace widget: - text: PEOPShockedFace --- # Shocked Face [Meme] [One Piece] ![Image 0](2266917.jpeg) Gives your characters the funny shocked face from One Piece. Weights 0.-1. ## If you want to donate: https://ko-fi.com/proomptengineer ## Image examples for the model: ![Image 1](2266915.jpeg) ![Image 2](2266914.jpeg) ![Image 3](2266929.jpeg) ![Image 4](2266916.jpeg) ![Image 5](2266923.jpeg) ![Image 6](2266925.jpeg) ![Image 7](2266930.jpeg) ![Image 8](2266922.jpeg) ![Image 9](2266931.jpeg)
facebook/mms-tts-bmu
facebook
2023-09-01T10:10:56Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:10:32Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Somba-Siawari Text-to-Speech This repository contains the **Somba-Siawari (bmu)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-bmu") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-bmu") text = "some example text in the Somba-Siawari language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-hak
facebook
2023-09-01T10:10:39Z
115
1
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:10:21Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Chinese, Hakka Text-to-Speech This repository contains the **Chinese, Hakka (hak)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-hak") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-hak") text = "some example text in the Chinese, Hakka language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
ProomptEngineer/pe-balloon-diffusion-style
ProomptEngineer
2023-09-01T10:10:19Z
74
12
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2023-09-01T10:10:15Z
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: PEBalloonStyle widget: - text: PEBalloonStyle --- # PE Balloon Diffusion [Style] ![Image 0](2095180.jpeg) ## Ever wondered what things would look like if they were made of balloons? Then try this one! ## Weights 0.8-1 ## If you want to donate: https://ko-fi.com/proomptengineer ## Add "Ballon Sculpture" to the prompt if the effect is not strong enough ## Image examples for the model: ![Image 1](2095175.jpeg) ![Image 2](2095174.jpeg) ![Image 3](2095173.jpeg) ![Image 4](2095176.jpeg) ![Image 5](2095177.jpeg) ![Image 6](2095178.jpeg) ![Image 7](2095182.jpeg) ![Image 8](2095183.jpeg) ![Image 9](2095181.jpeg)
facebook/mms-tts-aca
facebook
2023-09-01T10:10:14Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:09:58Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Achagua Text-to-Speech This repository contains the **Achagua (aca)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-aca") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-aca") text = "some example text in the Achagua language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
facebook/mms-tts-abp
facebook
2023-09-01T10:09:41Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T09:55:51Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Ayta, Abellen Text-to-Speech This repository contains the **Ayta, Abellen (abp)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-abp") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-abp") text = "some example text in the Ayta, Abellen language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
rjindal/rohit-bloom-finetuned_SMALL
rjindal
2023-09-01T10:09:07Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-01T10:09:06Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
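The flag list in the card above maps directly onto a `transformers` `BitsAndBytesConfig`. A minimal sketch of the equivalent object, setting only the 8-bit-relevant flags listed (the fp4/double-quant fields shown in the card only take effect in 4-bit mode):

```python
from transformers import BitsAndBytesConfig

# 8-bit quantization settings as listed in the training procedure above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```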
EmirhanExecute/Pixelcopter-t2
EmirhanExecute
2023-09-01T09:59:37Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-01T09:59:34Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-t2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 16.10 +/- 19.55 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ldos/text_shortening_model_v2
ldos
2023-09-01T09:58:28Z
103
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-01T08:23:23Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: text_shortening_model_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_shortening_model_v2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4449 - Rouge1: 0.581 - Rouge2: 0.3578 - Rougel: 0.5324 - Rougelsum: 0.5317 - Bert precision: 0.8885 - Bert recall: 0.8981 - Average word count: 11.5929 - Max word count: 17 - Min word count: 3 - Average token count: 16.7071 ## Model description No "summarize" prefix ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:| | 1.7498 | 1.0 | 8 | 1.9424 | 0.4725 | 0.2644 | 0.4207 | 0.4216 | 0.8343 | 0.8502 | 11.7357 | 18 | 0 | 17.5143 | | 1.5236 | 2.0 | 16 | 1.7731 | 0.5185 | 0.2961 | 0.4661 | 0.4665 | 0.8566 | 0.8646 | 11.05 | 18 | 0 | 16.6143 | | 1.4381 | 3.0 | 24 | 1.6880 | 0.5459 | 0.3212 | 0.4947 | 0.4942 | 0.8773 | 0.8862 | 11.5857 | 18 | 3 | 16.8143 | | 1.3895 | 4.0 | 32 | 1.6405 | 0.5537 | 0.3275 | 0.506 | 0.5061 | 0.8815 | 0.8894 | 11.7 | 18 | 3 | 16.6571 | | 1.353 | 5.0 | 40 | 1.5941 | 0.5579 | 0.3347 | 0.5124 | 0.5119 | 0.8839 | 0.8933 | 11.7643 | 18 | 4 | 16.7429 | | 1.3026 | 6.0 | 48 | 1.5568 | 0.5585 | 0.3379 | 0.5132 | 0.5129 | 0.8823 | 0.8945 | 11.9714 | 18 | 4 | 16.95 | | 1.2624 | 7.0 | 56 | 1.5359 | 0.5696 | 0.3466 | 0.5202 | 0.5195 | 0.8837 | 0.897 | 12.0143 | 18 | 5 | 17.1143 | | 1.2481 | 8.0 | 64 | 1.5186 | 0.5736 | 0.3517 | 0.5241 | 0.523 | 0.8849 | 0.898 | 12.0214 | 17 | 6 | 17.1714 | | 1.2089 | 9.0 | 72 | 1.5055 | 0.5732 | 0.3499 | 0.5256 | 0.5246 | 0.8846 | 0.8979 | 12.0357 | 17 | 5 | 17.2214 | | 1.1845 | 10.0 | 80 | 1.4898 | 0.5761 | 0.3548 | 0.5284 | 0.5276 | 0.886 | 0.8977 | 11.9 | 17 | 5 | 17.0786 | | 1.1882 | 11.0 | 88 | 1.4787 | 0.5768 | 0.3573 | 0.5291 | 0.5288 | 0.8862 | 0.8986 | 11.8071 | 17 | 5 | 17.05 | | 1.1649 | 12.0 | 96 | 1.4720 | 0.5784 | 0.3592 | 0.5319 | 0.531 | 0.8868 | 0.8988 | 11.7786 | 17 | 5 | 17.0 | | 1.1643 | 13.0 | 104 | 1.4637 | 0.5785 | 0.3592 | 0.5314 | 0.5308 | 0.8875 | 0.8977 | 11.6571 | 17 | 3 | 16.8214 | | 1.129 | 14.0 | 112 | 1.4565 | 0.5794 | 0.3585 | 0.5324 | 0.5315 | 0.8883 | 0.8984 | 11.6571 | 17 | 3 | 16.8 | | 1.136 | 15.0 | 120 | 1.4516 | 0.5826 | 0.3598 | 0.537 | 0.5363 | 0.8898 | 0.8995 | 11.5857 | 17 | 3 | 16.6786 | | 1.1191 | 16.0 | 128 | 1.4491 | 0.5828 | 0.3579 | 0.5357 | 0.535 | 0.8895 | 0.899 | 11.5929 | 17 | 3 | 16.6857 | | 1.1192 | 17.0 | 136 | 1.4471 | 0.5794 | 0.355 | 0.5312 | 0.5307 | 0.8883 | 0.898 | 11.6143 | 17 | 3 | 16.7286 | | 1.1085 | 
18.0 | 144 | 1.4456 | 0.5808 | 0.3557 | 0.5315 | 0.5307 | 0.8883 | 0.8982 | 11.6286 | 17 | 3 | 16.7429 | | 1.1063 | 19.0 | 152 | 1.4451 | 0.5808 | 0.3571 | 0.5321 | 0.5314 | 0.8884 | 0.8981 | 11.6 | 17 | 3 | 16.7143 | | 1.0965 | 20.0 | 160 | 1.4449 | 0.581 | 0.3578 | 0.5324 | 0.5317 | 0.8885 | 0.8981 | 11.5929 | 17 | 3 | 16.7071 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
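Since the model description above states that no "summarize" prefix is used, a plausible inference sketch is simply a text2text pipeline over the raw input (the example sentence is invented):

```python
from transformers import pipeline

# text2text pipeline; per the card, the input is passed without a "summarize:" prefix
shortener = pipeline("text2text-generation", model="ldos/text_shortening_model_v2")

text = "The meeting originally planned for Monday morning has been moved to Wednesday afternoon."
print(shortener(text, max_new_tokens=32)[0]["generated_text"])
```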
FredericProtat/poca-SoccerTwos
FredericProtat
2023-09-01T09:49:22Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-09-01T09:48:48Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: FredericProtat/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
larabe/tester
larabe
2023-09-01T09:39:20Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-08-31T23:02:39Z
--- license: mit tags: - generated_from_trainer model-index: - name: tester results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tester This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.0 - Tokenizers 0.13.3
trieudemo11/llama_7b_attrb_cate_b6_l320_low_8
trieudemo11
2023-09-01T09:38:59Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-01T09:38:44Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
hetpatel-7/ppo-LunarLander-v2
hetpatel-7
2023-09-01T09:24:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-01T09:24:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 277.20 +/- 16.97 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
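Pending the author's own snippet (the block in the card is still the "TODO" template), a hedged sketch of loading and evaluating the checkpoint with `huggingface_sb3` might look as follows; the `.zip` filename inside the repo is an assumption based on the usual course naming:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# download the checkpoint; the filename is an assumed convention, not confirmed by the card
checkpoint = load_from_hub(repo_id="hetpatel-7/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# evaluate over a few episodes, mirroring the mean_reward metric reported in the card
env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```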
PawanKrGunjan/whisper-tiny-finetuned-gtzan
PawanKrGunjan
2023-09-01T09:20:27Z
108
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-09-01T02:52:07Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-tiny-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.53 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-gtzan This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 1.3365 - Accuracy: 0.53 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.3484 | 1.0 | 113 | 1.8521 | 0.26 | | 1.9419 | 2.0 | 226 | 1.9107 | 0.3 | | 1.8627 | 3.0 | 339 | 1.5300 | 0.49 | | 1.8178 | 4.0 | 452 | 1.5152 | 0.41 | | 1.5341 | 5.0 | 565 | 1.3365 | 0.53 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
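For inference with the fine-tuned genre classifier described above, a minimal sketch using the `transformers` audio-classification pipeline (the audio file path is a placeholder):

```python
from transformers import pipeline

# genre classifier fine-tuned on GTZAN, as described in the card above
classifier = pipeline("audio-classification", model="PawanKrGunjan/whisper-tiny-finetuned-gtzan")

predictions = classifier("some_song.wav")  # placeholder path to a local audio clip
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```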
jbilcke-hf/sdxl-botw
jbilcke-hf
2023-09-01T09:19:48Z
7
6
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-31T11:23:53Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> inference: false --- # sdxl-botw LoRA by Julian BILCKE (HF: [jbilcke-hf](https://huggingface.co/jbilcke-hf), Replicate: [jbilcke](https://replicate.com/jbilcke)) ### An SDXL LoRA inspired by Breath of the Wild ![lora_image](https://tjzk.replicate.delivery/models_models_cover_image/aea9c0c4-b3d6-425b-9e96-9a615220fa30/link-llama.jpeg) ## Inference with Replicate API Grab your Replicate token [here](https://replicate.com/account) ```bash pip install replicate export REPLICATE_API_TOKEN=r8_************************************* ``` ```py import replicate output = replicate.run( "sdxl-botw@sha256:bf412da351d41547f117391eff2824ab0301b6ba1c6c010c4b5f766a492d62fc", input={"prompt": "Link riding a llama, in the style of TOK"} ) print(output) ``` You may also run inference via the API with Node.js or curl, and locally with COG and Docker; [check out the Replicate API page for this model](https://replicate.com/jbilcke/sdxl-botw/api). ## Inference with 🧨 diffusers Replicate SDXL LoRAs are trained with Pivotal Tuning, which combines training a concept via Dreambooth LoRA with training a new token with Textual Inversion. As `diffusers` doesn't yet support textual inversion for SDXL, we will use the cog-sdxl `TokenEmbeddingsHandler` class. The trigger tokens for your prompt will be `<s0><s1>`. ```shell pip install diffusers transformers accelerate safetensors huggingface_hub git clone https://github.com/replicate/cog-sdxl cog_sdxl ``` ```py import torch from huggingface_hub import hf_hub_download from diffusers import DiffusionPipeline from cog_sdxl.dataset_and_utils import TokenEmbeddingsHandler from diffusers.models import AutoencoderKL pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", ).to("cuda") pipe.load_lora_weights("jbilcke-hf/sdxl-botw", weight_name="lora.safetensors") text_encoders = [pipe.text_encoder, pipe.text_encoder_2] tokenizers = [pipe.tokenizer, pipe.tokenizer_2] embedding_path = hf_hub_download(repo_id="jbilcke-hf/sdxl-botw", filename="embeddings.pti", repo_type="model") embhandler = TokenEmbeddingsHandler(text_encoders, tokenizers) embhandler.load_embeddings(embedding_path) prompt="Link riding a llama, in the style of <s0><s1>" images = pipe( prompt, cross_attention_kwargs={"scale": 0.8}, ).images # your output image images[0] ```
nightdude/config_80091
nightdude
2023-09-01T09:17:07Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-01T09:16:34Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
chrisluo5311/falcon-7b-sharded-bf16-english-quote-qlora
chrisluo5311
2023-09-01T09:09:54Z
6
0
peft
[ "peft", "region:us" ]
null
2023-08-26T04:03:01Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
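A hedged sketch of using this adapter: rebuild the 4-bit config listed above, load a Falcon-7B base, and attach the LoRA with PEFT. The base checkpoint name is an assumption inferred from the repository name; the card does not state it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit nf4 settings as listed in the training procedure above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# assumed base model (inferred from the repo name), not confirmed by the card
base_id = "ybelkada/falcon-7b-sharded-bf16"
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# attach the QLoRA adapter from this repository
model = PeftModel.from_pretrained(base, "chrisluo5311/falcon-7b-sharded-bf16-english-quote-qlora")
```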
bongo2112/sdxl-db-richtilebati
bongo2112
2023-09-01T09:09:19Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-09-01T09:09:18Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of richtilebati roof sheet tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
Linly-AI/Chinese-LLaMA-2-7B-hf
Linly-AI
2023-09-01T09:04:51Z
1,548
31
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-21T13:09:30Z
Incrementally trained on a mixed Chinese-English corpus, with the vocabulary extended to cover Chinese characters. Training details and benchmark results: https://github.com/CVI-SZU/Linly ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("Linly-AI/Chinese-LLaMA-2-7B-hf", device_map="cuda:0", torch_dtype=torch.float16, trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("Linly-AI/Chinese-LLaMA-2-7B-hf", use_fast=False, trust_remote_code=True) prompt = "北京有什么好玩的地方?" prompt = f"### Instruction:{prompt.strip()}  ### Response:" inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0") generate_ids = model.generate(inputs.input_ids, do_sample=True, max_new_tokens=2048, top_k=10, top_p=0.85, temperature=1, repetition_penalty=1.15, eos_token_id=2, bos_token_id=1, pad_token_id=0) response = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] response = response[len(prompt):] ```
Intel/whisper-large-v2-int8-dynamic-inc
Intel
2023-09-01T08:57:05Z
5
1
transformers
[ "transformers", "onnx", "whisper", "automatic-speech-recognition", "int8", "ONNX", "PostTrainingDynamic", "Intel® Neural Compressor", "neural-compressor", "dataset:librispeech_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-01T08:46:56Z
--- license: apache-2.0 datasets: - librispeech_asr metrics: - wer pipeline_tag: automatic-speech-recognition tags: - automatic-speech-recognition - int8 - ONNX - PostTrainingDynamic - Intel® Neural Compressor - neural-compressor library_name: transformers --- ## Model Details: INT8 Whisper large v2 Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning. This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor), and the fp32 model can be exported with the command below: ```shell optimum-cli export onnx --model openai/whisper-large-v2 whisper-large-v2-with-past/ --task automatic-speech-recognition-with-past --opset 13 ``` | Model Detail | Description | | ----------- | ----------- | | Model Authors - Company | Intel | | Date | September 1, 2023 | | Version | 1 | | Type | Speech Recognition | | Paper or Other Resources | - | | License | Apache 2.0 | | Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-large-v2-int8-dynamic/discussions)| | Intended Use | Description | | ----------- | ----------- | | Primary intended uses | You can use the raw model for automatic speech recognition inference | | Primary intended users | Anyone doing automatic speech recognition inference | | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.| ### How to use Download the model by cloning the repository: ```shell git clone https://huggingface.co/Intel/whisper-large-v2-int8-dynamic ``` Evaluate the model with the code below: ```python import os from evaluate import load from datasets import load_dataset from transformers import WhisperProcessor, PretrainedConfig from optimum.onnxruntime import ORTModelForSpeechSeq2Seq model_name = 'openai/whisper-large-v2' model_path = 'whisper-large-v2-int8-dynamic' processor = WhisperProcessor.from_pretrained(model_name) model_config = PretrainedConfig.from_pretrained(model_name) wer = load("wer") librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") predictions = [] references = [] sessions = ORTModelForSpeechSeq2Seq.load_model( os.path.join(model_path, 'encoder_model.onnx'), os.path.join(model_path, 'decoder_model.onnx'), os.path.join(model_path, 'decoder_with_past_model.onnx')) model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2]) for batch in librispeech_test_clean: audio = batch["audio"] input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features reference = processor.tokenizer._normalize(batch['text']) references.append(reference) predicted_ids = model.generate(input_features)[0] transcription = processor.decode(predicted_ids) prediction = processor.tokenizer._normalize(transcription) predictions.append(prediction) wer_result = wer.compute(references=references, predictions=predictions) print(f"Result wer: {wer_result * 100}") accuracy = 1 - wer_result print("Accuracy: %.5f" % accuracy) ``` ## Metrics (Model Performance): | Model | Model Size (GB) | wer | |---|:---:|:---:| | FP32 |13|2.87| | INT8 |2.4|2.82|
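For a quick sanity check without running the full LibriSpeech evaluation, a minimal transcription sketch is shown below. It reuses the ONNX session layout from the evaluation script above (encoder_model.onnx, decoder_model.onnx, decoder_with_past_model.onnx); the small dummy split used here is only for illustration.

```python
import os
from datasets import load_dataset
from transformers import WhisperProcessor, PretrainedConfig
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

model_name = 'openai/whisper-large-v2'
model_path = 'whisper-large-v2-int8-dynamic'  # local clone of this repository

processor = WhisperProcessor.from_pretrained(model_name)
model_config = PretrainedConfig.from_pretrained(model_name)

# Wrap the int8 ONNX sessions in an ORT seq2seq model, as in the evaluation script above
sessions = ORTModelForSpeechSeq2Seq.load_model(
    os.path.join(model_path, 'encoder_model.onnx'),
    os.path.join(model_path, 'decoder_model.onnx'),
    os.path.join(model_path, 'decoder_with_past_model.onnx'))
model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2])

# Transcribe a single sample from a small test split
sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"],
                           return_tensors="pt").input_features
predicted_ids = model.generate(input_features)[0]
print(processor.decode(predicted_ids, skip_special_tokens=True))
```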
vikasvmane/myfirstDreamboothModel
vikasvmane
2023-09-01T08:51:13Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-09-01T03:09:05Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of VM tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
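Since the autogenerated card is minimal, a hedged usage sketch follows. It assumes the AutoTrain DreamBooth run exported LoRA weights on top of stabilityai/stable-diffusion-xl-base-1.0 (the usual AutoTrain output); if the repository instead holds a full pipeline, load it directly with `DiffusionPipeline.from_pretrained("vikasvmane/myfirstDreamboothModel")`. The prompt builds on the declared instance prompt "photo of VM".

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model declared in `base_model` above
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Assumption: the DreamBooth run saved LoRA weights into this repository
pipe.load_lora_weights("vikasvmane/myfirstDreamboothModel")

# Build on the instance prompt from the card ("photo of VM")
image = pipe("photo of VM standing on a beach at sunset", num_inference_steps=30).images[0]
image.save("vm.png")
```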
Bazaar/cv_forest_pest_detection
Bazaar
2023-09-01T08:50:43Z
198
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-01T08:41:58Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: cv_forest_pest_detection results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8042704463005066 --- # cv_forest_pest_detection Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### ActiasDubernardiOberthur ![ActiasDubernardiOberthur](images/ActiasDubernardiOberthur.jpg) #### ActiasSeleneNingpoanaFelder ![ActiasSeleneNingpoanaFelder](images/ActiasSeleneNingpoanaFelder.jpg) #### AgriusConvolvuli ![AgriusConvolvuli](images/AgriusConvolvuli.jpg) #### AmsactaLactinea ![AmsactaLactinea](images/AmsactaLactinea.jpg) #### AnoplophoraChinensisForster ![AnoplophoraChinensisForster](images/AnoplophoraChinensisForster.jpg) #### AnoplophoraGlabripennisMotschulsky ![AnoplophoraGlabripennisMotschulsky](images/AnoplophoraGlabripennisMotschulsky.jpg) #### AprionaGermari ![AprionaGermari](images/AprionaGermari.jpg) #### AprionaSwainsoni ![AprionaSwainsoni](images/AprionaSwainsoni.jpg) #### ArnpelophagaRubiginosaBremerEtGrey ![ArnpelophagaRubiginosaBremerEtGrey](images/ArnpelophagaRubiginosaBremerEtGrey.jpg) #### AromiaBungiiFald ![AromiaBungiiFald](images/AromiaBungiiFald.jpg) #### AtaturaIlia ![AtaturaIlia](images/AtaturaIlia.jpg) #### BatoceraHorsfieldiHope ![BatoceraHorsfieldiHope](images/BatoceraHorsfieldiHope.jpg) #### ByasaAlcinousKlug ![ByasaAlcinousKlug](images/ByasaAlcinousKlug.jpg) #### CalospilosSuspectaWarren ![CalospilosSuspectaWarren](images/CalospilosSuspectaWarren.jpg) #### CamptolomaInteriorata ![CamptolomaInteriorata](images/CamptolomaInteriorata.jpg) #### CarposinaNiponensisWalsingham ![CarposinaNiponensisWalsingham](images/CarposinaNiponensisWalsingham.jpg) #### CatharsiusMolossusLinnaeus ![CatharsiusMolossusLinnaeus](images/CatharsiusMolossusLinnaeus.jpg) #### CeruraMencianaMoore ![CeruraMencianaMoore](images/CeruraMencianaMoore.jpg) #### ChalcophoraJaponica ![ChalcophoraJaponica](images/ChalcophoraJaponica.jpg) #### CicadellaViridis ![CicadellaViridis](images/CicadellaViridis.jpg) #### ClanisBilineata ![ClanisBilineata](images/ClanisBilineata.jpg) #### CletusPunctigerDallas ![CletusPunctigerDallas](images/CletusPunctigerDallas.jpg) #### ClosteraAnachoreta ![ClosteraAnachoreta](images/ClosteraAnachoreta.jpg) #### ClosteraAnastomosis ![ClosteraAnastomosis](images/ClosteraAnastomosis.jpg) #### CnidocampaFlavescens ![CnidocampaFlavescens](images/CnidocampaFlavescens.jpg) #### ConogethesPunctiferalis ![ConogethesPunctiferalis](images/ConogethesPunctiferalis.jpg) #### CorythuchaCiliata ![CorythuchaCiliata](images/CorythuchaCiliata.jpg) #### CreatonotusTransiens ![CreatonotusTransiens](images/CreatonotusTransiens.jpg) #### CryptotympanaAtrataFabricius ![CryptotympanaAtrataFabricius](images/CryptotympanaAtrataFabricius.jpg) #### CyclidiaSubstigmariaSubstigmaria ![CyclidiaSubstigmariaSubstigmaria](images/CyclidiaSubstigmariaSubstigmaria.jpg) #### CyclopeltaObscura ![CyclopeltaObscura](images/CyclopeltaObscura.jpg) #### CystidiaCouaggariaGuenee ![CystidiaCouaggariaGuenee](images/CystidiaCouaggariaGuenee.jpg) #### DanausChrysippusLinnaeus ![DanausChrysippusLinnaeus](images/DanausChrysippusLinnaeus.jpg) #### DanausGenutia ![DanausGenutia](images/DanausGenutia.jpg) 
#### DasychiraGroteiMoore ![DasychiraGroteiMoore](images/DasychiraGroteiMoore.jpg) #### DendrolimusPunctatusWalker ![DendrolimusPunctatusWalker](images/DendrolimusPunctatusWalker.jpg) #### DiaphaniaPerspectalis ![DiaphaniaPerspectalis](images/DiaphaniaPerspectalis.jpg) #### DicranocephalusWallichi ![DicranocephalusWallichi](images/DicranocephalusWallichi.jpg) #### DictyopharaSinica ![DictyopharaSinica](images/DictyopharaSinica.jpg) #### DorcusTitanusPlatymelus ![DorcusTitanusPlatymelus](images/DorcusTitanusPlatymelus.jpg) #### DrosichaCorpulenta ![DrosichaCorpulenta](images/DrosichaCorpulenta.jpg) #### EligmaNarcissus ![EligmaNarcissus](images/EligmaNarcissus.jpg) #### EnmonodiaVespertiliFabricius ![EnmonodiaVespertiliFabricius](images/EnmonodiaVespertiliFabricius.jpg) #### ErthesinaFullo ![ErthesinaFullo](images/ErthesinaFullo.jpg) #### EuricaniaClara ![EuricaniaClara](images/EuricaniaClara.jpg) #### EurostusValidusDallas ![EurostusValidusDallas](images/EurostusValidusDallas.jpg) #### EurydemaDominulus ![EurydemaDominulus](images/EurydemaDominulus.jpg) #### GeishaDistinctissima ![GeishaDistinctissima](images/GeishaDistinctissima.jpg) #### GraphiumSarpedonLinnaeue ![GraphiumSarpedonLinnaeue](images/GraphiumSarpedonLinnaeue.jpg) #### GraphosomaRubrolineata ![GraphosomaRubrolineata](images/GraphosomaRubrolineata.jpg) #### HalyomorphaPicusFabricius ![HalyomorphaPicusFabricius](images/HalyomorphaPicusFabricius.jpg) #### HestinaAssimilis ![HestinaAssimilis](images/HestinaAssimilis.jpg) #### HistiaRhodopeCramer ![HistiaRhodopeCramer](images/HistiaRhodopeCramer.jpg) #### HyphantriaCunea ![HyphantriaCunea](images/HyphantriaCunea.jpg) #### JacobiascaFormosana ![JacobiascaFormosana](images/JacobiascaFormosana.jpg) #### LatoriaConsociaWalker ![LatoriaConsociaWalker](images/LatoriaConsociaWalker.jpg) #### LethocerusDeyrolliVuillefroy ![LethocerusDeyrolliVuillefroy](images/LethocerusDeyrolliVuillefroy.jpg) #### LocastraMuscosalisWalker ![LocastraMuscosalisWalker](images/LocastraMuscosalisWalker.jpg) #### LycormaDelicatula ![LycormaDelicatula](images/LycormaDelicatula.jpg) #### MegopisSinicaSinicaWhite ![MegopisSinicaSinicaWhite](images/MegopisSinicaSinicaWhite.jpg) #### MeimunaMongolica ![MeimunaMongolica](images/MeimunaMongolica.jpg) #### MicromelalophaTroglodyta ![MicromelalophaTroglodyta](images/MicromelalophaTroglodyta.jpg) #### MiltochristaStriata ![MiltochristaStriata](images/MiltochristaStriata.jpg) #### MonochamusAlternatusHope ![MonochamusAlternatusHope](images/MonochamusAlternatusHope.jpg) #### Ophthalmitisirrorataria ![Ophthalmitisirrorataria](images/Ophthalmitisirrorataria.jpg) #### OrthagaAchatina ![OrthagaAchatina](images/OrthagaAchatina.jpg) #### PapilioBianorCramer ![PapilioBianorCramer](images/PapilioBianorCramer.jpg) #### PapilioMachaonLinnaeus ![PapilioMachaonLinnaeus](images/PapilioMachaonLinnaeus.jpg) #### PapilioPolytesLinnaeus ![PapilioPolytesLinnaeus](images/PapilioPolytesLinnaeus.jpg) #### PapilioProtenorCramer ![PapilioProtenorCramer](images/PapilioProtenorCramer.jpg) #### PapilioXuthusLinnaeus ![PapilioXuthusLinnaeus](images/PapilioXuthusLinnaeus.jpg) #### ParocneriaFurva ![ParocneriaFurva](images/ParocneriaFurva.jpg) #### PergesaElpenorlewisi ![PergesaElpenorlewisi](images/PergesaElpenorlewisi.jpg) #### PidorusAtratusButter ![PidorusAtratusButter](images/PidorusAtratusButter.jpg) #### PierisRapae ![PierisRapae](images/PierisRapae.jpg) #### PlagioderaVersicolora ![PlagioderaVersicolora](images/PlagioderaVersicolora.jpg) #### PlatypleuraKaempferi 
![PlatypleuraKaempferi](images/PlatypleuraKaempferi.jpg) #### PlinachtusBicoloripesScott ![PlinachtusBicoloripesScott](images/PlinachtusBicoloripesScott.jpg) #### PlinachtusDissimilis ![PlinachtusDissimilis](images/PlinachtusDissimilis.jpg) #### PolygoniaCaureum ![PolygoniaCaureum](images/PolygoniaCaureum.jpg) #### PolyuraNarcaeaHewitson ![PolyuraNarcaeaHewitson](images/PolyuraNarcaeaHewitson.jpg) #### PorthesiaSimilis ![PorthesiaSimilis](images/PorthesiaSimilis.jpg) #### ProdeniaLitura ![ProdeniaLitura](images/ProdeniaLitura.jpg) #### ProtaetiaBrevitarsisLewis ![ProtaetiaBrevitarsisLewis](images/ProtaetiaBrevitarsisLewis.jpg) #### PsilogrammaMenephron ![PsilogrammaMenephron](images/PsilogrammaMenephron.jpg) #### RicaniaSublimata ![RicaniaSublimata](images/RicaniaSublimata.jpg) #### RiptortusPedestris ![RiptortusPedestris](images/RiptortusPedestris.jpg) #### SemanotusBifasciatusBifasciatus ![SemanotusBifasciatusBifasciatus](images/SemanotusBifasciatusBifasciatus.jpg) #### SericinusMontelusGrey ![SericinusMontelusGrey](images/SericinusMontelusGrey.jpg) #### SinnaExtrema ![SinnaExtrema](images/SinnaExtrema.jpg) #### SmerinthusPlanusWalker ![SmerinthusPlanusWalker](images/SmerinthusPlanusWalker.jpg) #### SpeiredoniaRetorta ![SpeiredoniaRetorta](images/SpeiredoniaRetorta.jpg) #### SpilarctiaRobusta ![SpilarctiaRobusta](images/SpilarctiaRobusta.jpg) #### SpilarctiaSubcarnea ![SpilarctiaSubcarnea](images/SpilarctiaSubcarnea.jpg) #### StilprotiaSalicis ![StilprotiaSalicis](images/StilprotiaSalicis.jpg) #### TheretraJaponica ![TheretraJaponica](images/TheretraJaponica.jpg) #### ThoseaSinensisWalker ![ThoseaSinensisWalker](images/ThoseaSinensisWalker.jpg) #### UropyiaMeticulodina ![UropyiaMeticulodina](images/UropyiaMeticulodina.jpg) #### VanessaIndicaHerbst ![VanessaIndicaHerbst](images/VanessaIndicaHerbst.jpg)
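A short inference sketch (not part of the autogenerated card): the classifier loads with the standard image-classification pipeline; the image path below is a placeholder for your own photo.

```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned ViT classifier from this repository
classifier = pipeline("image-classification", model="Bazaar/cv_forest_pest_detection")

# Placeholder path -- point it at a photo of a forest pest
image = Image.open("my_pest_photo.jpg")

for pred in classifier(image):
    print(f"{pred['label']}: {pred['score']:.3f}")
```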
Vertti/TuumaPEFTDialogue06
Vertti
2023-09-01T08:48:19Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-01T08:47:57Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
Intel/whisper-large-v2-int8-static-inc
Intel
2023-09-01T08:46:27Z
4
0
transformers
[ "transformers", "onnx", "whisper", "automatic-speech-recognition", "int8", "ONNX", "PostTrainingStatic", "Intel® Neural Compressor", "neural-compressor", "dataset:librispeech_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-01T03:12:37Z
--- license: apache-2.0 datasets: - librispeech_asr metrics: - wer pipeline_tag: automatic-speech-recognition tags: - automatic-speech-recognition - int8 - ONNX - PostTrainingStatic - Intel® Neural Compressor - neural-compressor library_name: transformers --- ## Model Details: INT8 Whisper large v2 Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning. This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor), and the fp32 model can be exported with the command below: ```shell optimum-cli export onnx --model openai/whisper-large-v2 whisper-large-v2-with-past/ --task automatic-speech-recognition-with-past --opset 13 ``` | Model Detail | Description | | ----------- | ----------- | | Model Authors - Company | Intel | | Date | September 1, 2023 | | Version | 1 | | Type | Speech Recognition | | Paper or Other Resources | - | | License | Apache 2.0 | | Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-large-v2-int8-static/discussions)| | Intended Use | Description | | ----------- | ----------- | | Primary intended uses | You can use the raw model for automatic speech recognition inference | | Primary intended users | Anyone doing automatic speech recognition inference | | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.| ### How to use Download the model by cloning the repository: ```shell git clone https://huggingface.co/Intel/whisper-large-v2-int8-static ``` Evaluate the model with the code below: ```python import os from evaluate import load from datasets import load_dataset from transformers import WhisperProcessor, PretrainedConfig from optimum.onnxruntime import ORTModelForSpeechSeq2Seq model_name = 'openai/whisper-large-v2' model_path = 'whisper-large-v2-int8-static' processor = WhisperProcessor.from_pretrained(model_name) model_config = PretrainedConfig.from_pretrained(model_name) wer = load("wer") librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") predictions = [] references = [] sessions = ORTModelForSpeechSeq2Seq.load_model( os.path.join(model_path, 'encoder_model.onnx'), os.path.join(model_path, 'decoder_model.onnx'), os.path.join(model_path, 'decoder_with_past_model.onnx')) model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2]) for batch in librispeech_test_clean: audio = batch["audio"] input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features reference = processor.tokenizer._normalize(batch['text']) references.append(reference) predicted_ids = model.generate(input_features)[0] transcription = processor.decode(predicted_ids) prediction = processor.tokenizer._normalize(transcription) predictions.append(prediction) wer_result = wer.compute(references=references, predictions=predictions) print(f"Result wer: {wer_result * 100}") accuracy = 1 - wer_result print("Accuracy: %.5f" % accuracy) ``` ## Metrics (Model Performance): | Model | Model Size (GB) | wer | |---|:---:|:---:| | FP32 |13|2.87| | INT8 |2.8|2.62|
Toflamus/GPT-2_para3M_2epoch_256
Toflamus
2023-09-01T08:42:02Z
154
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-01T00:27:15Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: GPT-2_para3M_512 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT-2_para3M_2epoch_256 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.1873 | 0.01 | 500 | 4.0187 | | 3.5461 | 0.02 | 1000 | 3.4287 | | 3.2706 | 0.04 | 1500 | 3.1495 | | 3.105 | 0.05 | 2000 | 2.9773 | | 2.9885 | 0.06 | 2500 | 2.8566 | | 2.8931 | 0.07 | 3000 | 2.7720 | | 2.8307 | 0.08 | 3500 | 2.7016 | | 2.7912 | 0.09 | 4000 | 2.6474 | | 2.7295 | 0.11 | 4500 | 2.5972 | | 2.6927 | 0.12 | 5000 | 2.5641 | | 2.6756 | 0.13 | 5500 | 2.5248 | | 2.6536 | 0.14 | 6000 | 2.4972 | | 2.6186 | 0.15 | 6500 | 2.4730 | | 2.5947 | 0.17 | 7000 | 2.4492 | | 2.591 | 0.18 | 7500 | 2.4313 | | 2.5706 | 0.19 | 8000 | 2.4172 | | 2.5441 | 0.2 | 8500 | 2.3991 | | 2.5266 | 0.21 | 9000 | 2.3838 | | 2.5259 | 0.22 | 9500 | 2.3740 | | 2.5173 | 0.24 | 10000 | 2.3629 | | 2.5122 | 0.25 | 10500 | 2.3549 | | 2.5004 | 0.26 | 11000 | 2.3409 | | 2.4902 | 0.27 | 11500 | 2.3364 | | 2.4735 | 0.28 | 12000 | 2.3242 | | 2.4784 | 0.29 | 12500 | 2.3193 | | 2.4754 | 0.31 | 13000 | 2.3126 | | 2.4587 | 0.32 | 13500 | 2.3077 | | 2.4613 | 0.33 | 14000 | 2.3050 | | 2.4562 | 0.34 | 14500 | 2.2968 | | 2.4422 | 0.35 | 15000 | 2.2913 | | 2.4307 | 0.37 | 15500 | 2.2870 | | 2.4339 | 0.38 | 16000 | 2.2814 | | 2.445 | 0.39 | 16500 | 2.2801 | | 2.4257 | 0.4 | 17000 | 2.2747 | | 2.425 | 0.41 | 17500 | 2.2709 | | 2.4095 | 0.42 | 18000 | 2.2672 | | 2.4137 | 0.44 | 18500 | 2.2632 | | 2.4284 | 0.45 | 19000 | 2.2601 | | 2.419 | 0.46 | 19500 | 2.2569 | | 2.4221 | 0.47 | 20000 | 2.2504 | | 2.3951 | 0.48 | 20500 | 2.2507 | | 2.4054 | 0.5 | 21000 | 2.2515 | | 2.3977 | 0.51 | 21500 | 2.2442 | | 2.4009 | 0.52 | 22000 | 2.2422 | | 2.3941 | 0.53 | 22500 | 2.2388 | | 2.3909 | 0.54 | 23000 | 2.2349 | | 2.4016 | 0.55 | 23500 | 2.2380 | | 2.389 | 0.57 | 24000 | 2.2326 | | 2.3864 | 0.58 | 24500 | 2.2287 | | 2.3795 | 0.59 | 25000 | 2.2285 | | 2.3817 | 0.6 | 25500 | 2.2266 | | 2.3789 | 0.61 | 26000 | 2.2256 | | 2.3801 | 0.62 | 26500 | 2.2210 | | 2.3687 | 0.64 | 27000 | 2.2189 | | 2.378 | 0.65 | 27500 | 2.2194 | | 2.3735 | 0.66 | 28000 | 2.2157 | | 2.3758 | 0.67 | 28500 | 2.2142 | | 2.3616 | 0.68 | 29000 | 2.2133 | | 2.3731 | 0.7 | 29500 | 2.2085 | | 2.3606 | 0.71 | 30000 | 2.2115 | | 2.3516 | 0.72 | 30500 | 2.2072 | | 2.3551 | 0.73 | 31000 | 2.2067 | | 2.3626 | 0.74 | 31500 | 2.2033 | | 2.3516 | 0.75 | 32000 | 2.2031 | | 2.3658 | 0.77 | 32500 | 2.2008 | | 2.3554 | 0.78 | 33000 | 2.1992 | | 2.3524 | 0.79 | 33500 | 2.1988 | | 2.3509 | 0.8 | 34000 | 2.1996 | | 2.3474 | 0.81 | 34500 | 2.1949 | | 
2.3431 | 0.83 | 35000 | 2.1943 | | 2.3413 | 0.84 | 35500 | 2.1907 | | 2.3592 | 0.85 | 36000 | 2.1917 | | 2.3636 | 0.86 | 36500 | 2.1919 | | 2.3529 | 0.87 | 37000 | 2.1881 | | 2.3371 | 0.88 | 37500 | 2.1875 | | 2.3413 | 0.9 | 38000 | 2.1856 | | 2.3463 | 0.91 | 38500 | 2.1839 | | 2.3303 | 0.92 | 39000 | 2.1859 | | 2.3432 | 0.93 | 39500 | 2.1790 | | 2.3455 | 0.94 | 40000 | 2.1801 | | 2.344 | 0.95 | 40500 | 2.1761 | | 2.3442 | 0.97 | 41000 | 2.1759 | | 2.3331 | 0.98 | 41500 | 2.1760 | | 2.3391 | 0.99 | 42000 | 2.1748 | | 2.3275 | 1.0 | 42500 | 2.1760 | | 2.3308 | 1.01 | 43000 | 2.1712 | | 2.3191 | 1.03 | 43500 | 2.1727 | | 2.3182 | 1.04 | 44000 | 2.1682 | | 2.3184 | 1.05 | 44500 | 2.1683 | | 2.3177 | 1.06 | 45000 | 2.1668 | | 2.3163 | 1.07 | 45500 | 2.1643 | | 2.321 | 1.08 | 46000 | 2.1631 | | 2.3164 | 1.1 | 46500 | 2.1655 | | 2.3231 | 1.11 | 47000 | 2.1631 | | 2.3139 | 1.12 | 47500 | 2.1591 | | 2.3223 | 1.13 | 48000 | 2.1588 | | 2.3133 | 1.14 | 48500 | 2.1588 | | 2.2995 | 1.16 | 49000 | 2.1569 | | 2.308 | 1.17 | 49500 | 2.1578 | | 2.3062 | 1.18 | 50000 | 2.1539 | | 2.3203 | 1.19 | 50500 | 2.1538 | | 2.3116 | 1.2 | 51000 | 2.1526 | | 2.294 | 1.21 | 51500 | 2.1520 | | 2.2941 | 1.23 | 52000 | 2.1499 | | 2.3053 | 1.24 | 52500 | 2.1502 | | 2.3154 | 1.25 | 53000 | 2.1507 | | 2.3057 | 1.26 | 53500 | 2.1485 | | 2.3106 | 1.27 | 54000 | 2.1464 | | 2.3035 | 1.28 | 54500 | 2.1457 | | 2.304 | 1.3 | 55000 | 2.1445 | | 2.2985 | 1.31 | 55500 | 2.1439 | | 2.296 | 1.32 | 56000 | 2.1421 | | 2.2917 | 1.33 | 56500 | 2.1411 | | 2.2936 | 1.34 | 57000 | 2.1406 | | 2.2866 | 1.36 | 57500 | 2.1383 | | 2.2973 | 1.37 | 58000 | 2.1396 | | 2.2865 | 1.38 | 58500 | 2.1378 | | 2.2929 | 1.39 | 59000 | 2.1370 | | 2.2858 | 1.4 | 59500 | 2.1351 | | 2.2857 | 1.41 | 60000 | 2.1350 | | 2.3019 | 1.43 | 60500 | 2.1338 | | 2.289 | 1.44 | 61000 | 2.1330 | | 2.2874 | 1.45 | 61500 | 2.1318 | | 2.2858 | 1.46 | 62000 | 2.1305 | | 2.2875 | 1.47 | 62500 | 2.1298 | | 2.2859 | 1.49 | 63000 | 2.1294 | | 2.28 | 1.5 | 63500 | 2.1275 | | 2.2866 | 1.51 | 64000 | 2.1277 | | 2.2851 | 1.52 | 64500 | 2.1281 | | 2.2806 | 1.53 | 65000 | 2.1258 | | 2.2889 | 1.54 | 65500 | 2.1245 | | 2.2745 | 1.56 | 66000 | 2.1249 | | 2.2739 | 1.57 | 66500 | 2.1230 | | 2.2853 | 1.58 | 67000 | 2.1226 | | 2.2773 | 1.59 | 67500 | 2.1228 | | 2.2742 | 1.6 | 68000 | 2.1214 | | 2.2656 | 1.61 | 68500 | 2.1200 | | 2.2756 | 1.63 | 69000 | 2.1194 | | 2.2806 | 1.64 | 69500 | 2.1193 | | 2.271 | 1.65 | 70000 | 2.1186 | | 2.2671 | 1.66 | 70500 | 2.1185 | | 2.2718 | 1.67 | 71000 | 2.1168 | | 2.2781 | 1.69 | 71500 | 2.1172 | | 2.2744 | 1.7 | 72000 | 2.1164 | | 2.2744 | 1.71 | 72500 | 2.1156 | | 2.2603 | 1.72 | 73000 | 2.1154 | | 2.2703 | 1.73 | 73500 | 2.1141 | | 2.267 | 1.74 | 74000 | 2.1141 | | 2.2614 | 1.76 | 74500 | 2.1141 | | 2.263 | 1.77 | 75000 | 2.1133 | | 2.2668 | 1.78 | 75500 | 2.1128 | | 2.2642 | 1.79 | 76000 | 2.1128 | | 2.2637 | 1.8 | 76500 | 2.1128 | | 2.2692 | 1.82 | 77000 | 2.1118 | | 2.2631 | 1.83 | 77500 | 2.1117 | | 2.2567 | 1.84 | 78000 | 2.1116 | | 2.2707 | 1.85 | 78500 | 2.1112 | | 2.2707 | 1.86 | 79000 | 2.1109 | | 2.2664 | 1.87 | 79500 | 2.1114 | | 2.266 | 1.89 | 80000 | 2.1113 | | 2.2645 | 1.9 | 80500 | 2.1108 | | 2.2767 | 1.91 | 81000 | 2.1106 | | 2.274 | 1.92 | 81500 | 2.1102 | | 2.2587 | 1.93 | 82000 | 2.1102 | | 2.2736 | 1.94 | 82500 | 2.1100 | | 2.2633 | 1.96 | 83000 | 2.1102 | | 2.2652 | 1.97 | 83500 | 2.1100 | | 2.2655 | 1.98 | 84000 | 2.1101 | | 2.2683 | 1.99 | 84500 | 2.1100 | ### Framework versions - Transformers 4.32.0 - Pytorch 2.0.1+cu117 - Datasets 
2.14.4 - Tokenizers 0.13.2
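Since the card lists only training details, a brief generation sketch may help; the sampling settings are illustrative and not tuned for this model.

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from this repository
generator = pipeline("text-generation", model="Toflamus/GPT-2_para3M_2epoch_256")

outputs = generator(
    "Once upon a time",
    max_new_tokens=64,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(outputs[0]["generated_text"])
```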
botp/nRuaif_fiction.live-Kimiko-V2-70B
botp
2023-09-01T08:28:22Z
0
1
null
[ "text-generation", "en", "license:creativeml-openrail-m", "region:us" ]
text-generation
2023-09-01T08:28:22Z
--- license: creativeml-openrail-m language: - en pipeline_tag: text-generation duplicated_from: nRuaif/fiction.live-Kimiko-V2-70B --- ## Sponsor Thanks to fiction.live for sponsoring this finetune and making it a reality. ## Model Details [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ### Model Description - **Developed by:** nRuaif - **Model type:** large language model - **License:** creativeml-openrail-m - **Finetuned from model:** Llama-70B ## Uses The model uses the Fastchat/ShareGPT format. ### Direct Use This model is finetuned for normal and erotic roleplay, while still being able to act as an assistant (though perhaps not a very helpful one). ### Out-of-Scope Use Do anything you want. I don't care. ## Bias, Risks, and Limitations The model may be biased toward NSFW content due to the large share of NSFW data in the training set. ## Training Details ### Training Data 3000 conversations with a 4090-token cutoff length. ### Training Procedure #### Training Hyperparameters - **Training regime:** BF16, QLoRA, constant LR 5e-5 ### Compute Infrastructure The model was trained on 1 A100 for 10 hours on RunPod.
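The card only names the Fastchat/ShareGPT format, so the template below is a hedged sketch of a typical FastChat/Vicuna-style prompt; the exact system line and turn separators are assumptions, not something this card specifies.

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {your message} ASSISTANT:
```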
WizardLMTeam/WizardMath-7B-V1.0
WizardLMTeam
2023-09-01T08:18:09Z
3,710
52
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-11T04:32:31Z
--- license: llama2 --- ## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF) <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> | Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License | | ----- |------| ---- |------|-------| ----- | ----- | | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| | ----- |------| ---- |------|-------| ----- | ----- | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a 
href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| <font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>| </font> **Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath **Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128 **Discord**: https://discord.gg/VZjjHtWrKs ## Comparing WizardMath-V1.0 with Other LLMs. 🔥 The following figure shows that our **WizardMath-70B-V1.0 attains the fifth position in this benchmark**, surpassing ChatGPT (81.6 vs. 80.8) , Claude Instant (81.6 vs. 80.9), PaLM 2 540B (81.6 vs. 80.7). <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardMath/images/wizardmath_gsm8k.png" alt="WizardMath" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ❗<b>Note for model system prompts usage:</b> Please use **the same systems prompts strictly** with us, and we do not guarantee the accuracy of the **quantified versions**. **Default version:** ``` "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" ``` **CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.) ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step." ``` ## Inference WizardMath Demo Script We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo). ❗<b>To address a common concern about the dataset:</b> Recently, there have been clear changes in the open-source policy and regulations of our overall organization's code, data, and models. Despite this, we have still worked hard to open the model weights first, but the data involves stricter auditing and is still under review by our legal team. Our researchers have no authority to release the data publicly without authorization. Thank you for your understanding. ## Citation Please cite the repo if you use the data, method or code in this repo. ``` @article{luo2023wizardmath, title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct}, author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei}, journal={arXiv preprint arXiv:2308.09583}, year={2023} } ```
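The repo links its own demo script above; as a lighter-weight alternative, the sketch below wraps a question in the default prompt format shown earlier. The generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLMTeam/WizardMath-7B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Default prompt format from this card
instruction = "If a train travels 60 miles per hour for 2.5 hours, how far does it go?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```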
mdance/bert-finetuned-ner
mdance
2023-09-01T08:11:38Z
107
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-01T04:05:31Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9348652669862787 - name: Recall type: recall value: 0.9516997643890945 - name: F1 type: f1 value: 0.9432074055541656 - name: Accuracy type: accuracy value: 0.986504385706717 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0615 - Precision: 0.9349 - Recall: 0.9517 - F1: 0.9432 - Accuracy: 0.9865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0764 | 1.0 | 1756 | 0.0867 | 0.9092 | 0.9303 | 0.9196 | 0.9794 | | 0.032 | 2.0 | 3512 | 0.0603 | 0.9266 | 0.9453 | 0.9359 | 0.9856 | | 0.0181 | 3.0 | 5268 | 0.0615 | 0.9349 | 0.9517 | 0.9432 | 0.9865 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cpu - Datasets 2.14.4 - Tokenizers 0.13.3
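Since the card stops at training details, a short inference sketch: the token-classification pipeline with simple aggregation groups word pieces into entities; the example sentence is arbitrary.

```python
from transformers import pipeline

# Load the fine-tuned CoNLL-2003 NER model from this repository
ner = pipeline("token-classification", model="mdance/bert-finetuned-ner", aggregation_strategy="simple")

for entity in ner("My name is Wolfgang and I live in Berlin."):
    print(f"{entity['entity_group']:>5}  {entity['word']:<12} score={entity['score']:.3f}")
```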
kyungmin011029/category_last
kyungmin011029
2023-09-01T08:10:57Z
62
1
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:klue/bert-base", "base_model:finetune:klue/bert-base", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T08:09:52Z
--- license: cc-by-sa-4.0 base_model: klue/bert-base tags: - generated_from_keras_callback model-index: - name: category_last results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # category_last This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.32.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
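No dataset or label mapping is documented, so the sketch below is only a generic loading example for this TensorFlow checkpoint; the Korean test sentence is arbitrary and the meaning of the predicted class id is not specified by the card.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "kyungmin011029/category_last"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Arbitrary Korean example (the base model is klue/bert-base); label meanings are not documented
inputs = tokenizer("이 제품 배송이 너무 늦어요.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred, model.config.id2label.get(pred, pred))
```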
kyungmin011029/code_last
kyungmin011029
2023-09-01T08:10:34Z
63
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:klue/bert-base", "base_model:finetune:klue/bert-base", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T08:09:52Z
--- license: cc-by-sa-4.0 base_model: klue/bert-base tags: - generated_from_keras_callback model-index: - name: code_last results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # code_last This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.32.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
s3nh/Sentdex-WSB-GPT-13B-GGUF
s3nh
2023-09-01T08:09:40Z
42
0
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2023-09-01T07:51:55Z
--- license: openrail pipeline_tag: text-generation library_name: transformers language: - zh - en --- ## Original model card Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> #### Description GGUF Format model files for [This project](https://huggingface.co/Sentdex/WSB-GPT-13B). ### GGUF Specs GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired: Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information. Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models. mmap compatibility: models can be loaded using mmap for fast loading and saving. Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used. Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user. The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model. ### Perplexity params | Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 | |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 | | 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 | ### Inference ```python from ctransformers import AutoModelForCausalLM llm = AutoModelForCausalLM.from_pretrained(output_dir, gguf_file, gpu_layers=32, model_type="llama") manual_input: str = "Tell me about your last dream, please." output = llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7) print(output) ``` # Original model card
swaroopajit/git-base-pokemon
swaroopajit
2023-09-01T08:03:10Z
63
0
transformers
[ "transformers", "pytorch", "git", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/git-base", "base_model:finetune:microsoft/git-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-08-22T11:42:14Z
--- license: mit base_model: microsoft/git-base tags: - generated_from_trainer model-index: - name: git-base-pokemon results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
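The card gives no usage snippet, so here is a short captioning sketch following the standard GIT usage in transformers; the image URL is a placeholder and the fine-tune's actual captioning domain is not documented above.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "swaroopajit/git-base-pokemon"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder image URL -- replace with your own image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```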
nadcy/bloomz-1b7_MONA_LORA
nadcy
2023-09-01T08:02:26Z
6
1
peft
[ "peft", "arxiv:2203.00148", "region:us" ]
null
2023-08-01T07:47:41Z
--- library_name: peft --- # BLOOMZ character LoRA (Mona) BLOOMZ is a family of multilingual models trained by the open-source community that can follow human instructions in dozens of languages. After fine-tuning, these models generalize across languages to tasks they have never seen. In this project, we fine-tune the model so that, given three pieces of environmental information (weather, special date, and time of day) plus a schedule hint provided as a string, it generates in-character dialogue. For example, given the input: human:天气:晴天,日期:双休日,时间:凌晨,提示我去做:买咖啡assistant: (weather: sunny, date: weekend, time: early morning, remind me to: buy coffee), the model outputs the character's schedule reminder: 双休日的凌晨,阳光照耀着大地,就像星辰在闪烁。你打算去买咖啡,记得带上你的咖啡杯。 (roughly: "In the early morning of the weekend, sunlight covers the earth like glittering stars. You plan to buy coffee -- remember to bring your coffee cup.") ## Prompt format Specifically: "Weather" covers conditions such as sunny, cloudy, rainy, snowy, foggy, or hazy. "Date" covers holidays and special days such as Spring Festival, New Year's Day, spring break, or summer vacation. "Time" describes the period of the day, such as morning, noon, or evening. "Schedule hint" is the item to be reminded of, e.g. "need to catch a flight at 9 o'clock" or "important meeting at 3 pm". During instruction fine-tuning the data follows a strict format: human:天气:[...],日期:[...],时间:[...],提示我去做:[...]assistant:[...] We therefore recommend using the __same prompt format__ at inference time. That said, we also found that instruction fine-tuning on this single task improves other conversational abilities -- see the __Generalization__ section. ## Training data The training procedure is inspired by the paper [LIMA: Less Is More for Alignment](https://arxiv.org/abs/2203.00148). Like LIMA, BLOOMZ-LORA focuses on the instruction fine-tuning stage of large language model training and tries to reach the target behaviour with as few high-quality instructions as possible. We used GPT-4 to produce high-quality instruction-tuning data: we showed it several reference samples, randomly generated a series of inputs from a prompt template, and had GPT-4 generate 1000 instructions. A data sample: {"text":"human:今天天气:雷暴,日期:中秋节,时间:深夜,提示我去做:'去科技展览会'assistant:在深夜雷鸣电闪的中秋节里,你计划去科技展览会。请牢记,正是因为无法更改,无可违逆,只能接受,命运才会被称之为命运。end"} The character we chose to learn is Mona from the game Genshin Impact; the seed corpus for the character comes from the official wiki. ## Quantization, performance and hardware requirements As a LoRA model it can be deployed on edge devices with only 4GB of memory, providing low-latency, personalized service directly on the user's device without a high-bandwidth internet connection or a powerful server. In testing, BLOOMZ-LORA shows strong performance: the generated dialogue closely matches the style and tone of the Genshin Impact character Mona. It learned to follow the response format from a small number of examples in the training data and generalizes well to unseen tasks. We use the bitsandbytes int8 quantization option for both training and inference (total token length < 200, so the quadratic cost of the transformer on long sequences is not a concern here). The model runs in under 4GB of memory, and inference has been tested on T5, Jetson Nano, and A100. ## Generalization We found that even training on the very narrow task above substantially improves performance on other tasks, for example: Base model: human:今天天气很好,我应该去做什么assistant: I should go to work Fine-tuned model: human:今天天气很好,我应该去做什么assistant:今天是个好天气,去外面走走吧,去外面走走,去享受阳光吧。 Fine-tuning helps fix the base model's tendency to answer in the wrong language and produces richer content. ## Conclusions 1. BLOOMZ-LORA shows that almost all of a large language model's knowledge is learned during pre-training, and only a small amount of instruction-tuning data is needed to teach the model to produce high-quality output. It is a solid foundation for building AI personal assistants based on popular fictional characters. 2. Distilling a large model for fine-tuning is extremely cheap: the GPT-4 annotation cost was under 10 USD, and the LoRA can be trained on a single A100 in about 10 minutes. With more careful data design, the model could acquire richer abilities on top of the strong prior of the pre-trained model. 3. We will later add results on how performance scales with model size, training details, and experiments that redesign the distribution of the 1000 instructions to study how task design affects model performance. ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0 ## Online demo (Google Colab, tested on T5 with standard RAM, runs in about 5 minutes; requires internet access) https://colab.research.google.com/drive/12zKnvIAEqGCt2Qi_IS99GfTBbrzdMX8L?usp=sharing ![Alt text](./result.PNG)
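A hedged inference sketch follows. It assumes the adapter was trained on top of bigscience/bloomz-1b7 (inferred from the repository name) and reuses the 8-bit loading reported in the training procedure; the generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloomz-1b7"            # assumption: base model inferred from the repo name
adapter_id = "nadcy/bloomz-1b7_MONA_LORA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt follows the strict format described in the card
prompt = "human:天气:晴天,日期:双休日,时间:凌晨,提示我去做:买咖啡assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```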
Korkkork/youngjikara
Korkkork
2023-09-01T07:56:35Z
0
0
null
[ "Kpop", "kara", "license:openrail", "region:us" ]
null
2023-09-01T07:55:20Z
--- license: openrail tags: - Kpop - kara ---
WizardLMTeam/WizardLM-13B-V1.0
WizardLMTeam
2023-09-01T07:56:25Z
298
73
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-13T15:17:01Z
This is WizardLM-13B V1.0 diff weight. Project Repo: https://github.com/nlpxucan/WizardLM NOTE: The **WizardLM-13B-1.0** and **Wizard-7B** use different prompt at the beginning of the conversation: For **WizardLM-13B-1.0** , the Prompt should be as following: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT: ``` For **WizardLM-7B** , the Prompt should be as following: ``` {instruction}\n\n### Response: ``` <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> | Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License | | ----- |------| ---- |------|-------| ----- | ----- | | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| | ----- |------| ---- |------|-------| ----- | ----- | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" 
target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| <font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>| </font> **Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath **Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128 **Discord**: https://discord.gg/VZjjHtWrKs ## Inference WizardLM Demo Script We provide the inference WizardLM demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
dg845/consistency-models-test
dg845
2023-09-01T07:49:26Z
0
0
diffusers
[ "diffusers", "safetensors", "license:mit", "region:us" ]
null
2023-05-30T01:39:25Z
--- license: mit --- These `UNet2DModel` checkpoints are small randomly-initialized U-Nets which accept 32x32 images for use in testing consistency models. "test_unet_class_cond" is class-conditional (e.g. contains a class label embedding), while "test_unet" is not. Please refer to the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information about consistency models.
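A small loading sketch may help when wiring these checkpoints into tests. The subfolder names follow the two checkpoint names mentioned above, which is an assumption about the repository layout.

```python
import torch
from diffusers import UNet2DModel

# Assumption: each checkpoint lives in a subfolder named after it
unet = UNet2DModel.from_pretrained("dg845/consistency-models-test", subfolder="test_unet")
cond_unet = UNet2DModel.from_pretrained("dg845/consistency-models-test", subfolder="test_unet_class_cond")

# Dummy 32x32 input for a quick smoke test
sample = torch.randn(1, unet.config.in_channels, 32, 32)
timestep = torch.tensor([10])
out = unet(sample, timestep).sample
cond_out = cond_unet(sample, timestep, class_labels=torch.tensor([0])).sample
print(out.shape, cond_out.shape)
```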
Filippo/e5-small-v2-onnx
Filippo
2023-09-01T07:48:04Z
4
0
transformers
[ "transformers", "onnx", "bert", "feature-extraction", "sentence-similarity", "en", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-09-01T06:59:39Z
--- license: mit language: - en pipeline_tag: sentence-similarity --- Work in progress! Learning how to use Optimum with the https://huggingface.co/intfloat/e5-small-v2 model.
tomjam/my_awesome_peft_model
tomjam
2023-09-01T07:42:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-01T07:41:46Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
dkqjrm/20230901120149
dkqjrm
2023-09-01T07:40:31Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T03:02:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230901120149' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230901120149 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.1576 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 80.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 340 | 0.1594 | 0.5 | | 0.1863 | 2.0 | 680 | 0.1639 | 0.5 | | 0.1705 | 3.0 | 1020 | 0.1604 | 0.5 | | 0.1705 | 4.0 | 1360 | 0.1572 | 0.5 | | 0.1659 | 5.0 | 1700 | 0.1604 | 0.5 | | 0.1635 | 6.0 | 2040 | 0.1674 | 0.5 | | 0.1635 | 7.0 | 2380 | 0.1568 | 0.5 | | 0.1633 | 8.0 | 2720 | 0.1633 | 0.5 | | 0.1599 | 9.0 | 3060 | 0.1611 | 0.5 | | 0.1599 | 10.0 | 3400 | 0.1636 | 0.5 | | 0.1615 | 11.0 | 3740 | 0.1574 | 0.5 | | 0.1606 | 12.0 | 4080 | 0.1632 | 0.5 | | 0.1606 | 13.0 | 4420 | 0.1579 | 0.5 | | 0.1594 | 14.0 | 4760 | 0.1623 | 0.5 | | 0.1698 | 15.0 | 5100 | 0.1623 | 0.5 | | 0.1698 | 16.0 | 5440 | 0.1614 | 0.5 | | 0.168 | 17.0 | 5780 | 0.1579 | 0.5 | | 0.1626 | 18.0 | 6120 | 0.1586 | 0.5 | | 0.1626 | 19.0 | 6460 | 0.1565 | 0.5 | | 0.1604 | 20.0 | 6800 | 0.1574 | 0.5 | | 0.1595 | 21.0 | 7140 | 0.1601 | 0.5 | | 0.1595 | 22.0 | 7480 | 0.1675 | 0.5 | | 0.1615 | 23.0 | 7820 | 0.1602 | 0.5 | | 0.1669 | 24.0 | 8160 | 0.1604 | 0.5 | | 0.1677 | 25.0 | 8500 | 0.1635 | 0.5 | | 0.1677 | 26.0 | 8840 | 0.1603 | 0.5 | | 0.1666 | 27.0 | 9180 | 0.1614 | 0.5 | | 0.1656 | 28.0 | 9520 | 0.1609 | 0.5 | | 0.1656 | 29.0 | 9860 | 0.1625 | 0.5 | | 0.1668 | 30.0 | 10200 | 0.1624 | 0.5 | | 0.1658 | 31.0 | 10540 | 0.1702 | 0.5 | | 0.1658 | 32.0 | 10880 | 0.1606 | 0.5 | | 0.166 | 33.0 | 11220 | 0.1657 | 0.5 | | 0.1674 | 34.0 | 11560 | 0.1619 | 0.5 | | 0.1674 | 35.0 | 11900 | 0.1585 | 0.5 | | 0.1636 | 36.0 | 12240 | 0.1592 | 0.5 | | 0.1612 | 37.0 | 12580 | 0.1568 | 0.5 | | 0.1612 | 38.0 | 12920 | 0.1607 | 0.5 | | 0.159 | 39.0 | 13260 | 0.1577 | 0.5 | | 0.1586 | 40.0 | 13600 | 0.1566 | 0.5 | | 0.1586 | 41.0 | 13940 | 0.1584 | 0.5 | | 0.1587 | 42.0 | 14280 | 0.1620 | 0.5 | | 0.1577 | 43.0 | 14620 | 0.1571 | 0.5 | | 0.1577 | 44.0 | 14960 | 0.1610 | 0.5 | | 0.1587 | 45.0 | 15300 | 0.1576 | 0.5 | | 0.1578 | 46.0 | 15640 | 0.1577 | 0.5 | | 0.1578 | 47.0 | 15980 | 0.1570 | 0.5 | | 0.1592 | 48.0 | 16320 | 0.1578 | 0.5 | | 0.1578 | 49.0 | 16660 | 0.1565 | 0.5 | | 0.1582 | 50.0 | 17000 | 0.1581 | 0.5 | | 0.1582 | 51.0 | 17340 | 0.1571 | 0.5 | | 0.1569 | 52.0 | 17680 | 0.1585 | 0.5 | | 0.1586 | 53.0 | 18020 | 0.1566 | 0.5 | | 0.1586 | 54.0 | 18360 | 0.1579 | 0.5 | | 0.1576 | 55.0 | 18700 | 0.1578 | 0.5 | | 0.1577 | 56.0 | 19040 | 0.1581 | 0.5 | | 0.1577 | 57.0 | 19380 | 0.1566 | 0.5 | | 0.1571 | 58.0 | 19720 | 0.1572 | 0.5 | | 0.1578 | 59.0 | 20060 | 0.1562 | 0.5 | | 0.1578 | 
60.0 | 20400 | 0.1579 | 0.5 | | 0.157 | 61.0 | 20740 | 0.1578 | 0.5 | | 0.157 | 62.0 | 21080 | 0.1566 | 0.5 | | 0.157 | 63.0 | 21420 | 0.1572 | 0.5 | | 0.1562 | 64.0 | 21760 | 0.1594 | 0.5 | | 0.1584 | 65.0 | 22100 | 0.1582 | 0.5 | | 0.1584 | 66.0 | 22440 | 0.1566 | 0.5 | | 0.1549 | 67.0 | 22780 | 0.1579 | 0.5 | | 0.1582 | 68.0 | 23120 | 0.1587 | 0.5 | | 0.1582 | 69.0 | 23460 | 0.1580 | 0.5 | | 0.157 | 70.0 | 23800 | 0.1580 | 0.5 | | 0.1563 | 71.0 | 24140 | 0.1585 | 0.5 | | 0.1563 | 72.0 | 24480 | 0.1576 | 0.5 | | 0.1562 | 73.0 | 24820 | 0.1570 | 0.5 | | 0.1566 | 74.0 | 25160 | 0.1576 | 0.5 | | 0.156 | 75.0 | 25500 | 0.1570 | 0.5 | | 0.156 | 76.0 | 25840 | 0.1575 | 0.5 | | 0.1566 | 77.0 | 26180 | 0.1584 | 0.5 | | 0.1561 | 78.0 | 26520 | 0.1572 | 0.5 | | 0.1561 | 79.0 | 26860 | 0.1580 | 0.5 | | 0.1561 | 80.0 | 27200 | 0.1576 | 0.5 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Yoshimitsujhi/finetuned
Yoshimitsujhi
2023-09-01T07:40:06Z
0
0
null
[ "generated_from_trainer", "base_model:tiiuae/falcon-7b", "base_model:finetune:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
2023-08-31T12:31:41Z
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
model-index:
- name: finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# finetuned

This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 20

### Training results

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
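
A minimal loading sketch, assuming the repository contains full Falcon-7B-style weights (rather than a PEFT adapter) and that `accelerate` is installed for `device_map="auto"`:

```python
# Minimal, hypothetical loading sketch; adjust if the checkpoint layout differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yoshimitsujhi/finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # Falcon checkpoints historically required custom modeling code
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```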
sosuneko/ppo-SnowballTarget
sosuneko
2023-09-01T07:38:53Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-09-01T07:38:46Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: sosuneko/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
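
To fetch the trained `*.onnx` and configuration files locally, a minimal `huggingface_hub` sketch (the local directory name is arbitrary):

```python
# Sketch for downloading the trained agent files from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="sosuneko/ppo-SnowballTarget",
    local_dir="./ppo-SnowballTarget",
)
print(f"Model files downloaded to: {local_dir}")
```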
dg845/diffusers-ct_imagenet64
dg845
2023-09-01T07:27:08Z
3
0
diffusers
[ "diffusers", "safetensors", "generative model", "unconditional image generation", "arxiv:2303.01469", "arxiv:1506.03365", "arxiv:1512.00567", "license:mit", "diffusers:ConsistencyModelPipeline", "region:us" ]
null
2023-06-21T11:08:15Z
---
license: mit
tags:
- generative model
- unconditional image generation
---

Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.

From the paper abstract:

> Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64 x 64 and LSUN 256 x 256.

Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model. Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.

More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep. The authors call this procedure "consistency distillation (CD)". Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".

This model is a `diffusers`-compatible version of the [ct_imagenet64.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models). This model was trained on the ImageNet 64x64 dataset using the consistency training (CT) algorithm. See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.

## Download

The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).

The `diffusers` pipeline for the `ct_imagenet64` model can be downloaded as follows:

```python
from diffusers import ConsistencyModelPipeline

pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-ct_imagenet64")
```

## Usage

The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).

Here is an example of using the `ct_imagenet64` checkpoint with `diffusers`:

```python
import torch
from diffusers import ConsistencyModelPipeline

device = "cuda"
# Load the ct_imagenet64 checkpoint.
model_id_or_path = "dg845/diffusers-ct_imagenet64"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("ct_imagenet64_onestep_sample.png")

# Onestep sampling, class-conditional image generation
# ImageNet-64 class label 145 corresponds to king penguins
image = pipe(num_inference_steps=1, class_labels=145).images[0]
image.save("ct_imagenet64_onestep_sample_penguin.png")

# Multistep sampling, class-conditional image generation
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L80
image = pipe(num_inference_steps=None, timesteps=[106, 0], class_labels=145).images[0]
image.save("ct_imagenet64_multistep_sample_penguin.png")
```

## Model Details

- **Model type:** Consistency model unconditional image generation model
- **Dataset:** ImageNet 64x64
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was trained by the Consistency Model authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)

## Datasets

_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.

The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:

**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).

**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.

## Performance

_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.

These models are intended to generate samples consistent with their training distributions. This has been measured in terms of FID, Inception Score, Precision, and Recall. These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567), which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).

## Intended Use

_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.

These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.

## Limitations

_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.

These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces. This may stem from ImageNet's emphasis on non-human objects.

In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.

Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
sosuneko/Reinforce-Pixelcopter-PLE-v0
sosuneko
2023-09-01T07:27:02Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-01T07:26:57Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 33.70 +/- 23.29
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dg845/diffusers-cd_imagenet64_lpips
dg845
2023-09-01T07:23:28Z
4
0
diffusers
[ "diffusers", "safetensors", "generative model", "unconditional image generation", "arxiv:2303.01469", "arxiv:2206.00364", "arxiv:1506.03365", "arxiv:1512.00567", "license:mit", "diffusers:ConsistencyModelPipeline", "region:us" ]
null
2023-06-21T10:57:25Z
---
license: mit
tags:
- generative model
- unconditional image generation
---

Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.

From the paper abstract:

> Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64 x 64 and LSUN 256 x 256.

Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model. Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.

More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep. The authors call this procedure "consistency distillation (CD)". Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".

This model is a `diffusers`-compatible version of the [cd_imagenet64_lpips.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models). This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the ImageNet 64x64 dataset, using [LPIPS](https://richzhang.github.io/PerceptualSimilarity/) as the measure of closeness. See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.

## Download

The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).

The `diffusers` pipeline for the `cd-imagenet64-lpips` model can be downloaded as follows:

```python
from diffusers import ConsistencyModelPipeline

pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-cd_imagenet64_lpips")
```

## Usage

The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).

Here is an example of using the `cd_imagenet64_lpips` checkpoint with `diffusers`:

```python
import torch
from diffusers import ConsistencyModelPipeline

device = "cuda"
# Load the cd_imagenet64_lpips checkpoint.
model_id_or_path = "dg845/diffusers-cd_imagenet64_lpips"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("cd_imagenet64_lpips_onestep_sample.png")

# Onestep sampling, class-conditional image generation
# ImageNet-64 class label 145 corresponds to king penguins
image = pipe(num_inference_steps=1, class_labels=145).images[0]
image.save("cd_imagenet64_lpips_onestep_sample_penguin.png")

# Multistep sampling, class-conditional image generation
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L74
image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0]
image.save("cd_imagenet64_lpips_multistep_sample_penguin.png")
```

## Model Details

- **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model
- **Dataset:** ImageNet 64x64
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)

## Datasets

_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.

The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:

**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).

**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.

## Performance

_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.

These models are intended to generate samples consistent with their training distributions. This has been measured in terms of FID, Inception Score, Precision, and Recall. These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567), which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).

## Intended Use

_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.

These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.

## Limitations

_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.

These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces. This may stem from ImageNet's emphasis on non-human objects.

In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.

Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
julian5383/word_ethical
julian5383
2023-09-01T07:21:44Z
114
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "ko", "dataset:kowiki", "dataset:news", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-01T07:20:31Z
---
language: ko
datasets:
- kowiki
- news
---

deeqBERT-base
---
- model: bert-base
- vocab: bert-wordpiece, 35k
- version: latest
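
A minimal fill-mask sketch, assuming the standard `[MASK]` token of the BERT wordpiece tokenizer; the Korean example sentence is arbitrary:

```python
# Minimal fill-mask sketch; the example sentence ("The capital of South Korea is [MASK].")
# and the [MASK] token are assumptions based on the BERT setup described above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="julian5383/word_ethical")
for prediction in fill_mask("대한민국의 수도는 [MASK]이다."):
    print(prediction["token_str"], round(prediction["score"], 3))
```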
dg845/consistency-model-pipelines
dg845
2023-09-01T07:18:07Z
6
1
diffusers
[ "diffusers", "safetensors", "generative model", "unconditional image generation", "arxiv:2303.01469", "arxiv:2206.00364", "arxiv:1506.03365", "arxiv:1512.00567", "license:mit", "diffusers:ConsistencyModelPipeline", "region:us" ]
null
2023-06-07T09:16:38Z
--- license: mit tags: - generative model - unconditional image generation --- Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. From the paper abstract: > Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64 x 64 and LSUN 256 x 256. Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model. Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net. More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep. The authors call this procedure "consistency distillation (CD)". Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)". This model is a `diffusers`-compatible version of the [cd_imagenet64_l2.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpont from the [original code and model release](https://github.com/openai/consistency_models). This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the ImageNet 64x64 dataset, using the [L2 distance](https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) as the measure of closeness. See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information. ## Download The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models). 
The `diffusers` pipeline for the `cd-imagenet64-l2` model can be downloaded as follows: ```python from diffusers import ConsistencyModelPipeline pipe = ConsistencyModelPipeline.from_pretrained("dg845/consistency-model-pipelines") ``` ## Usage The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models). Here is an example of using the `cd-imagenet64-l2` checkpoint with `diffusers`: ```python import torch from diffusers import ConsistencyModelPipeline device = "cuda" # Load the cd_imagenet64_l2 checkpoint. model_id_or_path = "dg845/consistency-model-pipelines" pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) pipe.to(device) # Onestep Sampling image = pipe(num_inference_steps=1).images[0] image.save("cd_imagenet64_l2_onestep_sample.png") # Onestep sampling, class-conditional image generation # ImageNet-64 class label 145 corresponds to king penguins image = pipe(num_inference_steps=1, class_labels=145).images[0] image.save("cd_imagenet64_l2_onestep_sample_penguin.png") # Multistep sampling, class-conditional image generation # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ``` ## Model Details - **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model - **Dataset:** ImageNet 64x64 - **License:** MIT - **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors. - **Resources for more information:**: [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md) ## Datasets _Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_. The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models: **ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish). **LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. 
The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs. ## Performance _Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_. These models are intended to generate samples consistent with their training distributions. This has been measured in terms of FID, Inception Score, Precision, and Recall. These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567), which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces). ## Intended Use _Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_. These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery. ## Limitations _Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_. These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces. This may stem from ImageNet's emphasis on non-human objects. In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores. Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
Yntec/Reddit
Yntec
2023-09-01T07:17:35Z
686
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "nutbutter", "acheong08", "license:creativeml-openrail-m", "autotrain_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-26T11:20:49Z
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- nutbutter
- acheong08
inference: false
---

Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW.

# Reddit

A mix of RedditAlpha and REV 1.0, with the Color101VAE baked in.

Sample and prompt:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/-gIhjHv5k4s3KAv_FMJR3.png)

cute pretty girl, sitting, detailed chibi eyes, holding super soaker, beautiful detailed legs, cowgirl, gorgeous detailed hair, cowboy hat, magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes by kyoani and clay mann

Original page: https://civitai.com/models/5216?modelVersionId=6048

# RedditOmega

A model made by mistake by using Weighted Sum 0.3 instead of 0.7, but it's a nice model still.

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/9TGXYgCT_bR8IO-VvXfpq.png)

# RedditAlpha

A mix of F222 with subreddit-v3 (many attempts were done to implement subreddit-v4 to v6, but all of them failed). This is an unsafe model and should only be used for research purposes.

# Recipes

- Weighted Sum 0.5: F222 + subreddit-v3 = RedditBeta
- Add Difference 1.0: sd-1.5 + (RedditBeta - sd-1.4) = RedditAlpha
- Weighted Sum 0.3: REV + RedditAlpha = RedditOmega
- Weighted Sum 0.7: REV + RedditAlpha = RedditZeta
- Bake VAE Color 101 = Reddit
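
A minimal text-to-image sketch, assuming the repository ships a standard `StableDiffusionPipeline` layout as the pipeline tag suggests; the prompt is adapted from the sample above:

```python
# Minimal generation sketch; prompt shortened from the sample prompt above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Reddit", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "cute pretty girl, sitting, detailed chibi eyes, cowboy hat, magazine ad, iconic, 1943, sharp focus"
image = pipe(prompt, negative_prompt="nude, naked").images[0]
image.save("reddit_sample.png")
```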
yrajm1997/gpt_model
yrajm1997
2023-09-01T07:10:11Z
156
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-01T07:08:30Z
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt_model

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
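
A minimal generation sketch with the standard `transformers` text-generation pipeline; the prompt is arbitrary:

```python
# Minimal generation sketch for this GPT-2 fine-tune.
from transformers import pipeline

generator = pipeline("text-generation", model="yrajm1997/gpt_model")
result = generator("Once upon a time,", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```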
skaliy/endometrial_cancer_segmentation
skaliy
2023-09-01T07:07:14Z
0
1
fastMONAI
[ "fastMONAI", "medical", "image-segmentation", "en", "region:us" ]
image-segmentation
2023-06-27T12:10:42Z
--- language: - en pipeline_tag: image-segmentation tags: - medical library_name: fastMONAI --- # Endometrial cancer segmentation This repository contains weights and exported learner (encapsulates both the model architecture and its trained parameters) for a deep learning model designed to automate the segmentation of endometrial cancer on MR images. Our VIBE model utilizes a Residual U-Net architecture, trained on data derived from the study [Automated segmentation of endometrial cancer on MR images using deep learning](https://link.springer.com/content/pdf/10.1038/s41598-020-80068-9.pdf). The primary objective of this repository is to reproduce the results reported in the study and to integrate this model into research PACS (see [Results for VIBE](#results-for-vibe) section). In addition, we have looked at improving the segmentation performance using multi-sequence MR images (T2w, VIBE, and ADC) (see [Results for multi-sequence (T2, VIBE, and ADC)](#results-for-multi-sequence-t2-vibe-and-adc) section). ## Requirements Last checked and validated with fastMONAI version 0.3.9. Please ensure that you have the correct version of fastMONAI installed to guarantee the correct operation of the model. ## Usage The source code for training the model and running inference on your own data is available at: https://github.com/MMIV-ML/fastMONAI/tree/master/research/endometrial_cancer. Test our model live with the Gradio app for VIBE on [Hugging Face Spaces](https://skaliy-endometrial-cancer-segmentation-app.hf.space). ## Results for VIBE Note that our results are not directly comparable with the results reported in [study](https://link.springer.com/content/pdf/10.1038/s41598-020-80068-9.pdf), as we opted to use the test set for validation to allocate more data to training. Unlike the approach detailed in the study, we refrained from post-processing steps, such as retaining only the largest object. Predictions from new test cases indicate that this method could occasionally eliminate the tumor. 
Below is the box plot showcasing predictions on the validation set:![](vibe_boxplot.png) The results from the validation set are also presented in the table below: | | subject_id | tumor_vol | inter_rater | r1_ml | r2_ml | n_components | |---:|-------------:|------------:|--------------:|---------:|-----------:|---------------:| | 0 | 29 | 4.16 | 0.201835 | 0.806382 | 0.00623053 | 3 | | 1 | 32 | 8 | 0.684142 | 0.293306 | 0.209449 | 4 | | 2 | 36 | 19.06 | 0.92875 | 0.793055 | 0.784799 | 2 | | 3 | 47 | 11.01 | 0.944209 | 0.900945 | 0.898409 | 2 | | 4 | 50 | 6.26 | 0.722867 | 0.614357 | 0.624832 | 1 | | 5 | 65 | 13.09 | 0.930613 | 0.879279 | 0.850546 | 2 | | 6 | 67 | 3.71 | 0.943498 | 0.887189 | 0.878163 | 2 | | 7 | 75 | 7.16 | 0.263539 | 0.774237 | 0.266619 | 2 | | 8 | 86 | 7.04 | 0.842577 | 0.821208 | 0.798148 | 1 | | 9 | 135 | 8.1 | 0.839964 | 0.758176 | 0.680348 | 2 | | 10 | 140 | 19.78 | 0.895506 | 0.936177 | 0.874019 | 4 | | 11 | 164 | 16.98 | 0.905008 | 0.923559 | 0.887268 | 1 | | 12 | 246 | 6.59 | 0.899448 | 0.895311 | 0.860322 | 3 | | 13 | 255 | 36.22 | 0.955784 | 0.927517 | 0.921816 | 6 | | 14 | 343 | 0.69 | 0.528261 | 0.840237 | 0.600751 | 4 | | 15 | 349 | 2.96 | 0.912664 | 0.828181 | 0.778983 | 1 | | 16 | 367 | 1.02 | 0.0734848 | 0.391737 | 0.118035 | 1 | | 17 | 370 | 10.82 | 0.953443 | 0.917094 | 0.908893 | 1 | | 18 | 371 | 3.83 | 0.859781 | 0.684751 | 0.618114 | 1 | | 19 | 375 | 11.67 | 0.911141 | 0.921079 | 0.91056 | 4 | | 20 | 377 | 4.37 | 0.782994 | 0.712791 | 0.680165 | 1 | | 21 | 381 | 7.63 | 0.89199 | 0.246428 | 0.238641 | 1 | | 22 | 385 | 2.67 | 0.803215 | 0.641916 | 0.60169 | 1 | | 23 | 395 | 0.68 | 0.770738 | 0.198273 | 0.236343 | 5 | | 24 | 397 | 5.94 | 0.904544 | 0.882265 | 0.874036 | 3 | | 25 | 409 | 11.86 | 0.944934 | 0.900727 | 0.900767 | 1 | | 26 | 411 | 5.98 | 0.949977 | 0.933271 | 0.929499 | 1 | | 27 | 425 | 0.91 | 0.802867 | 0.589069 | 0.545761 | 1 | | 28 | 434 | 94.42 | 0.894601 | 0.590408 | 0.580585 | 1 | | 29 | 531 | 22.08 | 0.89225 | 0.555066 | 0.505109 | 1 | | 30 | 540 | 8.35 | 0.923702 | 0.855009 | 0.840958 | 1 | <b>Median DSC</b>: 0.8946, 0.8212, 0.779 Prediction on a new subject in the research PACS: ![](research_pacs_predicition.png) ## Results for multi-sequence (T2, VIBE, and ADC) The box plot of the predictions on the validation set: ![](t2_vibe_adc_boxplot.png) The results from the validation set are also presented in the table below: | | subject_id | tumor_vol | inter_rater | r1_ml | r2_ml | n_components | |---:|-------------:|------------:|--------------:|---------:|----------:|---------------:| | 0 | 29 | 4.16 | 0.201835 | 0.859937 | 0.148586 | 4 | | 1 | 32 | 8 | 0.684142 | 0.662779 | 0.515479 | 10 | | 2 | 36 | 19.06 | 0.92875 | 0.902343 | 0.888306 | 1 | | 3 | 47 | 11.01 | 0.944209 | 0.907344 | 0.907 | 3 | | 4 | 50 | 6.26 | 0.722867 | 0.581594 | 0.540991 | 5 | | 5 | 65 | 13.09 | 0.930613 | 0.889782 | 0.862255 | 4 | | 6 | 67 | 3.71 | 0.943498 | 0.851658 | 0.842331 | 2 | | 7 | 75 | 7.16 | 0.263539 | 0.750551 | 0.205457 | 2 | | 8 | 86 | 7.04 | 0.842577 | 0.87216 | 0.81374 | 1 | | 9 | 135 | 8.1 | 0.839964 | 0.80436 | 0.747164 | 1 | | 10 | 140 | 19.78 | 0.895506 | 0.907457 | 0.852548 | 1 | | 11 | 164 | 16.98 | 0.905008 | 0.92533 | 0.893135 | 2 | | 12 | 246 | 6.59 | 0.899448 | 0.906569 | 0.852195 | 5 | | 13 | 255 | 36.22 | 0.955784 | 0.924517 | 0.927624 | 2 | | 14 | 343 | 0.69 | 0.528261 | 0.868251 | 0.457711 | 3 | | 15 | 349 | 2.96 | 0.912664 | 0.85214 | 0.819898 | 1 | | 16 | 367 | 1.02 | 0.0734848 | 0.383455 | 0.0891463 | 3 | | 17 | 370 | 10.82 | 
0.953443 | 0.916154 | 0.911768 | 2 | | 18 | 371 | 3.83 | 0.859781 | 0.593136 | 0.565848 | 8 | | 19 | 375 | 11.67 | 0.911141 | 0.898501 | 0.910147 | 3 | | 20 | 377 | 4.37 | 0.782994 | 0.713798 | 0.646684 | 3 | | 21 | 381 | 7.63 | 0.89199 | 0.4375 | 0.430847 | 1 | | 22 | 385 | 2.67 | 0.803215 | 0.688608 | 0.624595 | 1 | | 23 | 395 | 0.68 | 0.770738 | 0.385992 | 0.43154 | 2 | | 24 | 397 | 5.94 | 0.904544 | 0.868022 | 0.850653 | 6 | | 25 | 409 | 11.86 | 0.944934 | 0.83407 | 0.833206 | 5 | | 26 | 411 | 5.98 | 0.949977 | 0.867137 | 0.866112 | 1 | | 27 | 425 | 0.91 | 0.802867 | 0.557732 | 0.475499 | 3 | | 28 | 434 | 94.42 | 0.894601 | 0.618916 | 0.605596 | 6 | | 29 | 531 | 22.08 | 0.89225 | 0.349648 | 0.319533 | 1 | | 30 | 540 | 8.35 | 0.923702 | 0.890343 | 0.88052 | 1 | <b>Median DSC</b>: 0.8946, 0.8521, 0.8137 ## Results for multi-sequence (T2, VIBE, and ADC) with extra training data (n=54) Need to run cross-validation to make a better comparison. | | subject_id | tumor_vol | inter_rater | r1_ml | r2_ml | n_components | |---:|-------------:|------------:|--------------:|---------:|----------:|---------------:| | 0 | 29 | 4.16 | 0.201835 | 0.836437 | 0.0599303 | 2 | | 1 | 32 | 8 | 0.684142 | 0.65186 | 0.503093 | 11 | | 2 | 36 | 19.06 | 0.92875 | 0.876779 | 0.862773 | 3 | | 3 | 47 | 11.01 | 0.944209 | 0.914218 | 0.911429 | 4 | | 4 | 50 | 6.26 | 0.722867 | 0.667869 | 0.60398 | 1 | | 5 | 65 | 13.09 | 0.930613 | 0.88374 | 0.859066 | 1 | | 6 | 67 | 3.71 | 0.943498 | 0.861391 | 0.851904 | 1 | | 7 | 75 | 7.16 | 0.263539 | 0.769195 | 0.236445 | 4 | | 8 | 86 | 7.04 | 0.842577 | 0.848937 | 0.80314 | 3 | | 9 | 135 | 8.1 | 0.839964 | 0.810392 | 0.732383 | 1 | | 10 | 140 | 19.78 | 0.895506 | 0.92261 | 0.865316 | 1 | | 11 | 164 | 16.98 | 0.905008 | 0.923593 | 0.879799 | 5 | | 12 | 246 | 6.59 | 0.899448 | 0.919342 | 0.864234 | 1 | | 13 | 255 | 36.22 | 0.955784 | 0.939234 | 0.938806 | 2 | | 14 | 343 | 0.69 | 0.528261 | 0.839357 | 0.448649 | 5 | | 15 | 349 | 2.96 | 0.912664 | 0.877018 | 0.839009 | 1 | | 16 | 367 | 1.02 | 0.0734848 | 0.255149 | 0.0615073 | 1 | | 17 | 370 | 10.82 | 0.953443 | 0.916431 | 0.907043 | 18 | | 18 | 371 | 3.83 | 0.859781 | 0.508698 | 0.475138 | 1 | | 19 | 375 | 11.67 | 0.911141 | 0.90593 | 0.910805 | 1 | | 20 | 377 | 4.37 | 0.782994 | 0.622583 | 0.598939 | 4 | | 21 | 381 | 7.63 | 0.89199 | 0.392978 | 0.381061 | 1 | | 22 | 385 | 2.67 | 0.803215 | 0.666327 | 0.583576 | 2 | | 23 | 395 | 0.68 | 0.770738 | 0.53442 | 0.54433 | 4 | | 24 | 397 | 5.94 | 0.904544 | 0.867964 | 0.868074 | 5 | | 25 | 409 | 11.86 | 0.944934 | 0.826939 | 0.827658 | 5 | | 26 | 411 | 5.98 | 0.949977 | 0.786394 | 0.796158 | 1 | | 27 | 425 | 0.91 | 0.802867 | 0.508261 | 0.43545 | 1 | | 28 | 434 | 94.42 | 0.894601 | 0.77102 | 0.758085 | 2 | | 29 | 531 | 22.08 | 0.89225 | 0.271076 | 0.253303 | 1 | | 30 | 540 | 8.35 | 0.923702 | 0.898613 | 0.890637 | 1 | <b>Median DSC</b>: 0.8946, 0.8364, 0.7962 ## Support and Contribution For any issues related to the model or the source code, please open an issue in the corresponding GitHub repository. Contributions to the code or the model are welcome and should be proposed through a pull request.
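
A rough, hypothetical sketch for loading the exported learner via the Hub; the filename is a placeholder (check the repository file list), and fastMONAI must be installed so the pickled objects can be resolved:

```python
# Hypothetical loading sketch -- "learner.pkl" is a placeholder filename, and fastMONAI
# needs to be installed so the exported learner's custom transforms can be unpickled.
from huggingface_hub import hf_hub_download
from fastai.learner import load_learner

learner_path = hf_hub_download(
    repo_id="skaliy/endometrial_cancer_segmentation",
    filename="learner.pkl",  # placeholder: replace with the actual exported-learner file
)
learn = load_learner(learner_path)
# Inference would then follow the fastMONAI workflow in the linked GitHub repository.
```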
nightdude/config_80090
nightdude
2023-09-01T06:54:31Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-01T06:54:03Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.5.0.dev0
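
A minimal sketch for loading this adapter with PEFT; the base model is not named in the card, so it is read from the adapter config, and 4-bit loading mirrors the `bitsandbytes` settings above:

```python
# Minimal adapter-loading sketch; assumes the adapter config records the base model id
# and that bitsandbytes + accelerate are installed for 4-bit loading.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "nightdude/config_80090"
config = PeftConfig.from_pretrained(adapter_id)

base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_4bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)
```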
fnlp/SpeechTokenizer
fnlp
2023-09-01T06:52:14Z
0
10
null
[ "arxiv:2308.16692", "region:us" ]
null
2023-09-01T04:52:32Z
# SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models

<a href='https://github.com/ZhangXInFD/SpeechTokenizer'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/abs/2308.16692'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>

## Introduction

This is the code for the SpeechTokenizer presented in [SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models](https://arxiv.org/abs/2308.16692). SpeechTokenizer is a unified speech tokenizer for speech large language models, which adopts the Encoder-Decoder architecture with residual vector quantization (RVQ). Unifying semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of speech information hierarchically across different RVQ layers. Specifically, the code indices output by the first RVQ quantizer can be considered semantic tokens, and the outputs of the remaining quantizers can be regarded as acoustic tokens, which serve as supplements for the information lost by the first quantizer.

We provide our models:
* A model operated at 16khz on monophonic speech trained on Librispeech with average representation across all HuBERT layers as semantic teacher.

<br>
<p align="center">
    <img src="images/overview.png" width="95%"> <br>
    Overview
</p>
<p align="center">
    <img src="images/speechtokenizer_framework.jpg" width="95%"> <br>
    The SpeechTokenizer framework.
</p>
<br>

Welcome to try our [SLMTokBench](https://github.com/0nutation/SLMTokBench), and we will also open source our [USLM](https://github.com/0nutation/USLM)!

## Samples

Samples are provided on [our demo page](https://0nutation.github.io/SpeechTokenizer.github.io/).

## Installation

SpeechTokenizer requires Python >= 3.8 and a reasonably recent version of PyTorch. To install SpeechTokenizer, you can run from this repository:

```bash
pip install -U speechtokenizer

# or you can clone the repo and install locally
git clone https://github.com/ZhangXInFD/SpeechTokenizer.git
cd SpeechTokenizer
pip install .
```

## Usage

### Model storage

| Model | Description |
|:----|:----|
|[speechtokenizer_hubert_avg](https://huggingface.co/fnlp/SpeechTokenizer/tree/main/speechtokenizer_hubert_avg)|Adopt average representation across all HuBERT layers as semantic teacher |

### Load model

```python
from speechtokenizer import SpeechTokenizer

config_path = '/path/config.json'
ckpt_path = '/path/SpeechTokenizer.pt'
model = SpeechTokenizer.load_from_checkpoint(config_path, ckpt_path)
model.eval()
```

### Extracting discrete representations

```python
import torchaudio
import torch

# Load and pre-process speech waveform
wav, sr = torchaudio.load('<SPEECH_FILE_PATH>')
if sr != model.sample_rate:
    wav = torchaudio.functional.resample(wav, sr, model.sample_rate)
wav = wav.unsqueeze(0)

# Extract discrete codes from SpeechTokenizer
with torch.no_grad():
    codes = model.encode(wav)  # codes: (n_q, B, T)

semantic_tokens = codes[0, :, :]
acoustic_tokens = codes[1:, :, :]
```

### Decoding discrete representations

```python
# Decoding from the first quantizers to ith quantizers
wav = model.decode(codes[:(i + 1)])  # wav: (B, 1, T)

# Decoding from ith quantizers to jth quantizers
wav = model.decode(codes[i: (j + 1)], st=i)

# Concatenating semantic tokens and acoustic tokens and then decoding
semantic_tokens = ...  # (..., B, T)
acoustic_tokens = ...  # (..., B, T)
wav = model.decode(torch.cat([semantic_tokens, acoustic_tokens], axis=0))
```

## Citation

If you use this code or result in your paper, please cite our work as:

```tex
@misc{zhang2023speechtokenizer,
      title={SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models},
      author={Xin Zhang and Dong Zhang and Shimin Li and Yaqian Zhou and Xipeng Qiu},
      year={2023},
      eprint={2308.16692},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

The code in this repository is released under the Apache 2.0 license as found in the [LICENSE](LICENSE) file.
ryanyip7777/pmc_vit_l_14
ryanyip7777
2023-09-01T06:45:52Z
47
3
open_clip
[ "open_clip", "safetensors", "clip", "biology", "chemistry", "medical", "text-to-image", "en", "dataset:axiong/pmc_oa_beta", "region:us" ]
text-to-image
2023-07-23T23:56:43Z
---
datasets:
- axiong/pmc_oa_beta
language:
- en
library_name: open_clip
pipeline_tag: text-to-image
tags:
- biology
- chemistry
- medical
---

### Model Description

The model is fine-tuned from OpenAI's ViT-L-14 on the PMC_OA_beta and ROCO datasets, using the [open_clip](https://github.com/mlfoundations/open_clip) toolkit.

### Training

```bash
python -m training.main \
    --save-frequency 2 \
    --zeroshot-frequency 1 \
    --report-to tensorboard \
    --train-data="/home/data1/ryanyip/huggingface-models/pmc_oa_beta/train.csv" \
    --val-data="/home/data1/ryanyip/huggingface-models/pmc_oa_beta/sample_valid.csv" \
    --csv-separator "," \
    --csv-img-key image \
    --csv-caption-key caption \
    --warmup 10000 \
    --batch-size=128 \
    --lr=1e-5 \
    --wd=0.2 \
    --epochs=30 \
    --workers=8 \
    --model "ViT-L-14" \
    --name "pmc_vit_l_14" \
    --pretrained "ViT-L-14_state_dict.pt" \
    --save-most-recent
```

*ViT-L-14_state_dict.pt is the pretrained weight from openai/ViT-L-14.*
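
A minimal inference sketch, assuming the repository follows the standard open_clip Hub layout (`hf-hub:` prefix); the image path and candidate captions are placeholders:

```python
# Minimal zero-shot retrieval sketch; image path and captions below are placeholders.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:ryanyip7777/pmc_vit_l_14")
tokenizer = open_clip.get_tokenizer("hf-hub:ryanyip7777/pmc_vit_l_14")
model.eval()

image = preprocess(Image.open("example_figure.png")).unsqueeze(0)  # placeholder image
texts = tokenizer(["chest X-ray", "brain MRI", "histopathology slide"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```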
uukuguy/speechless-llama2-luban-orca-platypus-13b
uukuguy
2023-09-01T06:28:52Z
1,410
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-01T02:43:40Z
--- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**" extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en datasets: - garage-bAInd/Open-Platypus library_name: transformers pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- <p><h1> speechless-llama2-orca-platypus-13b </h1></p> speechless-llama2-orca-platypus-13b is a merge of AIDC-ai-business/Luban-13B and Open-Orca/OpenOrca-Platypus2-13B. | Metric | Value | | --- | --- | | ARC | 62.54 | | HellaSwag | 82.76 | | MMLU | 59.23 | | TruthfulQA | 54.66 | | Average | 64.80 | # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. 
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. 
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)

## Reporting Issues

Please report any software “bug,” or other problems with the models through one of the following means:

- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Llama Model Index

|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
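The `[INST]`/`<<SYS>>` chat formatting described above can be applied by hand when prompting the merged model through 🤗 Transformers. The sketch below is illustrative only: the repository id is assumed from the model name, and since this checkpoint is a merge of instruction-tuned models it may also respond well to other prompt formats (e.g. Alpaca-style), so treat the template as a starting point rather than an official recipe.

```python
# Hedged sketch, not reference code: wrap a user message in the [INST] / <<SYS>>
# template described above and generate a reply. The repo id below is assumed
# from the model name and is not stated in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-llama2-orca-platypus-13b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_prompt = "You are a helpful, honest assistant."
user_message = "Explain what model merging is in one paragraph."

# BOS/EOS special tokens are added by the tokenizer, so they are omitted here.
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message.strip()} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```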
emotibot-inc/Moli-Pro
emotibot-inc
2023-09-01T06:25:18Z
0
0
null
[ "region:us" ]
null
2023-09-01T04:05:07Z
# README # Moli-Pro [Hugging Face](https://huggingface.co/emotibot-inc/Moli-Pro) | [GitHub](https://github.com/emotibot-inc/Moli-Pro) | [Model Scope](https://modelscope.cn/models/emotibotinc/Moli-Pro/summary) | [Emotibrain](https://brain.emotibot.com/?source=molipro_huggingface) # **模型介绍** 魔力-Pro是竹间智能基于超过2亿token的基础语料训练的基础模型。它具备以下特点: 1. 上下文长度:魔力大模型具有强大的上下文理解能力,其上下文长度可以达到4096个token。这意味着它可以处理和理解更长的文本段落,从而在生成或翻译长篇文章时提供更准确的结果。 2. 训练数据:魔力大模型接受了超过100万条人类标注进行训练。这使得该模型能够更好地理解和生成人类语言,提高了其在各种任务中的表现。 3. 模型优化:相比于llama模型,魔力大模型使用了优化的自回归Transformer。这种Transformer使得魔力大模型在处理复杂任务时更加高效。 4. 数据清理和混合更新:为了进一步提升性能,魔力大模型进行了更强大的数据清理,并更新了数据混合。这两项改进都有助于提高模型对输入数据的理解和处理能力,从而产生更准确、质量更高的输出结果。 # Model **benchmark** ## **中文评测** - **CMMLU** ### Result | Model 5-shot | STEM | Humanities | Social Science | Other | China-specific | Average | | --- | --- | --- | --- | --- | --- | --- | | Multilingual-oriented | | | | | | | | [GPT4](https://openai.com/gpt4) | 65.23 | 72.11 | 72.06 | 74.79 | 66.12 | 70.95 | | [ChatGPT](https://openai.com/chatgpt) | 47.81 | 55.68 | 56.50 | 62.66 | 50.69 | 55.51 | | [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 33.33 | 43.46 | 44.28 | 44.75 | 39.46 | 41.45 | | [LLaMA-65B](https://github.com/facebookresearch/llama) | 34.47 | 40.24 | 41.55 | 42.88 | 37.00 | 39.80 | | [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 30.56 | 39.10 | 38.59 | 40.32 | 37.15 | 37.04 | | [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 27.52 | 32.47 | 32.27 | 35.77 | 31.56 | 31.88 | | Chinese-oriented | | | | | | | | [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 40.30 | 48.08 | 46.72 | 47.41 | 45.51 | 45.60 | | [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.39 | 61.57 | 60.48 | 58.57 | 55.68 | 55.74 | | [Moli-7B](https://github.com/emotibot-inc/Moli-7B) | 28.44 | 29.45 | 31.28 | 32.54 | 28.65 | 30.07 | | [Moli-Pro](https://github.com/emotibot-inc/Moli-Pro) | 30.2 | 37.5 | 36.22 | 39.71 | 33.55 | 35.44 | | [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.38 | 61.61 | 60.44 | 59.26 | 56.62 | 55.82 | | [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 42.55 | 50.98 | 50.99 | 50.80 | 48.37 | 48.80 | | [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 35.25 | 48.07 | 47.88 | 46.61 | 44.14 | 44.43 | | [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.35 | 39.22 | 39.65 | 38.62 | 37.70 | 37.48 | | [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 34.96 | 35.45 | 36.31 | 42.14 | 37.89 | 37.16 | | [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 27.12 | 33.18 | 34.87 | 35.10 | 32.97 | 32.63 | | [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 27.23 | 30.41 | 28.84 | 32.56 | 28.68 | 29.57 | | [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.49 | 27.05 | 27.42 | 29.21 | 28.05 | 27.26 | | Random | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | | Model 0-shot | STEM | Humanities | Social Science | Other | China-specific | Average | | --- | --- | --- | --- | --- | --- | --- | | Multilingual-oriented | | | | | | | | [GPT4](https://openai.com/gpt4) | 63.16 | 69.19 | 70.26 | 73.16 | 63.47 | 68.9 | | [ChatGPT](https://openai.com/chatgpt) | 44.8 | 53.61 | 54.22 | 59.95 | 49.74 | 53.22 | | [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 33.03 | 45.74 | 45.74 | 46.25 | 41.58 | 42.8 | | [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 31.11 | 41.3 | 40.87 | 40.61 | 36.05 | 38.5 | | [LLaMA-65B](https://github.com/facebookresearch/llama) | 31.09 | 34.45 | 36.05 | 37.94 | 32.89 | 34.88 | | 
[Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 26.46 | 29.36 | 31.81 | 31.55 | 29.17 | 30.06 | | Chinese-oriented | | | | | | | | [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 42.51 | 48.91 | 48.85 | 50.25 | 47.57 | 47.62 | | [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.37 | 60.97 | 59.71 | 56.35 | 54.81 | 54.84 | | [Moli-7B](https://github.com/emotibot-inc/Moli-7B) | 28.48 | 32.53 | 33.45 | 35.8 | 31.09 | 32.27 | | [Moli-Pro](https://github.com/emotibot-inc/Moli-Pro) | 30.46 | 36.05 | 37.07 | 38.72 | 32.62 | 34.98 | | [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.04 | 60.49 | 59.55 | 56.6 | 55.72 | 54.63 | | [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 41.28 | 52.85 | 53.37 | 52.24 | 50.58 | 49.95 | | [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 32.79 | 44.43 | 46.78 | 44.79 | 43.11 | 42.33 | | [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.22 | 42.91 | 44.81 | 42.6 | 41.93 | 40.79 | | [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 33.72 | 36.53 | 38.07 | 46.94 | 38.32 | 38.51 | | [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 26.76 | 26.57 | 27.42 | 28.33 | 26.73 | 27.34 | | [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 25.68 | 26.35 | 27.21 | 27.92 | 26.7 | 26.88 | | [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.57 | 25.01 | 26.33 | 25.94 | 25.81 | 25.8 | | Random | 25 | 25 | 25 | 25 | 25 | 25 | # **推理对话** 您可以直接注册并登录竹间智能科技发布的大模型产品 [Emotibrain](https://brain.emotibot.com/?source=molipro_huggingface),并选择 **CoPilot**(**KKBot**) 进行的在线测试,注册即可立即使用; ![Untitled](./READMEjpg/Untitled.png) # **模型训练** 您可以直接注册并登录竹间智能科技发布的大模型产品 [Emotibrain](https://brain.emotibot.com/?source=molipro_huggingface),并选择 Fine-tune 进行 **0 代码微调**,注册即可立即使用; 详细的训练流程您可以浏览此文档:[Emotibrain 快速入门](https://brain.emotibot.com/supports/model-factory/dash-into.html)(大约 5 分钟) ![Untitled](./READMEjpg/Untitled1.png) ![Untitled](./READMEjpg/Untitled2.png) # **更多信息** 若您想了解更多 大模型训练平台 的相关信息,请访问 [Emotibrain 官网](https://brain.emotibot.com/?source=molipro_huggingface) 进行了解;
raygx/GNePT-NepSA
raygx
2023-09-01T06:21:00Z
60
0
transformers
[ "transformers", "tf", "gpt2", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-25T09:22:40Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: GNePT-NepSA results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # GNePT-NepSA This model is a fine-tuned version of [raygx/GNePT](https://huggingface.co/raygx/GNePT) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4830 - Validation Loss: 0.6690 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.03} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.4830 | 0.6690 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.11.0 - Datasets 2.1.0 - Tokenizers 0.13.3
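Since the card does not show how to run the fine-tuned classifier, a hedged inference sketch follows. The use of the TensorFlow `text-classification` pipeline and the Nepali example sentence are assumptions, and the label names returned depend on how the classification head was configured.

```python
# Hedged inference sketch (assumed usage, not from the model card): the repo
# ships TensorFlow weights for a GPT-2-style sequence classifier, so the
# text-classification pipeline with framework="tf" should be able to load it.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="raygx/GNePT-NepSA",
    framework="tf",  # the checkpoint is TensorFlow (tf tag)
)

# Example Nepali sentence ("This film is very good") -- an assumed input.
print(classifier("यो फिल्म धेरै राम्रो छ।"))
```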
dkqjrm/20230901101200
dkqjrm
2023-09-01T05:54:51Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T01:12:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230901101200' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230901101200 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.1593 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0007 - train_batch_size: 16 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 80.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 340 | 0.1696 | 0.5 | | 0.1874 | 2.0 | 680 | 0.1654 | 0.5 | | 0.1712 | 3.0 | 1020 | 0.1626 | 0.5 | | 0.1712 | 4.0 | 1360 | 0.1604 | 0.5 | | 0.1706 | 5.0 | 1700 | 0.1658 | 0.5 | | 0.1677 | 6.0 | 2040 | 0.1600 | 0.5 | | 0.1677 | 7.0 | 2380 | 0.1608 | 0.5 | | 0.1695 | 8.0 | 2720 | 0.1604 | 0.5 | | 0.1669 | 9.0 | 3060 | 0.1605 | 0.5 | | 0.1669 | 10.0 | 3400 | 0.1694 | 0.5 | | 0.168 | 11.0 | 3740 | 0.1618 | 0.5 | | 0.168 | 12.0 | 4080 | 0.1641 | 0.5 | | 0.168 | 13.0 | 4420 | 0.1601 | 0.5 | | 0.1667 | 14.0 | 4760 | 0.1601 | 0.5 | | 0.1679 | 15.0 | 5100 | 0.1640 | 0.5 | | 0.1679 | 16.0 | 5440 | 0.1638 | 0.5 | | 0.1681 | 17.0 | 5780 | 0.1636 | 0.5 | | 0.1655 | 18.0 | 6120 | 0.1645 | 0.5 | | 0.1655 | 19.0 | 6460 | 0.1627 | 0.5 | | 0.1672 | 20.0 | 6800 | 0.1601 | 0.5 | | 0.1672 | 21.0 | 7140 | 0.1618 | 0.5 | | 0.1672 | 22.0 | 7480 | 0.1668 | 0.5 | | 0.1675 | 23.0 | 7820 | 0.1599 | 0.5 | | 0.1663 | 24.0 | 8160 | 0.1608 | 0.5 | | 0.168 | 25.0 | 8500 | 0.1617 | 0.5 | | 0.168 | 26.0 | 8840 | 0.1601 | 0.5 | | 0.1667 | 27.0 | 9180 | 0.1604 | 0.5 | | 0.1655 | 28.0 | 9520 | 0.1643 | 0.5 | | 0.1655 | 29.0 | 9860 | 0.1605 | 0.5 | | 0.1675 | 30.0 | 10200 | 0.1603 | 0.5 | | 0.1664 | 31.0 | 10540 | 0.1602 | 0.5 | | 0.1664 | 32.0 | 10880 | 0.1631 | 0.5 | | 0.1666 | 33.0 | 11220 | 0.1611 | 0.5 | | 0.167 | 34.0 | 11560 | 0.1616 | 0.5 | | 0.167 | 35.0 | 11900 | 0.1613 | 0.5 | | 0.1667 | 36.0 | 12240 | 0.1600 | 0.5 | | 0.1662 | 37.0 | 12580 | 0.1600 | 0.5 | | 0.1662 | 38.0 | 12920 | 0.1702 | 0.5 | | 0.1652 | 39.0 | 13260 | 0.1599 | 0.5 | | 0.1659 | 40.0 | 13600 | 0.1600 | 0.5 | | 0.1659 | 41.0 | 13940 | 0.1605 | 0.5 | | 0.1661 | 42.0 | 14280 | 0.1601 | 0.5 | | 0.165 | 43.0 | 14620 | 0.1622 | 0.5 | | 0.165 | 44.0 | 14960 | 0.1607 | 0.5 | | 0.1664 | 45.0 | 15300 | 0.1621 | 0.5 | | 0.1654 | 46.0 | 15640 | 0.1600 | 0.5 | | 0.1654 | 47.0 | 15980 | 0.1606 | 0.5 | | 0.1666 | 48.0 | 16320 | 0.1612 | 0.5 | | 0.1652 | 49.0 | 16660 | 0.1600 | 0.5 | | 0.1658 | 50.0 | 17000 | 0.1605 | 0.5 | | 0.1658 | 51.0 | 17340 | 0.1604 | 0.5 | | 0.1647 | 52.0 | 17680 | 0.1606 | 0.5 | | 0.1657 | 53.0 | 18020 | 0.1641 | 0.5 | | 0.1657 | 54.0 | 18360 | 0.1613 | 0.5 | | 0.1644 | 55.0 | 18700 | 0.1605 | 0.5 | | 0.1643 | 56.0 | 19040 | 0.1592 | 0.5 | | 0.1643 | 57.0 | 19380 | 0.1600 | 0.5 | | 0.1632 | 58.0 | 19720 | 0.1633 | 0.5 | | 0.1643 | 59.0 | 20060 | 0.1612 | 0.5 | | 0.1643 | 60.0 | 
20400 | 0.1604 | 0.5 | | 0.163 | 61.0 | 20740 | 0.1616 | 0.5 | | 0.1623 | 62.0 | 21080 | 0.1598 | 0.5 | | 0.1623 | 63.0 | 21420 | 0.1597 | 0.5 | | 0.1616 | 64.0 | 21760 | 0.1655 | 0.5 | | 0.1636 | 65.0 | 22100 | 0.1595 | 0.5 | | 0.1636 | 66.0 | 22440 | 0.1599 | 0.5 | | 0.1599 | 67.0 | 22780 | 0.1598 | 0.5 | | 0.163 | 68.0 | 23120 | 0.1602 | 0.5 | | 0.163 | 69.0 | 23460 | 0.1587 | 0.5 | | 0.1613 | 70.0 | 23800 | 0.1604 | 0.5 | | 0.1608 | 71.0 | 24140 | 0.1599 | 0.5 | | 0.1608 | 72.0 | 24480 | 0.1587 | 0.5 | | 0.1604 | 73.0 | 24820 | 0.1610 | 0.5 | | 0.1606 | 74.0 | 25160 | 0.1592 | 0.5 | | 0.1599 | 75.0 | 25500 | 0.1587 | 0.5 | | 0.1599 | 76.0 | 25840 | 0.1593 | 0.5 | | 0.1604 | 77.0 | 26180 | 0.1589 | 0.5 | | 0.16 | 78.0 | 26520 | 0.1602 | 0.5 | | 0.16 | 79.0 | 26860 | 0.1596 | 0.5 | | 0.1599 | 80.0 | 27200 | 0.1593 | 0.5 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
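For readers who want to mirror the hyperparameters listed above with the 🤗 `Trainer`, a rough equivalent is sketched below; the output directory and the exact super_glue subset and preprocessing are assumptions not stated in the card.

```python
# Rough reconstruction of the hyperparameters listed above (a sketch, not the
# author's actual training script); the output directory is an assumed value.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="20230901101200",      # assumed
    learning_rate=0.0007,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=11,
    lr_scheduler_type="linear",
    num_train_epochs=80,
    evaluation_strategy="epoch",      # the card reports a validation loss per epoch
)
```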
kyungmin011029/category
kyungmin011029
2023-09-01T05:46:22Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:klue/bert-base", "base_model:finetune:klue/bert-base", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T05:45:57Z
--- license: cc-by-sa-4.0 base_model: klue/bert-base tags: - generated_from_keras_callback model-index: - name: category results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # category This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.32.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
Korkkork/seungyeonkara
Korkkork
2023-09-01T05:42:25Z
0
0
null
[ "kara", "Kpop", "license:openrail", "region:us" ]
null
2023-09-01T05:36:15Z
--- license: openrail tags: - kara - Kpop ---
Korkkork/Hyejeong
Korkkork
2023-09-01T05:41:44Z
0
0
null
[ "aoa", "Kpop", "license:openrail", "region:us" ]
null
2023-08-31T04:40:15Z
--- license: openrail tags: - aoa - Kpop ---
Korkkork/Chanmi
Korkkork
2023-09-01T05:40:54Z
0
0
null
[ "aoa", "Kpop", "license:openrail", "region:us" ]
null
2023-08-31T06:41:29Z
--- license: openrail tags: - aoa - Kpop ---
Korkkork/yunaaoa
Korkkork
2023-09-01T05:40:32Z
0
0
null
[ "aoa", "Kpop", "license:openrail", "region:us" ]
null
2023-08-31T17:58:39Z
--- license: openrail tags: - aoa - Kpop ---
Korkkork/jiminaoa
Korkkork
2023-09-01T05:40:03Z
0
0
null
[ "aoa", "Kpop", "license:openrail", "region:us" ]
null
2023-08-31T22:28:44Z
--- license: openrail tags: - aoa - Kpop ---
hellomyoh/translator-12000-base-polyglot1.3b_v1
hellomyoh
2023-09-01T05:08:47Z
81
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-25T07:42:22Z
|no|english|korean| --|--|--| |1 | Do you know who i am?| '제가 누군지 아시겠어요?'| |2 |Tired of scrolling through the same posts? When you create an account you’ll always come back to where you left off. With an account you can also be notified of new replies, save bookmarks, and use likes to thank others. We can all work together to make this community great.. | '같은 포스트를 계속해서 읽는 것에 지쳐버렸습니다. 계정을 만들면 항상 뒤로 가기 버튼을 눌러야 합니다. 계정을 만들면 새로운 댓글과 좋아요를 받을 수 있고, 다른 사람들에게 알림을 보낼 수 있습니다. 우리는 모두 함께 이 커뮤니티를 훌륭하게 만들기 위해 노력할 수 있습니다.' | |3 | As technology continues to advance, new vulnerabilities emerge and the importance of security becomes increasingly crucial. In this regard, Cho Hong Ki, the Information Security Specialist at the 2bytes, shared valuable knowledge on the significance of game security and the solutions it offers.| '기술이 계속 발전함에 따라 새로운 취약점이 나타나고 있으며, 보안의 중요성이 점점 더 중요해지고 있다. 이와 관련하여 2bytes의 정보 보안 전문가인 홍기는 게임 보안의 중요성과 해결할 수 있는 해결책을 제공하는 2byte의 정보 보안 전문가로서의 가치를 공유하고 있다.'| |4 |They are <i>gifts</i> to my Queen from the goddess Tiamat herself. They reside in the great city of Tu'narath, awaiting the privilege of battle |그들은 </i>gifts</i<> Tiamat의 여신 티아마트의 선물입니다. 그들은 투나랏의 위대함을 기다리고 있습니다. | |5|I showed my masterpiece to the grown-ups, and asked them whether the drawing frightened them. But they answered: 'Frighten? Why should any one be frightened by a hat?' | 저는 어른들에게 제 걸작을 보여드렸습니다. 그들에게 그림이 무섭냐고 물었더니, 그들은 '왜 사람이 무서워하는 것을 쓰겠는가?'라고 대답했습니다.| |6 |New York. CNN. The Federal Aviation Administration has certified for testing a vehicle that a California startup describes as a flying car — the first fully electric vehicle that can both fly and travel on roads to receive US government approval. Alef Automotive said that its vehicle/aircraft, dubbed the “Model A,” is the first flying vehicle that is drivable on public roads and able to park like a normal car. It also has vertical takeoff and landing capabilities. It apparently will be able to carry one or two occupants and will have a road-range of 200 miles and a flying range of 110 miles. The company expects to sell the vehicle for $300,000 each with the first delivery by projected for the end of 2025. The FAA confirmed that it has issued the company a special airworthiness certificate, allowing for limited purposes that include exhibition, research and development. Numerous companies are working on all-electric VTOLs, which stands for vehicle takeoff and landing aircraft. The FAA said that Alef is “not the first aircraft of its kind” to get a special airworthiness certificate. However, Alef noted that its vehicle is different because of its ability to function both on roads and in the air, to appear like a normal car and to park in a normal parking space. | 뉴욕의 연방 항공국은 캘리포니아 스타트업이 비행 자동차로 묘사한 차량을 비행 자동차로 등록하기 위해 테스트하고 있다고 밝힌 첫 번째 전기 자동차를 비행 자동차로 등록하는 것을 허가했다. 이 차량은 미국 정부의 승인을 받을 수 있는 유일한 전기 자동차이다. Automotive사는 이 차량이 비행 자동차로 분류되어 공중에서 비행하고 도로에서 110마일 이상의 비행을 할 수 있다고 밝혔다. 이 회사는 2025년까지 첫 번째 비행 자동차를 판매할 계획이다. 연방 항공국은 이 차량이 상업용으로 사용되고 있으며, 수직 이착륙이 가능하고, 110마일 이상의 비행을 할 수 있는 것을 확인했다. 이 회사는 이 차량이 전시, 연구 개발, 개발을 위해 사용될 수 있다고 밝혔다. 여러 회사가 이 차량을 개발하고 있으며, 이 차량은 도로에서 비행하고 공중에서 비행하는 능력을 가지고 있다. 연방 항공국은 이 차량이 “일반적인 자동차”이며, 도로에서 비행하고 공중에서 110마일 이상의 비행을 할 수있는 능력을 가지고 있다고 밝혔다. 연방 항공국은 이 차량이 일반적인 자동차이며, 도로에서 비행하고 공중에서 비행하는 것이 가능하다고 밝혔다.|
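The card only lists sample translations, so the loading sketch below is heavily hedged: the `english: ... korean:` prompt template is an illustrative assumption and may not match the format used during fine-tuning.

```python
# Hedged sketch for loading the fine-tuned Polyglot-1.3B translator; the prompt
# template below is an assumption and may differ from the training format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hellomyoh/translator-12000-base-polyglot1.3b_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "english: Do you know who I am?\nkorean:"  # assumed template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```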
mitchyAI/haerinlora
mitchyAI
2023-09-01T05:08:30Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-01T05:06:42Z
--- license: creativeml-openrail-m ---
erickdp/gl-falcon-7b
erickdp
2023-09-01T04:51:22Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-01T04:09:47Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
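To make the quantization settings above concrete, here is a hedged loading sketch; the base checkpoint (`tiiuae/falcon-7b`) is inferred from the adapter name and is an assumption, as the card does not name it.

```python
# Hedged sketch: load a base model with the 4-bit settings listed above and
# attach this PEFT adapter. The base model id is an assumption (not in the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "tiiuae/falcon-7b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # older Falcon repos require custom code
)
model = PeftModel.from_pretrained(base, "erickdp/gl-falcon-7b")
```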
Yntec/HassanBlend1512VAE
Yntec
2023-09-01T04:36:48Z
384
2
diffusers
[ "diffusers", "safetensors", "Photorealistic", "General", "Hassan", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-31T17:47:11Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image language: - en tags: - Photorealistic - General - Hassan - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: true --- # Hassan 1.5.1.2 This model with the MoistMixV2 VAE baked in. Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/pQ1SuC6ThWJ7letfmsAZ3.png) concept art of CUTE girl in a pixel, chibi character, DETAILED EYES, key visual, summer day, magazine ad, 1940, iconic, highly detailed, digital painting, artstation, concept art, sharp focus, in harmony with nature, streamlined, hyperrealism by makoto shinkai and akihiko yoshida and wlop Original page: https://civitai.com/models/1173?modelVersionId=4635 (download the Full 6GB file at https://civitai.com/api/download/models/4635?type=Model&format=PickleTensor&size=full&fp=fp16 - the prunned ones are broken and caused all the 1 star reviews)
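Since the VAE is already baked in, the checkpoint should load directly with 🤗 Diffusers. The snippet below is a hedged sketch: the sampler settings and step counts are illustrative choices, not recommendations from the card, and the prompt is a shortened version of the sample prompt above.

```python
# Hedged sketch: text-to-image with the baked-in-VAE checkpoint. Settings below
# are illustrative, not taken from the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/HassanBlend1512VAE", torch_dtype=torch.float16
).to("cuda")

prompt = ("concept art of CUTE girl in a pixel, chibi character, DETAILED EYES, "
          "key visual, summer day, magazine ad, 1940, iconic, highly detailed")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("hassan_sample.png")
```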
nomsgadded/Audio_Classification
nomsgadded
2023-09-01T04:29:31Z
166
0
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-08-25T00:59:03Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - audio-classification - generated_from_trainer datasets: - superb model-index: - name: Audio_Classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Audio_Classification This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 3 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.1.0.dev20230831+cu121 - Datasets 2.14.4 - Tokenizers 0.13.3
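The card does not include an inference example, so a hedged one is sketched below; the audio file path is a placeholder, and which superb sub-task (e.g. keyword spotting) was used is not stated in the card.

```python
# Hedged inference sketch for the fine-tuned wav2vec2 audio classifier.
# "example.wav" is a placeholder path; audio should be resampled to 16 kHz,
# and decoding a file path requires ffmpeg to be installed.
from transformers import pipeline

classifier = pipeline("audio-classification", model="nomsgadded/Audio_Classification")
print(classifier("example.wav"))
```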
Serotina/Pyramid
Serotina
2023-09-01T04:27:54Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-09-01T04:27:48Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Serotina/Pyramid
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Yntec/Dreamscapes_n_Dragonfire_v2
Yntec
2023-09-01T04:25:10Z
3,947
1
diffusers
[ "diffusers", "safetensors", "fantasy", "art", "realistic", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "DarkAgent", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-31T11:46:19Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image language: - en tags: - fantasy - art - realistic - stable-diffusion - stable-diffusion-diffusers - text-to-image - DarkAgent inference: true --- # Dreamscape & Dragonfire 2 This model with MoistMixV2's VAE baked in. Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/4alM1BJW825NzlvsLppoV.png) Victorian pretty cute girl with mushrooms growing in a spheroid forest, 3d render, nightlight study, by jan davidsz de heem and lisa frank, DETAILED CHIBI EYES, art nouveau, 8k, extreme detail, sharp focus, octane render. professional beeple photo of a intricate, elegant, highly detailed digital photo, smooth, sharp focus, 4k Original Page: https://civitai.com/models/50294/dreamscapes-and-dragonfire-new-v20-semi-realism-fantasy-model
tMako/sd-class-butterflies-32
tMako
2023-09-01T04:21:03Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-09-01T04:20:09Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('tMako/sd-class-butterflies-32') image = pipeline().images[0] image ```
Serotina/ppo-SnowballTarget1
Serotina
2023-09-01T03:31:41Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-09-01T03:31:32Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Serotina/ppo-SnowballTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
yjkang49/distilbert-base-uncased-finetuned-emotion
yjkang49
2023-09-01T03:09:41Z
107
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T02:50:49Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9217968795926891 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2270 - Accuracy: 0.9215 - F1: 0.9218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8291 | 1.0 | 250 | 0.3316 | 0.907 | 0.9059 | | 0.2533 | 2.0 | 500 | 0.2270 | 0.9215 | 0.9218 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
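A hedged usage sketch follows; the example sentence is illustrative, and the returned label names depend on whether the label mapping was saved with the checkpoint (the `emotion` dataset defines six classes: sadness, joy, love, anger, fear, surprise).

```python
# Hedged inference sketch for the fine-tuned emotion classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yjkang49/distilbert-base-uncased-finetuned-emotion",
)
# Returns [{'label': ..., 'score': ...}]; labels map to the emotion dataset classes.
print(classifier("I can't wait to see you again!"))
```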
parksuna/distilbert-base-uncased-finetuned-emotion
parksuna
2023-09-01T03:03:45Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-31T07:59:05Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9239151469743487 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2168 - Accuracy: 0.924 - F1: 0.9239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8147 | 1.0 | 250 | 0.3046 | 0.907 | 0.9062 | | 0.2406 | 2.0 | 500 | 0.2168 | 0.924 | 0.9239 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
heeeeeji/distilbert-base-uncased-finetuned-emotion
heeeeeji
2023-09-01T03:03:28Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T02:51:30Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9260814670250714 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.926 - F1: 0.9261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8162 | 1.0 | 250 | 0.3213 | 0.9035 | 0.9015 | | 0.2552 | 2.0 | 500 | 0.2170 | 0.926 | 0.9261 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
cagils/sr_dreambooth_anime
cagils
2023-09-01T03:03:23Z
27
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-01T02:08:20Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image ---
abeiler/huggingface-goatLora-goatV9-testData-morePushes
abeiler
2023-09-01T02:50:48Z
5
0
transformers
[ "transformers", "tensorboard", "llama", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2023-09-01T01:43:43Z
--- tags: - generated_from_trainer model-index: - name: huggingface-goatLora-goatV9-testData-morePushes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # huggingface-goatLora-goatV9-testData-morePushes This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
mijoo/distilbert-base-uncased-finetuned-emotion
mijoo
2023-09-01T02:43:25Z
103
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T02:31:02Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.925963839376488 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2153 - Accuracy: 0.926 - F1: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.83 | 1.0 | 250 | 0.3165 | 0.9095 | 0.9087 | | 0.2508 | 2.0 | 500 | 0.2153 | 0.926 | 0.9260 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
Goo-Bello-Cello/229_testing_20230824.bin
Goo-Bello-Cello
2023-09-01T02:31:42Z
12
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-30T01:29:14Z
These are the converted model weights for Llama-2-7B-chat in Huggingface format. Courtesy of [Mirage-Studio.io](https://mirage-studio.io), home of MirageGPT: the private ChatGPT alternative. --- license: other LLAMA 2 COMMUNITY LICENSE AGREEMENT Llama 2 Version Release Date: July 18, 2023 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and- libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and- libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ---
juarroyom/bloom_prompt_tuning_1693535321.922217
juarroyom
2023-09-01T02:31:00Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-01T02:30:59Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
trieudemo11/llama_7b_attrb_cate_8m_0
trieudemo11
2023-09-01T02:24:48Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-01T02:24:30Z
---
library_name: peft
---
## Training procedure

### Framework versions

- PEFT 0.6.0.dev0
AdanLee/ppo-LunarLander-v2
AdanLee
2023-09-01T02:11:32Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-14T03:29:18Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 287.42 +/- 19.54
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
import gymnasium as gym

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

from huggingface_sb3 import load_from_hub

repo_id = "AdanLee/ppo-LunarLander-v2"  # The repo_id
filename = "ppo-LunarLander-v2.zip"  # The model filename.zip

# When the model was trained on Python 3.8 the pickle protocol is 5
# But Python 3.6, 3.7 use protocol 4
# In order to get compatibility we need to:
# 1. Install pickle5 (we did it at the beginning of the colab)
# 2. Create a custom empty object we pass as parameter to PPO.load()
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "clip_range": lambda _: 0.0,
}

checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
...
```
yulan-team/YuLan-Chat-2-13b-fp16
yulan-team
2023-09-01T01:57:41Z
1,481
15
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-04T04:12:11Z
--- license: mit --- <div align=center> <h1>YuLan-Chat: An Open-Source Bilingual Chatbot</h1> </div> YuLan-Chat models are chat-based large language models, which are developed by the researchers in GSAI, Renmin University of China (YuLan, which represents Yulan Magnolia, is the campus flower of Renmin University of China). The newest version is developed by continually-pretraining and instruction-tuning LLaMA-2 with high-quality English and Chinese data. The model has the following technical characteristics: - Due to continued pre-training on high-quality Chinese-English bilingual data, the language ability of the model has been improved. - To well support Chinese and longer inputs and outputs, we expand the original vocabulary with Chinese words and extend the maximum length of LLaMA-2. It can support 8k context now. - To well activate the bilingual instruction following capacity, we construct high-quality bilingual instructions, and perform multi-stage instruction-tuning. > YuLan-Chat系列模型是中国人民大学高瓴人工智能学院师生共同开发的支持聊天的大语言模型(名字"玉兰"取自中国人民大学校花)。最新版本基于LLaMA-2进行了中英文双语的继续预训练和指令微调。该版模型具有如下技术特点: > - 由于在高质量中英双语数据上进行了继续预训练,模型的语言能力得到提高; > - 为了更好的支持中文和更长的输入输出,对原版LLaMA-2的词表及长度进行了扩充,目前可支持8k上下文; > - 为了让模型更好地服从用户指令,构建了高质量双语指令数据集,并行了多阶段指令微调。 ## Model Zoo Due to the license limitation, for models based on LLaMA, we only provide the weight difference with the original checkpoints; for models based on LLaMA-2, they can be used directly. Please check the [Usage](https://github.com/RUC-GSAI/YuLan-LLM/tree/main#usage) section for more details. **Limitations**: Despite our efforts to reduce potential security issues during the model's usage and encourage the generation of text that aligns with ethical and legal requirements, the language model is based on probabilistic generation, which means it may still produce unexpected outputs. For instance, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We do not assume any responsibility for any consequences resulting from the dissemination of harmful information. > 由于许可证的限制,基于LLaMA的模型我们仅提供与官方模型的差值,基于LLaMA-2的模型可直接使用,具体请参见使用方法章节。 > **局限性**:尽管我们尝试减少模型在使用中可能出现的安全性问题,并鼓励模型生成符合道德和法律要求的文本,但由于语言模型基于概率生成的范式,模型仍然可能会产生意外的输出。 例如,生成的响应可能包含偏见、歧视或其他有害内容。 请不要传播此类内容。 我们对因传播有害信息而造成的任何后果不承担任何责任。 | Model | Backbone | Extended Vocab | Extended Length | Continue PT | SFT | Released Date | | ------------------- | :--------: | :------------: | :-------------: | :---------: | ---- | :-----------: | | [YuLan-Chat-2-13B](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ✅ | 2023.8.2 | | [YuLan-LLaMA-2-13B](https://huggingface.co/yulan-team/YuLan-LLaMA-2-13b) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ❌ | 2023.8.2 | | [YuLan-Chat-1-65B-v2](https://huggingface.co/yulan-team/YuLan-Chat-1-65B-v2-delta) | LLaMA-65B | ✅ 51,190 | ❌ 2,048 | ✅ | ✅ | 2023.8.2 | | [YuLan-Chat-1-13B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-13b-delta) | LLaMA-13B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 | | [YuLan-Chat-1-65B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-65b-delta) | LLaMA-65B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 | ## Evaluation We evaluate our YuLan-Chat model on several Chinese and English benchmarks. The evaluation results are shown as follows. 
> 我们在中英文的一些基准测试上对YuLan-Chat进行了评价,其结果如下。 ### MMLU [MMLU](https://github.com/hendrycks/test) (Massive Multitask Language Understanding) is a benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings. > MMLU是一个评估模型知识量的常用的英文基准测试集。 | Model | STEM | Social Science | Humanities | Others | Avg. | | --------------------------------- | :--: | :------------: | :--------: | :----: | :--: | | YuLan-Chat-1-13B-v1 | 39.6 | 57.8 | 42.6 | 57.6 | 49.4 | | YuLan-Chat-1-65B-v1 | 49.2 | 71.7 | 57.7 | 66.7 | 61.3 | | YuLan-Chat-1-65B-v2 | 46.3 | 67.9 | 56.9 | 63.9 | 58.7 | | LLaMA-2-13B | 44.6 | 64.2 | 53.9 | 62.2 | 56.2 | | FlagAlpha/Llama2-Chinese-13b-Chat | 44.4 | 63.2 | 51.6 | 60.6 | 55.0 | | Linly-AI/Chinese-LLaMA-2-13B-hf | 43.6 | 62.7 | 49.8 | 61.6 | 54.4 | | YuLan-LLaMA-2-13B | 42.9 | 61.5 | 50.4 | 58.6 | 53.4 | | YuLan-Chat-2-13B | 45.3 | 66.7 | 53.8 | 62.8 | 57.2 | ### C-Eval [C-Eval](https://cevalbenchmark.com/) is a comprehensive Chinese evaluation suite for foundation models. > C-Eval是一个针对基石模型综合能力的中文基准测试集。 | Model | STEM | Social Science | Humanities | Others | Avg. | Avg. (Hard) | | --------------------------------- | :--: | :------------: | :--------: | :----: | :--: | :---------: | | YuLan-Chat-1-13B-v1 | 30.2 | 37.4 | 31.9 | 30.7 | 32.0 | 25.7 | | YuLan-Chat-1-65B-v1 | 37.7 | 46.1 | 36.8 | 38.0 | 39.2 | 31.1 | | YuLan-Chat-1-65B-v2 | 39.9 | 55.9 | 47.7 | 43.7 | 45.4 | 31.4 | | LLaMA-2-13B | 36.9 | 43.2 | 37.6 | 36.6 | 38.2 | 32.0 | | FlagAlpha/Llama2-Chinese-13b-Chat | 36.8 | 44.5 | 36.3 | 36.5 | 38.1 | 30.9 | | Linly-AI/Chinese-LLaMA-2-13B-hf | 33.7 | 44.8 | 36.6 | 36.5 | 37 | 27.7 | | YuLan-LLaMA-2-13B | 35.3 | 46.4 | 41.9 | 37.6 | 39.3 | 28.6 | | YuLan-Chat-2-13B | 38.9 | 49.7 | 45.0 | 40.8 | 42.6 | 32.2 | ### AGI-Eval-Gaokao [AGI-Eval](https://github.com/microsoft/AGIEval) is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. We use the sub-branch Chinese-Gaokao for evaluation. > AGI-Eval 是一个以人为中心的基准,专门设计用于评估基础模型在与人类认知和解决问题相关的任务中的一般能力。我们使用其中的"高考"分支进行评测。 | Model | Avg. | Chinese | English | Geography | History | Biology | Chemistry | Physics | Math-QA | Math-Cloze | | --------------------------------- | :--: | :-----: | :-----: | :-------: | :-----: | :-----: | :-------: | :-----: | :-----: | :--------: | | YuLan-Chat-1-13B-v1 | 24.3 | 22.4 | 60.1 | 27.6 | 25.5 | 21.9 | 30.0 | 8.0 | 21.1 | 1.7 | | YuLan-Chat-1-65B-v1 | 29.3 | 25.2 | 79.1 | 37.2 | 36.6 | 28.6 | 24.2 | 11.0 | 21.9 | 0.0 | | YuLan-Chat-1-65B-v2 | 37.9 | 31.4 | 80.4 | 50.8 | 56.6 | 33.3 | 29.0 | 32.0 | 24.4 | 0.8 | | LLaMA-2-13B | 32.7 | 27.2 | 72.2 | 36.2 | 43.0 | 26.2 | 32.4 | 30.0 | 26.2 | 0.9 | | FlagAlpha/Llama2-Chinese-13b-Chat | 31.6 | 26.4 | 70.6 | 35.2 | 38.7 | 28.1 | 28.0 | 29.5 | 25.6 | 2.5 | | Linly-AI/Chinese-LLaMA-2-13B-hf | 31.1 | 22.8 | 74.8 | 42.2 | 37.9 | 24.3 | 28.0 | 23.0 | 26.5 | 0.0 | | YuLan-LLaMA-2-13B | 34.2 | 25.2 | 70.3 | 43.2 | 48.5 | 30.0 | 29.5 | 31.0 | 28.5 | 1.7 | | YuLan-Chat-2-13B | 39.5 | 37.0 | 85.3 | 46.7 | 51.9 | 43.8 | 38.2 | 29.0 | 23.1 | 0.9 | ## Usage ### Import from Huggingface Transformers As our model is trained based on LLaMA, it can be loaded in the same way as original LLaMA. 
> 由于我们的模型是基于LLaMA开发的,可以使用与LLaMA相同的方法加载。

```python
>>> from transformers import LlamaTokenizer, LlamaForCausalLM
>>> tokenizer = LlamaTokenizer.from_pretrained("yulan-team/YuLan-Chat-2-13b")
>>> model = LlamaForCausalLM.from_pretrained("yulan-team/YuLan-Chat-2-13b").cuda()
>>> model = model.eval()
>>> input_text = "hello"
>>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
>>> inputs = tokenizer(prompt, return_tensors='pt', padding="longest", max_length=8192, truncation=True, return_attention_mask=True, add_special_tokens=True)
>>> kwargs = {'temperature': 0.8, 'top_p': 0.95, "top_k": 50, "repetition_penalty": 1.1, "no_repeat_ngram_size": 64, "max_length": 8192, "pad_token_id": tokenizer.bos_token_id, "eos_token_id": tokenizer.eos_token_id}
>>> outputs = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=True, **kwargs)
>>> # batch_decode returns a list of strings; take the first sequence and strip the prompt prefix.
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(prompt):])
Hello! How can I assist you today?
```

## License

YuLan-Chat uses the [MIT License](https://github.com/RUC-GSAI/YuLan-LLM/blob/main/LICENSE). All data and code in this project can only be used for academic purposes.

> 本项目使用MIT许可,所有的数据和代码仅供学术研究使用。

## Contributors

| **Pre-training** | **Fine-tuning** |
|:----------------------------- |:-------------------------------------------------------------------- |
| [Yutao Zhu](https://github.com/DaoD) (Lead), [Kelong Mao](https://github.com/kyriemao), [Wentong Chen](https://github.com/yiye3), [Yiding Sun](https://github.com/Emanual20), [Yihan Wu](https://github.com/wyh2000), [Qian Cao](https://github.com/Aman-4-Real), [Lei Zhang](https://github.com/LLily0703), [Feng Wang](https://github.com/PhealenWang), [Qiangqiang Ren](https://github.com/QiangKing)| [Kun Zhou](https://github.com/Lancelot39) (Lead), [Yushuo Chen](https://github.com/chenyushuo), [Zhipeng Chen](https://github.com/Timothy023), [Lei Wang](https://github.com/Paitesanshi), [Yupeng Hou](https://github.com/hyp1231), [Xincheng Pang](https://github.com/pangxincheng), [Junyi Li](https://github.com/turboLJY), [Yuhan Chen](https://github.com/Fiorina1212), [Shufang Xie](https://github.com/funtion) |

## Reference

Please kindly cite our work if it helps you.

> 如果我们的项目对您有帮助,请引用我们,谢谢!

```BibTeX
@misc{YuLan-Chat,
  author = {YuLan-Team},
  title = {YuLan-Chat: An Open-Source Bilingual Chatbot},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/RUC-GSAI/YuLan-Chat}},
}
```
zzzotop/low-resource-data-quality-classification-demo-esp
zzzotop
2023-09-01T01:46:41Z
107
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-31T23:36:09Z
Demo exploring, amongst other things, the extent to which low-resource languages have poorer-quality data (in terms of both tagging and more general usefulness) than their high-resource counterparts. Inspired by the estimate that the tagging error rate in the corpus used was 10% higher in the LRL than in the HRL (Zotova et al., 2020). Also demonstrated is cross-lingual transfer, akin to my earlier demos.

BETO (dccuchile/bert-base-spanish-wwm-cased) fine-tuned for text classification on the Spanish portion of the Catalonia Independence Corpus (CIC) for 10 epochs, and then on the Catalan portion for 10 more, using the same number of training steps in each stage. The intermediate model is on my profile. All Catalan text entered will be classified as either in favour of, against, or neutral towards Catalan independence. Significant preprocessing of the dataset was involved, including removal of the validation set and reassignment of its data to the train and test sets. Learning rate 2e-5, batch size 4, weight decay 0.1.

<b>Subject to many of the same shortcomings as its Catalan-only counterpart, but it seems to perform much better qualitatively overall. These results might indicate that the Catalan data is in fact of poorer quality, to the point that cross-lingual transfer from more useful Spanish data is a superior option, but this is impossible to say for certain as the experimental setup is very informal. It may well be the case, for example, that Catalan-language examples skew more towards 'FAVOR' than Spanish examples, and as such fine-tuning on both could be greatly beneficial for the task. Unlike in demo-cat, "la independencia catalana" is a big 'AGAINST' trigger whereas "la independència de Catalunya" is a big 'FAVOR' trigger.</b>

Evaluated every epoch using F1 score with macro averaging:<br>
5 epochs: 0.765449<br>
10 epochs: 0.778278<br>
15 (5) epochs: 0.727466<br>
20 (10) epochs (final): 0.723115
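A minimal inference sketch with the `transformers` text-classification pipeline (the example sentence is only illustrative; the labels emitted may be generic `LABEL_0`/`LABEL_1`/`LABEL_2` rather than 'AGAINST'/'FAVOR'/'NEUTRAL', depending on how the config was saved):

```python
from transformers import pipeline

# Load the fine-tuned BETO stance classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="zzzotop/low-resource-data-quality-classification-demo-esp",
)

# The mapping from LABEL_i to AGAINST/FAVOR/NEUTRAL depends on the saved config (assumption).
print(classifier("La independència de Catalunya és l'únic camí."))
```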
zzzotop/low-resource-data-quality-classification-demo-cat
zzzotop
2023-09-01T01:43:26Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-31T21:12:01Z
Demo exploring, amongst other things, the extent to which low-resource languages have poorer-quality data (in terms of both tagging and more general usefulness) than their high-resource counterparts. Inspired by the estimate that the tagging error rate in the corpus used was 10% higher in the LRL than in the HRL (Zotova et al., 2020). Also demonstrated is cross-lingual transfer, akin to my earlier demos.

BETO (dccuchile/bert-base-spanish-wwm-cased) fine-tuned for text classification on the Catalan portion of the Catalonia Independence Corpus (CIC) for 5 epochs. All Catalan text entered will be classified as either in favour of, against, or neutral towards Catalan independence. Significant preprocessing of the dataset was involved, including removal of the validation set and reassignment of its data to the train and test sets. Learning rate 2e-5, batch size 4, weight decay 0.1.

<b>Works best with long inputs; it seems to associate topics about change and modernity with 'FAVOR' and those about history with 'AGAINST'. It generally skews towards 'AGAINST' and is probably overfitted.</b>

Evaluated every epoch using F1 score with macro averaging:<br>
5 epochs: 0.716673<br>
10 epochs: 0.719966<br>
20 epochs (final): 0.740322
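To make the qualitative comparison with the cross-lingual demo concrete, a sketch along these lines could run both checkpoints on the same input (label names may again appear as generic `LABEL_i` indices):

```python
from transformers import pipeline

text = "La independència de Catalunya és l'únic camí per al futur del país."

# Compare the Catalan-only demo (this model) with the Spanish→Catalan transfer demo.
for repo_id in (
    "zzzotop/low-resource-data-quality-classification-demo-cat",
    "zzzotop/low-resource-data-quality-classification-demo-esp",
):
    classifier = pipeline("text-classification", model=repo_id)
    print(repo_id, classifier(text))
```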
menoua/a2c-PandaReachDense-v2
menoua
2023-09-01T01:41:43Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-02-19T00:50:38Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.39 +/- 0.39 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
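As the usage section above is still a TODO, the following is a minimal, non-authoritative sketch of loading the checkpoint with `huggingface_sb3`; the `filename` follows the usual `{algo}-{env}.zip` convention and is an assumption, as is a `panda-gym`/`stable-baselines3` installation compatible with how the agent was trained (any VecNormalize statistics, if used, would also need to be loaded).

```python
import panda_gym  # noqa: F401 -- registers PandaReachDense-v2 (assumed installed)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env

# Download the checkpoint; the filename is an assumption based on the usual naming convention.
checkpoint = load_from_hub(
    repo_id="menoua/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# A VecEnv keeps the classic (obs, reward, done, info) step API.
env = make_vec_env("PandaReachDense-v2", n_envs=1)
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```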
czl/SpaceInvadersNoFrameskip-v4
czl
2023-09-01T01:33:17Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-01T01:26:12Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 1164.00 +/- 293.45 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga czl -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga czl -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga czl ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 12000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
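Beyond the RL Zoo commands above, the downloaded checkpoint can also be loaded directly through the SB3 Python API. The sketch below recreates the Atari preprocessing listed in the hyperparameters; the path to the zip inside `logs/` is an assumption and should be adjusted to wherever `rl_zoo3.load_from_hub` saved it.

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Same preprocessing as training: AtariWrapper (via make_atari_env) + 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# Path is an assumption; adjust to the actual location of the downloaded zip.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```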
thanhduycao/wav2vec2-base-demo-aug
thanhduycao
2023-09-01T01:17:57Z
162
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-01T01:17:39Z
---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-aug
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-demo-aug

This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5138
- eval_wer: 0.2151
- eval_runtime: 50.9731
- eval_samples_per_second: 14.674
- eval_steps_per_second: 1.844
- epoch: 21.85
- step: 9200

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30

### Framework versions

- Transformers 4.33.0.dev0
- Pytorch 2.0.0
- Datasets 2.14.4.dev0
- Tokenizers 0.13.3
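A minimal inference sketch with the `transformers` ASR pipeline (the audio path is a placeholder; 16 kHz mono input is assumed, as is typical for wav2vec2 checkpoints, and `ffmpeg` is needed for decoding):

```python
from transformers import pipeline

# Load the fine-tuned Vietnamese ASR checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="thanhduycao/wav2vec2-base-demo-aug",
)

# "sample.wav" is a placeholder path to a local recording.
print(asr("sample.wav")["text"])
```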
yaystevek/poca-SoccerTwos
yaystevek
2023-09-01T01:14:39Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-09-01T01:14:27Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on learning to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yaystevek/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
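If you would rather fetch the trained `.onnx` policy locally (for example, to drop it into a Unity project) than use the browser viewer, a minimal sketch with `huggingface_hub` could look like this; the exact file layout inside the repository is not guaranteed.

```python
from huggingface_hub import snapshot_download

# Download the whole repository (ONNX policy plus TensorBoard logs) into the local cache.
local_path = snapshot_download(repo_id="yaystevek/poca-SoccerTwos")
print("Repository downloaded to:", local_path)
```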
eitoi/food_classifier
eitoi
2023-09-01T01:08:22Z
64
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-08-30T01:31:34Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_keras_callback model-index: - name: eitoi/food_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # eitoi/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3938 - Validation Loss: 0.3457 - Train Accuracy: 0.92 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.8191 | 1.6466 | 0.832 | 0 | | 1.2361 | 0.8349 | 0.889 | 1 | | 0.7265 | 0.5148 | 0.913 | 2 | | 0.5151 | 0.3855 | 0.923 | 3 | | 0.3938 | 0.3457 | 0.92 | 4 | ### Framework versions - Transformers 4.32.1 - TensorFlow 2.12.0 - Datasets 2.14.4 - Tokenizers 0.13.3
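A minimal inference sketch with the image-classification pipeline (this repository stores TensorFlow weights, so TensorFlow must be installed; the image path is a placeholder):

```python
from transformers import pipeline

# framework="tf" makes the pipeline use the TensorFlow weights stored in this repo.
classifier = pipeline(
    "image-classification",
    model="eitoi/food_classifier",
    framework="tf",
)

# "pizza.jpg" is a placeholder path to a local food photo.
for prediction in classifier("pizza.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```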
Plumbear/distilhubert-finetuned-gtzan
Plumbear
2023-09-01T01:04:56Z
167
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-08-30T19:49:21Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.86 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5698 - Accuracy: 0.86 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5773 | 1.0 | 75 | 0.7146 | 0.84 | | 0.4322 | 2.0 | 150 | 0.6362 | 0.82 | | 0.445 | 3.0 | 225 | 0.5768 | 0.88 | | 0.2764 | 4.0 | 300 | 0.5698 | 0.86 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
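A minimal inference sketch with the audio-classification pipeline (the clip path is a placeholder; GTZAN-style music clips of roughly 30 seconds are assumed):

```python
from transformers import pipeline

# Load the fine-tuned music-genre classifier.
genre_classifier = pipeline(
    "audio-classification",
    model="Plumbear/distilhubert-finetuned-gtzan",
)

# "song_clip.wav" is a placeholder path; the top genres are returned with scores.
for prediction in genre_classifier("song_clip.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```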
jaober/ppo-LunarLander-v2
jaober
2023-09-01T00:47:10Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-01T00:46:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.42 +/- 20.73 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
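As the usage section above is still a TODO, here is one possible way to load and evaluate the agent; the checkpoint `filename` follows the usual `{algo}-{env}.zip` convention and is an assumption, and `LunarLander-v2` additionally requires the Box2D extras.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint; the filename is an assumption based on the usual naming convention.
checkpoint = load_from_hub(
    repo_id="jaober/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate over a handful of episodes and report the mean return.
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```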
abeiler/huggingface-goatLora-goatV10-fullData
abeiler
2023-09-01T00:46:12Z
0
0
null
[ "pytorch", "tensorboard", "generated_from_trainer", "region:us" ]
null
2023-08-30T03:33:56Z
--- tags: - generated_from_trainer model-index: - name: huggingface-goatLora-goatV10-fullData results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # huggingface-goatLora-goatV10-fullData This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
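The card does not show how to load the weights. Given the "Lora" in the repository name, the repo presumably holds PEFT adapter weights trained on top of `meta-llama/Llama-2-7b-hf`; if that assumption holds, loading could look roughly like this (the base model is gated and requires an accepted license):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires accepted license / auth token
adapter_id = "abeiler/huggingface-goatLora-goatV10-fullData"  # assumed to contain a LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the LoRA adapter on top of the base model (assumes PEFT-format weights).
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```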
cmvgia/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
cmvgia
2023-09-01T00:41:54Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-01T00:14:55Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
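The card lists only the PEFT version, so here is a hedged loading sketch; the base model is inferred from the repository name (`bigscience/bloomz-560m`) and is an assumption, as is the prompt format, which depends on how the adapter was trained.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloomz-560m"  # inferred from the repo name (assumption)
adapter_id = "cmvgia/bloomz-560m_PROMPT_TUNING_CAUSAL_LM"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the prompt-tuning adapter; the learned virtual tokens are prepended automatically.
model = PeftModel.from_pretrained(base_model, adapter_id)

# The expected prompt format depends on the (undocumented) training data.
inputs = tokenizer("Classify the sentiment: I love this product!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```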
cagils/sr_dreambooth_mug
cagils
2023-09-01T00:22:00Z
23
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-01T00:00:15Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image ---
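The card contains only front matter, so the following is a hedged generation sketch with `diffusers`; the instance token in the prompt ("sks mug") is a guess at the common DreamBooth convention, not something documented in this repository.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cagils/sr_dreambooth_mug",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# "sks mug" follows the usual DreamBooth rare-token convention; the actual
# instance prompt used for training is not documented here (assumption).
image = pipe("a photo of sks mug on a wooden desk, studio lighting").images[0]
image.save("mug.png")
```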
jalaluddin94/baseline_nli_xlmr_zero_shot
jalaluddin94
2023-09-01T00:17:35Z
161
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T00:15:43Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy - precision - recall model-index: - name: baseline_nli_xlmr_zero_shot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # baseline_nli_xlmr_zero_shot This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4063 - Accuracy: 0.4452 - Precision: 0.4452 - Recall: 0.4452 - F1 Score: 0.4102 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 12 - eval_batch_size: 12 - seed: 101 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:| | 1.0598 | 1.0 | 861 | 1.0376 | 0.4356 | 0.4356 | 0.4356 | 0.4093 | | 0.8459 | 2.0 | 1722 | 1.2189 | 0.4342 | 0.4342 | 0.4342 | 0.3792 | | 0.7501 | 3.0 | 2583 | 1.3530 | 0.4224 | 0.4224 | 0.4224 | 0.3779 | | 0.7097 | 4.0 | 3444 | 1.3412 | 0.4315 | 0.4315 | 0.4315 | 0.3887 | | 0.6706 | 5.0 | 4305 | 1.3792 | 0.4497 | 0.4497 | 0.4497 | 0.4187 | | 0.6534 | 6.0 | 5166 | 1.4063 | 0.4452 | 0.4452 | 0.4452 | 0.4102 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
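No inference example is given above. For an NLI model the premise and hypothesis are normally passed as a sentence pair; a sketch might look like this (the mapping from class indices to entailment/neutral/contradiction is not documented in the card, so check `model.config.id2label`):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "jalaluddin94/baseline_nli_xlmr_zero_shot"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

premise = "A man is reading a newspaper in the park."
hypothesis = "Someone is reading outdoors."

# Encode the premise/hypothesis pair and inspect the class probabilities.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# The label names may just be LABEL_0/1/2; the true mapping is in model.config.id2label.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```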