pipeline_tag
stringclasses 48
values | library_name
stringclasses 198
values | text
stringlengths 1
900k
| metadata
stringlengths 2
438k
| id
stringlengths 5
122
| last_modified
null | tags
sequencelengths 1
1.84k
| sha
null | created_at
stringlengths 25
25
| arxiv
sequencelengths 0
201
| languages
sequencelengths 0
1.83k
| tags_str
stringlengths 17
9.34k
| text_str
stringlengths 0
389k
| text_lists
sequencelengths 0
722
| processed_texts
sequencelengths 1
723
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distilgpt2-stable-diffusion-v2 - bnb 8bits
- Model creator: https://huggingface.co/FredZhang7/
- Original model: https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2/
Original model description:
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- prompt-generator
- arxiv:2210.14140
widget:
- text: "amazing"
- text: "a photo of"
- text: "a sci-fi"
- text: "a portrait of"
- text: "a person standing"
- text: "a boy watching"
datasets:
- FredZhang7/stable-diffusion-prompts-2.47M
- poloclub/diffusiondb
- Gustavosta/Stable-Diffusion-Prompts
- bartman081523/stable-diffusion-discord-prompts
---
# Fast GPT2 PromptGen
<style>
.container {
padding-left: 20px;
border-left: 5px solid gray;
}
</style>
<div class="container">
<p><strong><a href="https://huggingface.co/FredZhang7/anime-anything-promptgen-v2">Fast Anime PromptGen</a></strong> generates descriptive safebooru and danbooru tags for anime text-to-image models.</p>
</div>
This model was trained on 2,470,000 descriptive stable diffusion prompts on the [FredZhang7/distilgpt2-stable-diffusion](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion) checkpoint for another 4,270,000 steps.
Compared to other prompt generation models using GPT2, this one runs with 50% faster forward propagation and 40% less disk space & RAM.
Major improvements from v1 are:
- 25% more variations
- faster and more fluent prompt generation
- cleaned training data (a rough sketch of this kind of filtering appears after this list)
  * removed prompts that generate images with nsfw scores > 0.5
  * removed duplicates, including prompts that differ only by capitalization or punctuation
  * removed punctuation at random places
  * removed prompts shorter than 15 characters
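A rough sketch of this kind of normalization-based de-duplication and length filtering (an illustration only, not the actual preprocessing script used for training) could look like:
```python
import re

def clean_prompts(prompts, min_length=15):
    """Drop short prompts and near-duplicates that differ only by case or punctuation."""
    seen = set()
    kept = []
    for prompt in prompts:
        prompt = prompt.strip()
        if len(prompt) < min_length:
            continue
        # normalize capitalization and punctuation so near-duplicates map to the same key
        key = re.sub(r"[^\w\s]", "", prompt.lower())
        if key not in seen:
            seen.add(key)
            kept.append(prompt)
    return kept
```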
## Live WebUI Demo
See the Prompt Generator tab of [Paint Journey Demo](https://huggingface.co/spaces/FredZhang7/paint-journey-demo).
## Contrastive Search
```bash
pip install --upgrade transformers
```
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = GPT2LMHeadModel.from_pretrained('FredZhang7/distilgpt2-stable-diffusion-v2')
prompt = r'a cat sitting' # the beginning of the prompt
temperature = 0.9 # a higher temperature will produce more diverse results, but with a higher risk of less coherent text
top_k = 8 # the number of tokens to sample from at each step
max_length = 80 # the maximum number of tokens for the output of the model
repetition_penalty = 1.2 # the penalty value for each repetition of a token
num_return_sequences=5 # the number of results to generate
# generate the result with contrastive search
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output = model.generate(input_ids, do_sample=True, temperature=temperature, top_k=top_k, max_length=max_length, num_return_sequences=num_return_sequences, repetition_penalty=repetition_penalty, penalty_alpha=0.6, no_repeat_ngram_size=1, early_stopping=True)
print('\nInput:\n' + 100 * '-')
print('\033[96m' + prompt + '\033[0m')
print('\nOutput:\n' + 100 * '-')
for i in range(len(output)):
print('\033[92m' + tokenizer.decode(output[i], skip_special_tokens=True) + '\033[0m\n')
```
No comma style:

To bring back the commas, generate the output without `penalty_alpha` and `no_repeat_ngram_size`:
```python
output = model.generate(input_ids, do_sample=True, temperature=temperature, top_k=top_k, max_length=max_length, num_return_sequences=num_return_sequences, repetition_penalty=repetition_penalty, early_stopping=True)
```

| {} | RichardErkhov/FredZhang7_-_distilgpt2-stable-diffusion-v2-8bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-17T08:05:00+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
distilgpt2-stable-diffusion-v2 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- prompt-generator
- arxiv:2210.14140
widget:
- text: "amazing"
- text: "a photo of"
- text: "a sci-fi"
- text: "a portrait of"
- text: "a person standing"
- text: "a boy watching"
datasets:
- FredZhang7/stable-diffusion-prompts-2.47M
- poloclub/diffusiondb
- Gustavosta/Stable-Diffusion-Prompts
- bartman081523/stable-diffusion-discord-prompts
---
# Fast GPT2 PromptGen
<style>
.container {
padding-left: 20px;
border-left: 5px solid gray;
}
</style>
<div class="container">
<p><strong><a href="URL Anime PromptGen</a></strong> generates descriptive safebooru and danbooru tags for anime text-to-image models.</p>
</div>
This model was trained on 2,470,000 descriptive stable diffusion prompts on the FredZhang7/distilgpt2-stable-diffusion checkpoint for another 4,270,000 steps.
Compared to other prompt generation models using GPT2, this one runs with 50% faster forward propagation and 40% less disk space & RAM.
Major improvements from v1 are:
- 25% more variations
- faster and more fluent prompt generation
- cleaned training data
* removed prompts that generate images with nsfw scores > 0.5
  * removed duplicates, including prompts that differ only by capitalization or punctuation
  * removed punctuation at random places
* removed prompts shorter than 15 characters
## Live WebUI Demo
See the Prompt Generator tab of Paint Journey Demo.
## Contrastive Search
No comma style:
!constrastive search
To bring back the commas, assign output without 'penalty_alpha' and 'no_repeat_ngram_size':
!constrastive search
| [
"# Fast GPT2 PromptGen\n\n<style>\n.container {\n padding-left: 20px;\n border-left: 5px solid gray;\n}\n</style>\n\n<div class=\"container\">\n <p><strong><a href=\"URL Anime PromptGen</a></strong> generates descriptive safebooru and danbooru tags for anime text-to-image models.</p>\n</div>\n\n\nThis model was trained on 2,470,000 descriptive stable diffusion prompts on the FredZhang7/distilgpt2-stable-diffusion checkpoint for another 4,270,000 steps.\n\nCompared to other prompt generation models using GPT2, this one runs with 50% faster forwardpropagation and 40% less disk space & RAM.\n\nMajor improvements from v1 are:\n- 25% more variations\n- faster and more fluent prompt generation\n- cleaned training data\n * removed prompts that generate images with nsfw scores > 0.5\n * removed duplicates, including prompts that differ by capitalization and punctuations\n * removed punctuations at random places\n * removed prompts shorter than 15 characters",
"## Live WebUI Demo\nSee the Prompt Generator tab of Paint Journey Demo.",
"## Contrastive Search\n\n\n\n\n\nNo comma style:\n!constrastive search\n\n\nTo bring back the commas, assign output without 'penalty_alpha' and 'no_repeat_ngram_size':\n\n\n!constrastive search"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Fast GPT2 PromptGen\n\n<style>\n.container {\n padding-left: 20px;\n border-left: 5px solid gray;\n}\n</style>\n\n<div class=\"container\">\n <p><strong><a href=\"URL Anime PromptGen</a></strong> generates descriptive safebooru and danbooru tags for anime text-to-image models.</p>\n</div>\n\n\nThis model was trained on 2,470,000 descriptive stable diffusion prompts on the FredZhang7/distilgpt2-stable-diffusion checkpoint for another 4,270,000 steps.\n\nCompared to other prompt generation models using GPT2, this one runs with 50% faster forwardpropagation and 40% less disk space & RAM.\n\nMajor improvements from v1 are:\n- 25% more variations\n- faster and more fluent prompt generation\n- cleaned training data\n * removed prompts that generate images with nsfw scores > 0.5\n * removed duplicates, including prompts that differ by capitalization and punctuations\n * removed punctuations at random places\n * removed prompts shorter than 15 characters",
"## Live WebUI Demo\nSee the Prompt Generator tab of Paint Journey Demo.",
"## Contrastive Search\n\n\n\n\n\nNo comma style:\n!constrastive search\n\n\nTo bring back the commas, assign output without 'penalty_alpha' and 'no_repeat_ngram_size':\n\n\n!constrastive search"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
anime-anything-promptgen-v2 - bnb 8bits
- Model creator: https://huggingface.co/FredZhang7/
- Original model: https://huggingface.co/FredZhang7/anime-anything-promptgen-v2/
Original model description:
---
license: creativeml-openrail-m
language:
- en
widget:
- text: 1girl, fate
- text: 1boy, league of
- text: 1girl, genshin
- text: 1boy, national basketball association
- text: 1girl, spy x
- text: 1girl, absurdres
tags:
- stable-diffusion
- anime
- anything-v4
- art
- arxiv:2210.14140
datasets:
- FredZhang7/anime-prompts-180K
---
## Fast Anime PromptGen
This model was trained on a dataset of **80,000** safe anime prompts for 3 epochs. I fetched the prompts from the [Safebooru API endpoint](https://safebooru.donmai.us/posts/random.json), but only accepted unique prompts with **up_score ≥ 8** and without any [blacklisted tags](./blacklist.txt).
I didn't release the V1 model because it often generated gibberish prompts. After trying everything I could to correct that behavior, I eventually figured out that the gibberish prompts were caused not by the pipeline parameters, model structure, or training duration, but by the random usernames in the training data.
Here's the complete [prompt preprocessing algorithm](./preprocess.py).
## Text-to-image Examples
Prefix *1girl* | [Generated *1girl* prompts](./anime_girl_settings.txt) | Model *Anything V4*

Prefix *1boy* | [Generated *1boy* prompts](./anime_boy_settings.txt) | Model *Anything V4*

## Contrastive Search
```bash
pip install --upgrade transformers
```
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel, pipeline
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = GPT2LMHeadModel.from_pretrained('FredZhang7/anime-anything-promptgen-v2')
prompt = r'1girl, genshin'
# generate text using fine-tuned model
nlp = pipeline('text-generation', model=model, tokenizer=tokenizer)
# generate 10 samples using contrastive search
outs = nlp(prompt, max_length=76, num_return_sequences=10, do_sample=True, repetition_penalty=1.2, temperature=0.7, top_k=4, early_stopping=True)
print('\nInput:\n' + 100 * '-')
print('\033[96m' + prompt + '\033[0m')
print('\nOutput:\n' + 100 * '-')
for i in range(len(outs)):
# remove trailing commas and double spaces
    outs[i] = str(outs[i]['generated_text']).replace('  ', '').rstrip(',')
print('\033[92m' + '\n\n'.join(outs) + '\033[0m\n')
```
Output Example:

Please see [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2) for more info on the pipeline parameters.
## Awesome Tips
- If you feel like a generated anime character doesn't show emotions, try emoticons like `;o`, `:o`, `;p`, `:d`, `:p`, and `;d` in the prompt.
I also use `happy smirk`, `happy smile`, `laughing closed eyes`, etc. to make the characters more lively and expressive.
- Adding `absurdres`, instead of `highres` and `masterpiece`, to a prompt can drastically increase the sharpness and resolution of a generated image.
## Danbooru
[Link to the Danbooru version](https://huggingface.co/FredZhang7/danbooru-tag-generator)
| {} | RichardErkhov/FredZhang7_-_anime-anything-promptgen-v2-8bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-17T08:05:15+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
anime-anything-promptgen-v2 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: creativeml-openrail-m
language:
- en
widget:
- text: 1girl, fate
- text: 1boy, league of
- text: 1girl, genshin
- text: 1boy, national basketball association
- text: 1girl, spy x
- text: 1girl, absurdres
tags:
- stable-diffusion
- anime
- anything-v4
- art
- arxiv:2210.14140
datasets:
- FredZhang7/anime-prompts-180K
---
## Fast Anime PromptGen
This model was trained on a dataset of 80,000 safe anime prompts for 3 epochs. I fetched the prompts from the Safebooru API endpoint, but only accepted unique prompts with up_score ≥ 8 and without any blacklisted tags.
I didn't release the V1 model because it often generated gibberish prompts. After trying everything I could to correct that behavior, I eventually figured out that the gibberish prompts were caused not by the pipeline parameters, model structure, or training duration, but by the random usernames in the training data.
Here's the complete prompt preprocessing algorithm.
## Text-to-image Examples
Prefix *1girl* | Generated *1girl* prompts | Model *Anything V4*

Prefix *1boy* | Generated *1boy* prompts | Model *Anything V4*

## Contrastive Search
Output Example:

Please see Fast GPT PromptGen for more info on the pipeline parameters.
## Awesome Tips
- If you feel like a generated anime character doesn't show emotions, try emoticons like ';o', ':o', ';p', ':d', ':p', and ';d' in the prompt.
I also use 'happy smirk', 'happy smile', 'laughing closed eyes', etc. to make the characters more lively and expressive.
- Adding 'absurdres', instead of 'highres' and 'masterpiece', to a prompt can drastically increase the sharpness and resolution of a generated image.
## Danbooru
Link to the Danbooru version
| [
"## Fast Anime PromptGen\n\nThis model was trained on a dataset of 80,000 safe anime prompts for 3 epochs. I fetched the prompts from the Safebooru API endpoint, but only accepted unique prompts with up_score ≥ 8 and without any blacklisted tags. \nI didn't release the V1 model because it often generated gibberish prompts. After trying all means to correct that behavior, I eventually figured that the cause of the gibberish prompts is not from the pipeline params, model structure or training duration, but rather from the random usernames in the training data. \nHere's the complete prompt preprocessing algorithm.",
"## Text-to-image Examples\n\nPrefix *1girl* | Generated *1girl* prompts | Model *Anything V4*\n\n\n\nPrefix *1boy* | Generated *1boy* prompts | Model *Anything V4*\n\n",
"## Contrastive Search\n\n\n\nOutput Example:\n\n\n\nPlease see Fast GPT PromptGen for more info on the pipeline parameters.",
"## Awesome Tips\n\n- If you feel like a generated anime character doesn't show emotions, try emoticons like ';o', ':o', ';p', ':d', ':p', and ';d' in the prompt.\nI also use 'happy smirk', 'happy smile', 'laughing closed eyes', etc. to make the characters more lively and expressive.\n\n- Adding 'absurdres', instead of 'highres' and 'masterpiece', to a prompt can drastically increase the sharpness and resolution of a generated image.",
"## Danbooru\nLink to the Danbooru version"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"## Fast Anime PromptGen\n\nThis model was trained on a dataset of 80,000 safe anime prompts for 3 epochs. I fetched the prompts from the Safebooru API endpoint, but only accepted unique prompts with up_score ≥ 8 and without any blacklisted tags. \nI didn't release the V1 model because it often generated gibberish prompts. After trying all means to correct that behavior, I eventually figured that the cause of the gibberish prompts is not from the pipeline params, model structure or training duration, but rather from the random usernames in the training data. \nHere's the complete prompt preprocessing algorithm.",
"## Text-to-image Examples\n\nPrefix *1girl* | Generated *1girl* prompts | Model *Anything V4*\n\n\n\nPrefix *1boy* | Generated *1boy* prompts | Model *Anything V4*\n\n",
"## Contrastive Search\n\n\n\nOutput Example:\n\n\n\nPlease see Fast GPT PromptGen for more info on the pipeline parameters.",
"## Awesome Tips\n\n- If you feel like a generated anime character doesn't show emotions, try emoticons like ';o', ':o', ';p', ':d', ':p', and ';d' in the prompt.\nI also use 'happy smirk', 'happy smile', 'laughing closed eyes', etc. to make the characters more lively and expressive.\n\n- Adding 'absurdres', instead of 'highres' and 'masterpiece', to a prompt can drastically increase the sharpness and resolution of a generated image.",
"## Danbooru\nLink to the Danbooru version"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hebrew-gpt_neo-small - bnb 8bits
- Model creator: https://huggingface.co/Norod78/
- Original model: https://huggingface.co/Norod78/hebrew-gpt_neo-small/
Original model description:
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
license: mit
---
# hebrew-gpt_neo-small
Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
## Datasets
1. An assortment of various Hebrew corpora - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew)
Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 corpora of monolingual data processed from the January-December 2018 Commoncrawl snapshots in the CC-Net repository. The size of this corpus is 6.1G of Hebrew text.
## Training Config
Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-small/configs) <BR>
## Usage
### Google Colab Notebook
Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-small/Norod78_hebrew_gpt_neo_small_Colab.ipynb) <BR>
#### Simple usage sample code
```python
!pip install tokenizers==0.10.2 transformers==4.6.0
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-small")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-small", pad_token_id=tokenizer.eos_token_id)
prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count() if torch.cuda.is_available() else 0
print(f"device: {device}, n_gpu: {n_gpu}")
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
model.to(device)
encoded_prompt = tokenizer.encode(
prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
print("input_ids = " + str(input_ids))
if input_ids is not None:
max_len += len(encoded_prompt[0])
if max_len > 2048:
max_len = 2048
print("Updated max_len = " + str(max_len))
stop_token = "<|endoftext|>"
new_lines = "\n\n\n"
sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.95,
num_return_sequences=sample_output_num
)
print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
text = tokenizer.decode(sample_output, skip_special_tokens=True)
# Remove all text after the stop token
text = text[: text.find(stop_token) if stop_token else None]
# Remove all text after 3 newlines
text = text[: text.find(new_lines) if new_lines else None]
print("\n{}: {}".format(i, text))
print("\n" + 100 * '-')
```
| {} | RichardErkhov/Norod78_-_hebrew-gpt_neo-small-8bits | null | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2024-04-17T08:05:36+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
hebrew-gpt_neo-small - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
language: he
thumbnail: URL
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
license: mit
---
# hebrew-gpt_neo-small
Hebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8, which was made available to me via the TPU Research Cloud Program.
## Datasets
1. An assortment of various Hebrew corpora - I have made it available here
2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
3. CC100-Hebrew Dataset Homepage
Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 corpora of monolingual data processed from the January-December 2018 Commoncrawl snapshots in the CC-Net repository. The size of this corpus is 6.1G of Hebrew text.
## Training Config
Available here <BR>
## Usage
### Google Colab Notebook
Available here <BR>
#### Simple usage sample code
| [
"# hebrew-gpt_neo-small\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.",
"## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.",
"## Training Config\n\nAvailable here <BR>",
"## Usage",
"### Google Colab Notebook\n\nAvailable here <BR>",
"#### Simple usage sample code"
] | [
"TAGS\n#transformers #safetensors #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #8-bit #region-us \n",
"# hebrew-gpt_neo-small\n\nHebrew text generation model based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program.",
"## Datasets\n\n1. An assortment of various Hebrew corpuses - I have made it available here\n\n\n2. oscar / unshuffled_deduplicated_he - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n\n3. CC100-Hebrew Dataset Homepage \n\nCreated by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language.",
"## Training Config\n\nAvailable here <BR>",
"## Usage",
"### Google Colab Notebook\n\nAvailable here <BR>",
"#### Simple usage sample code"
] |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('meric-y/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | meric-y/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-04-17T08:06:01+00:00 | [] | [] | TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute .
## Usage
| [
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] | [
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is not defined in this snippet; a minimal sketch of such a helper follows this block.
model = load_from_hub(repo_id="derk06/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
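`load_from_hub` itself is not part of this card; a minimal sketch of what such a helper typically does (download the pickled model dictionary from the Hub and unpickle it) is shown below.
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Hypothetical helper: fetch the pickled dict (Q-table, env_id, hyperparameters) and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```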
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | derk06/taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-17T08:07:33+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
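Since the card does not document usage yet, the sketch below is only a generic starting point: the model id is taken from this repository, and everything else is standard `transformers` text-generation usage rather than the author's documented recipe.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mjyoo2/kullm_arc_ft"  # taken from the repository name; usage itself is assumed, not documented
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```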
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mjyoo2/kullm_arc_ft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-17T08:09:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
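Since the card does not document usage yet, the sketch below is only a generic starting point: the model id is taken from this repository, and the labels the classifier returns are not documented here.
```python
from transformers import pipeline

# Hypothetical quick start; the label names and meanings are not documented in this card.
classifier = pipeline("text-classification", model="adediu25/subtle-toxicgenconprompt-all-no-lora")
print(classifier("Example sentence to classify."))
```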
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | adediu25/subtle-toxicgenconprompt-all-no-lora | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:10:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# CodeQwen1.5-7B
GPTQ quantized version of CodeQwen1.5-7B model.
---
## Introduction
CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of code data.
* Strong code generation capabilities and competitive performance across a series of benchmarks;
* Supporting long context understanding and generation with a context length of 64K tokens;
* Supporting 92 coding languages;
* Excellent performance in text-to-SQL, bug fixing, etc.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/codeqwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
CodeQwen1.5 is based on Qwen1.5, a language model series that includes decoder language models of different sizes. It is trained on 3 trillion tokens of code data, and it includes group query attention (GQA) for efficient inference.
## Requirements
The code of Qwen1.5 has been included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'.
```
## Usage
For the base language model, we do not advise you to use it for chat. You can use it for finetuning, and you can also use it for code infilling, code generation, etc., but please be careful about your stopping criteria.
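As a rough illustration (not part of the original card), a plain generation call might look like the sketch below; it assumes the GPTQ checkpoint loads through the usual `transformers` + AutoGPTQ/Optimum path, and you should still mind your stopping criteria.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TechxGenus/CodeQwen1.5-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # assumes AutoGPTQ/Optimum is installed

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```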
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
| {"language": ["en"], "license": "other", "tags": ["pretrained"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation"} | TechxGenus/CodeQwen1.5-7B-GPTQ | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-17T08:12:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #pretrained #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# CodeQwen1.5-7B
GPTQ quantized version of CodeQwen1.5-7B model.
---
## Introduction
CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of code data.
* Strong code generation capabilities and competitive performance across a series of benchmarks;
* Supporting long context understanding and generation with a context length of 64K tokens;
* Supporting 92 coding languages;
* Excellent performance in text-to-SQL, bug fixing, etc.
For more details, please refer to our blog post and GitHub repo.
## Model Details
CodeQwen1.5 is based on Qwen1.5, a language model series that includes decoder language models of different sizes. It is trained on 3 trillion tokens of code data, and it includes group query attention (GQA) for efficient inference.
## Requirements
The code of Qwen1.5 has been included in the latest Hugging Face transformers, and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:
## Usage
For the base language model, we do not advise you to use it for chat. You can use it for finetuning, and you can also use it for code infilling, code generation, etc., but please be careful about your stopping criteria.
If you find our work helpful, feel free to give us a cite.
| [
"# CodeQwen1.5-7B\n\nGPTQ quantized version of CodeQwen1.5-7B model.\n\n---",
"## Introduction\n\nCodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. \n\n* Strong code generation capabilities and competitve performance across a series of benchmarks;\n* Supporting long context understanding and generation with the context length of 64K tokens;\n* Supporting 92 coding languages\n* Excellent performance in text-to-SQL, bug fix, etc.\n\n\nFor more details, please refer to our blog post and GitHub repo.",
"## Model Details\nCodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of data of codes, and it includes group query attention (GQA) for efficient inference.",
"## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:",
"## Usage\n\nFor the base language model, we do not advise you to use it for chat. You can use it for finetuning, and you can also use it for code infilling, code generation, etc., but please be careful about your stopping criteria.\n\n\nIf you find our work helpful, feel free to give us a cite."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #pretrained #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# CodeQwen1.5-7B\n\nGPTQ quantized version of CodeQwen1.5-7B model.\n\n---",
"## Introduction\n\nCodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. \n\n* Strong code generation capabilities and competitve performance across a series of benchmarks;\n* Supporting long context understanding and generation with the context length of 64K tokens;\n* Supporting 92 coding languages\n* Excellent performance in text-to-SQL, bug fix, etc.\n\n\nFor more details, please refer to our blog post and GitHub repo.",
"## Model Details\nCodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of data of codes, and it includes group query attention (GQA) for efficient inference.",
"## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:",
"## Usage\n\nFor the base language model, we do not advise you to use it for chat. You can use it for finetuning, and you can also use it for code infilling, code generation, etc., but please be careful about your stopping criteria.\n\n\nIf you find our work helpful, feel free to give us a cite."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_4096_512_15M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5085
- F1 Score: 0.8106
- Accuracy: 0.8110
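For illustration only (not part of the original card): since this repository is a PEFT/LoRA adapter rather than a full model, inference requires loading the base model first and then attaching the adapter. The task head, label count, and the need for `trust_remote_code` below are assumptions, not facts from the card.

```python
# Hedged sketch of attaching the adapter to its base model for inference.
# num_labels=2 and trust_remote_code=True are assumptions about the base checkpoint.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_15M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
logits = model(**inputs).logits
```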
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3865 | 100.0 | 200 | 0.5615 | 0.7894 | 0.7896 |
| 0.131 | 200.0 | 400 | 0.9581 | 0.7774 | 0.7774 |
| 0.0671 | 300.0 | 600 | 1.0908 | 0.7652 | 0.7652 |
| 0.0403 | 400.0 | 800 | 1.2292 | 0.7591 | 0.7591 |
| 0.0282 | 500.0 | 1000 | 1.3727 | 0.7561 | 0.7561 |
| 0.0205 | 600.0 | 1200 | 1.4093 | 0.7622 | 0.7622 |
| 0.0172 | 700.0 | 1400 | 1.4180 | 0.7743 | 0.7744 |
| 0.0144 | 800.0 | 1600 | 1.4865 | 0.7622 | 0.7622 |
| 0.0124 | 900.0 | 1800 | 1.4706 | 0.7740 | 0.7744 |
| 0.0104 | 1000.0 | 2000 | 1.6393 | 0.7651 | 0.7652 |
| 0.0098 | 1100.0 | 2200 | 1.6888 | 0.7713 | 0.7713 |
| 0.0083 | 1200.0 | 2400 | 1.5877 | 0.7774 | 0.7774 |
| 0.0072 | 1300.0 | 2600 | 1.7804 | 0.7622 | 0.7622 |
| 0.0077 | 1400.0 | 2800 | 1.6126 | 0.7651 | 0.7652 |
| 0.0071 | 1500.0 | 3000 | 1.6439 | 0.7560 | 0.7561 |
| 0.0063 | 1600.0 | 3200 | 1.6351 | 0.7713 | 0.7713 |
| 0.0055 | 1700.0 | 3400 | 1.8101 | 0.7770 | 0.7774 |
| 0.0054 | 1800.0 | 3600 | 1.6218 | 0.7591 | 0.7591 |
| 0.005 | 1900.0 | 3800 | 1.9408 | 0.7622 | 0.7622 |
| 0.0049 | 2000.0 | 4000 | 1.8393 | 0.7591 | 0.7591 |
| 0.0045 | 2100.0 | 4200 | 1.9814 | 0.7622 | 0.7622 |
| 0.0041 | 2200.0 | 4400 | 1.9375 | 0.7805 | 0.7805 |
| 0.0043 | 2300.0 | 4600 | 1.8234 | 0.7773 | 0.7774 |
| 0.0037 | 2400.0 | 4800 | 1.8858 | 0.7713 | 0.7713 |
| 0.0038 | 2500.0 | 5000 | 1.8331 | 0.7713 | 0.7713 |
| 0.0037 | 2600.0 | 5200 | 1.9857 | 0.7711 | 0.7713 |
| 0.0035 | 2700.0 | 5400 | 1.9290 | 0.7621 | 0.7622 |
| 0.0035 | 2800.0 | 5600 | 1.9851 | 0.7530 | 0.7530 |
| 0.003 | 2900.0 | 5800 | 2.1489 | 0.7622 | 0.7622 |
| 0.003 | 3000.0 | 6000 | 2.0196 | 0.7652 | 0.7652 |
| 0.0029 | 3100.0 | 6200 | 1.9347 | 0.7530 | 0.7530 |
| 0.0028 | 3200.0 | 6400 | 1.8976 | 0.7652 | 0.7652 |
| 0.0025 | 3300.0 | 6600 | 2.0749 | 0.7591 | 0.7591 |
| 0.0025 | 3400.0 | 6800 | 2.0812 | 0.7591 | 0.7591 |
| 0.0025 | 3500.0 | 7000 | 2.0283 | 0.7744 | 0.7744 |
| 0.0024 | 3600.0 | 7200 | 2.0711 | 0.7713 | 0.7713 |
| 0.0025 | 3700.0 | 7400 | 2.1429 | 0.7713 | 0.7713 |
| 0.0021 | 3800.0 | 7600 | 2.1335 | 0.7744 | 0.7744 |
| 0.0023 | 3900.0 | 7800 | 1.9989 | 0.7682 | 0.7683 |
| 0.0021 | 4000.0 | 8000 | 2.1383 | 0.7774 | 0.7774 |
| 0.0018 | 4100.0 | 8200 | 2.1260 | 0.7744 | 0.7744 |
| 0.002 | 4200.0 | 8400 | 2.0975 | 0.7683 | 0.7683 |
| 0.0019 | 4300.0 | 8600 | 2.0614 | 0.7744 | 0.7744 |
| 0.0017 | 4400.0 | 8800 | 2.1881 | 0.7682 | 0.7683 |
| 0.0019 | 4500.0 | 9000 | 2.0712 | 0.7805 | 0.7805 |
| 0.0017 | 4600.0 | 9200 | 2.1428 | 0.7713 | 0.7713 |
| 0.0017 | 4700.0 | 9400 | 2.0867 | 0.7805 | 0.7805 |
| 0.0016 | 4800.0 | 9600 | 2.1474 | 0.7774 | 0.7774 |
| 0.0014 | 4900.0 | 9800 | 2.2229 | 0.7774 | 0.7774 |
| 0.0016 | 5000.0 | 10000 | 2.2047 | 0.7774 | 0.7774 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_2-seqsight_4096_512_15M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_15M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-04-17T08:13:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_2-seqsight\_4096\_512\_15M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5085
* F1 Score: 0.8106
* Accuracy: 0.8110
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | TYZY89/Llama-2-7b-sft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:13:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-16-layer](https://huggingface.co/Citaman/command-r-16-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-16-layer
layer_range: [0, 15]
- model: Citaman/command-r-16-layer
layer_range: [1, 16]
merge_method: slerp
base_model: Citaman/command-r-16-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
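To reproduce a merge from a configuration like the one above, mergekit exposes a command-line entry point that takes the YAML file and an output directory; the file and directory names below are placeholders, not paths from this repository.

```bash
# Hypothetical invocation; adjust paths and flags to your environment.
mergekit-yaml config.yaml ./merged-model --cuda
```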
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-16-layer"]} | Citaman/command-r-15-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-16-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:15:06+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-16-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-16-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-16-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-16-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-16-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_shp3_400
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1493
- Rewards/chosen: -4.1445
- Rewards/rejected: -6.0670
- Rewards/accuracies: 0.5400
- Rewards/margins: 1.9226
- Logps/rejected: -259.9296
- Logps/chosen: -239.0303
- Logits/rejected: -0.7892
- Logits/chosen: -0.7615
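As a quick sanity check on the numbers above (not something stated in the original card), the reported reward margin is simply the chosen reward minus the rejected reward:

```python
rewards_chosen = -4.1445
rewards_rejected = -6.0670
print(rewards_chosen - rewards_rejected)  # 1.9225, matching the reported margin of 1.9226 up to rounding
```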
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0216 | 4.0 | 100 | 2.2837 | -3.0069 | -3.4434 | 0.5400 | 0.4365 | -257.0145 | -237.7664 | -0.7901 | -0.8063 |
| 0.1476 | 8.0 | 200 | 3.5175 | -3.9480 | -5.1618 | 0.5300 | 1.2138 | -258.9238 | -238.8121 | -0.8303 | -0.8463 |
| 0.0108 | 12.0 | 300 | 3.2066 | -1.3603 | -2.7278 | 0.5600 | 1.3674 | -256.2194 | -235.9369 | -0.7824 | -0.7443 |
| 0.0 | 16.0 | 400 | 3.1558 | -4.1573 | -6.0643 | 0.5300 | 1.9070 | -259.9266 | -239.0446 | -0.7891 | -0.7612 |
| 0.0 | 20.0 | 500 | 3.1564 | -4.1409 | -6.0450 | 0.5400 | 1.9041 | -259.9052 | -239.0264 | -0.7894 | -0.7613 |
| 0.0 | 24.0 | 600 | 3.1533 | -4.1925 | -6.0561 | 0.5300 | 1.8636 | -259.9174 | -239.0837 | -0.7890 | -0.7614 |
| 0.0 | 28.0 | 700 | 3.1650 | -4.1547 | -6.0212 | 0.5300 | 1.8665 | -259.8788 | -239.0417 | -0.7892 | -0.7614 |
| 0.0 | 32.0 | 800 | 3.1593 | -4.1704 | -6.0572 | 0.5400 | 1.8868 | -259.9187 | -239.0591 | -0.7891 | -0.7619 |
| 0.0 | 36.0 | 900 | 3.1711 | -4.1626 | -6.0504 | 0.5400 | 1.8879 | -259.9112 | -239.0504 | -0.7892 | -0.7614 |
| 0.0 | 40.0 | 1000 | 3.1493 | -4.1445 | -6.0670 | 0.5400 | 1.9226 | -259.9296 | -239.0303 | -0.7892 | -0.7615 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp3_400", "results": []}]} | guoyu-zhang/model_hh_shp3_400 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-04-17T08:15:50+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
| model\_hh\_shp3\_400
====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1493
* Rewards/chosen: -4.1445
* Rewards/rejected: -6.0670
* Rewards/accuracies: 0.5400
* Rewards/margins: 1.9226
* Logps/rejected: -259.9296
* Logps/chosen: -239.0303
* Logits/rejected: -0.7892
* Logits/chosen: -0.7615
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_15M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6713
- F1 Score: 0.7226
- Accuracy: 0.7229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.9298 | 11.11 | 200 | 0.8057 | 0.6157 | 0.6392 |
| 0.7782 | 22.22 | 400 | 0.7417 | 0.6599 | 0.6659 |
| 0.7207 | 33.33 | 600 | 0.7198 | 0.6840 | 0.6859 |
| 0.6911 | 44.44 | 800 | 0.7016 | 0.6924 | 0.6940 |
| 0.6694 | 55.56 | 1000 | 0.6823 | 0.7000 | 0.7021 |
| 0.6505 | 66.67 | 1200 | 0.6711 | 0.7035 | 0.7067 |
| 0.6391 | 77.78 | 1400 | 0.6669 | 0.7056 | 0.7076 |
| 0.6274 | 88.89 | 1600 | 0.6617 | 0.7151 | 0.7161 |
| 0.6162 | 100.0 | 1800 | 0.6630 | 0.7105 | 0.7109 |
| 0.6092 | 111.11 | 2000 | 0.6555 | 0.7151 | 0.7172 |
| 0.6001 | 122.22 | 2200 | 0.6666 | 0.7072 | 0.7052 |
| 0.5938 | 133.33 | 2400 | 0.6505 | 0.7194 | 0.7218 |
| 0.586 | 144.44 | 2600 | 0.6498 | 0.7196 | 0.7201 |
| 0.5784 | 155.56 | 2800 | 0.6534 | 0.7149 | 0.7150 |
| 0.5729 | 166.67 | 3000 | 0.6484 | 0.7156 | 0.7150 |
| 0.5673 | 177.78 | 3200 | 0.6525 | 0.7174 | 0.7166 |
| 0.5613 | 188.89 | 3400 | 0.6510 | 0.7219 | 0.7221 |
| 0.5558 | 200.0 | 3600 | 0.6437 | 0.7262 | 0.7273 |
| 0.5508 | 211.11 | 3800 | 0.6507 | 0.7237 | 0.7245 |
| 0.5456 | 222.22 | 4000 | 0.6456 | 0.7254 | 0.7249 |
| 0.5389 | 233.33 | 4200 | 0.6526 | 0.7202 | 0.7192 |
| 0.5332 | 244.44 | 4400 | 0.6447 | 0.7265 | 0.7275 |
| 0.5283 | 255.56 | 4600 | 0.6496 | 0.7263 | 0.7262 |
| 0.5228 | 266.67 | 4800 | 0.6497 | 0.7291 | 0.7291 |
| 0.5169 | 277.78 | 5000 | 0.6586 | 0.7263 | 0.7262 |
| 0.5133 | 288.89 | 5200 | 0.6568 | 0.7275 | 0.7271 |
| 0.5078 | 300.0 | 5400 | 0.6624 | 0.7264 | 0.7258 |
| 0.5037 | 311.11 | 5600 | 0.6571 | 0.7194 | 0.7183 |
| 0.4992 | 322.22 | 5800 | 0.6524 | 0.7235 | 0.7231 |
| 0.4947 | 333.33 | 6000 | 0.6636 | 0.7176 | 0.7164 |
| 0.4897 | 344.44 | 6200 | 0.6637 | 0.7292 | 0.7302 |
| 0.4865 | 355.56 | 6400 | 0.6568 | 0.7259 | 0.7262 |
| 0.4816 | 366.67 | 6600 | 0.6631 | 0.7216 | 0.7216 |
| 0.4778 | 377.78 | 6800 | 0.6660 | 0.7207 | 0.7201 |
| 0.475 | 388.89 | 7000 | 0.6535 | 0.7226 | 0.7231 |
| 0.4731 | 400.0 | 7200 | 0.6589 | 0.7248 | 0.7247 |
| 0.4692 | 411.11 | 7400 | 0.6655 | 0.7267 | 0.7267 |
| 0.4665 | 422.22 | 7600 | 0.6664 | 0.7271 | 0.7269 |
| 0.4643 | 433.33 | 7800 | 0.6663 | 0.7244 | 0.7245 |
| 0.4603 | 444.44 | 8000 | 0.6683 | 0.7227 | 0.7225 |
| 0.4591 | 455.56 | 8200 | 0.6667 | 0.7254 | 0.7249 |
| 0.4565 | 466.67 | 8400 | 0.6691 | 0.7250 | 0.7251 |
| 0.4542 | 477.78 | 8600 | 0.6695 | 0.7263 | 0.7258 |
| 0.4544 | 488.89 | 8800 | 0.6701 | 0.7292 | 0.7293 |
| 0.4526 | 500.0 | 9000 | 0.6669 | 0.7248 | 0.7247 |
| 0.4497 | 511.11 | 9200 | 0.6693 | 0.7260 | 0.7258 |
| 0.4498 | 522.22 | 9400 | 0.6689 | 0.7276 | 0.7278 |
| 0.4479 | 533.33 | 9600 | 0.6708 | 0.7259 | 0.7258 |
| 0.4467 | 544.44 | 9800 | 0.6708 | 0.7269 | 0.7267 |
| 0.4463 | 555.56 | 10000 | 0.6711 | 0.7265 | 0.7262 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_15M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_15M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-04-17T08:16:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_splice\_reconstructed-seqsight\_4096\_512\_15M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6713
* F1 Score: 0.7226
* Accuracy: 0.7229
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('msonali/diffusion-tutorial')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | msonali/diffusion-tutorial | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-04-17T08:16:48+00:00 | [] | [] | TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute .
## Usage
| [
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] | [
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
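The quantization settings listed above correspond to a standard 4-bit NF4 setup; a hedged transformers equivalent (not taken from the actual training script) would be:

```python
# Hedged reconstruction of the quantization config above; not the original training code.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)
```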
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "results", "results": []}]} | deepakmzn/results | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T08:17:20+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
|
# results
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n",
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | dzungPaduahsgs/Mistral7Bcleaning_merged | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-17T08:19:14+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | guyhadad01/Roberta-large-imdb-lora-1000 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:19:42+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | 0x0mom/sl7 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:20:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
| {"title": "Llama2 Finetuned Deploy", "emoji": "\ud83d\udc20", "colorFrom": "green", "colorTo": "red", "sdk": "docker", "pinned": false} | Kunalpal216/llama2-finetuned-road-accident | null | [
"safetensors",
"region:us"
] | null | 2024-04-17T08:21:21+00:00 | [] | [] | TAGS
#safetensors #region-us
|
Check out the configuration reference at URL
| [] | [
"TAGS\n#safetensors #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral-8x7B-v0.1-Instruct-sft-en-de
A full finetune of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using a mix of English and German instruction data.
## Dataset
|source|#examples|
|---|---|
| teknium/OpenHermes-2.5 | 1001551 |
| maxidl/OpenOrca-gpt4-de | 119559 |
| maxidl/MathInstruct-de | 56793 |
| maxidl/Capybara-de | 15991 |
| maxidl/math-prm-800k-de | 12298 |
| maxidl/wikihow-de | 10103 |
| maxidl/no_robots-de | 9500 |
| maxidl/lima-de | 1030 |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
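
The card does not include a usage snippet. As a minimal sketch (assuming the SFT stage attached a chat template to the tokenizer, and that enough GPU memory is available for an 8x7B mixture-of-experts model), inference with 🤗 Transformers could look roughly like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maxidl/Mixtral-8x7B-v0.1-Instruct-sft-en-de"

# An 8x7B MoE needs substantial GPU memory; device_map="auto" (via accelerate)
# shards the weights across the available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# English and German prompts are both in scope for this finetune.
messages = [{"role": "user", "content": "Give a short explanation of mixture-of-experts models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```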
| {"tags": ["alignment-handbook", "generated_from_trainer"], "datasets": ["maxidl/instruct-en-de"], "base_model": "mistralai/Mixtral-8x7B-v0.1", "model-index": [{"name": "Mixtral-8x7B-v0.1-Instruct-sft-en-de", "results": []}]} | maxidl/Mixtral-8x7B-v0.1-Instruct-sft-en-de | null | [
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:maxidl/instruct-en-de",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:21:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mixtral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/instruct-en-de #base_model-mistralai/Mixtral-8x7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Mixtral-8x7B-v0.1-Instruct-sft-en-de
====================================
A full finetune of mistralai/Mixtral-8x7B-v0.1 using a mix of English and German instruction data.
Dataset
-------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 32
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 256
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 50
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mixtral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/instruct-en-de #base_model-mistralai/Mixtral-8x7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# ECE-TW3-JRGL-VHF9
ECE-TW3-JRGL-VHF9 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
## 🧩 Configuration | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "MTSAIR/MultiVerse_70B", "davidkim205/Rhea-72b-v0.5"]} | IAFrance/ECE-TW3-JRGL-VHF9 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:23:01+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ECE-TW3-JRGL-VHF9
ECE-TW3-JRGL-VHF9 is a merge of the following models using mergekit:
* MTSAIR/MultiVerse_70B
* davidkim205/Rhea-72b-v0.5
## Configuration | [
"# ECE-TW3-JRGL-VHF9\n\nECE-TW3-JRGL-VHF9 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ECE-TW3-JRGL-VHF9\n\nECE-TW3-JRGL-VHF9 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] |
reinforcement-learning | null |
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'hlabedade/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
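
Several of these hyperparameters (`gae`, `gamma`, `gae_lambda`) control Generalized Advantage Estimation in the PPO update. As a self-contained illustration of that computation — a simplified NumPy sketch, not the training code of this agent, and assuming a bootstrap value of 0 at the end of the rollout:

```python
import numpy as np

def compute_gae(rewards, values, dones, gamma=0.99, gae_lambda=0.95):
    """Generalized Advantage Estimation over one rollout (simplified sketch)."""
    T = len(rewards)
    advantages = np.zeros(T)
    last_gae = 0.0
    for t in reversed(range(T)):
        next_value = values[t + 1] if t + 1 < T else 0.0  # bootstrap value assumed to be 0
        next_nonterminal = 1.0 - dones[t]                  # no bootstrapping across episode ends
        delta = rewards[t] + gamma * next_value * next_nonterminal - values[t]
        last_gae = delta + gamma * gae_lambda * next_nonterminal * last_gae
        advantages[t] = last_gae
    returns = advantages + np.asarray(values, dtype=float)
    return advantages, returns

# Toy 4-step rollout
adv, ret = compute_gae(
    rewards=[1.0, 0.0, 0.5, 1.0],
    values=[0.8, 0.6, 0.7, 0.9],
    dones=[0.0, 0.0, 0.0, 1.0],
)
print(adv, ret)
```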
| {"tags": ["LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-180.44 +/- 108.17", "name": "mean_reward", "verified": false}]}]}]} | hlabedade/ppo-CartPole-v1 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | null | 2024-04-17T08:23:43+00:00 | [] | [] | TAGS
#tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us
|
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
| [
"# PPO Agent Playing LunarLander-v2\n\n This is a trained model of a PPO agent playing LunarLander-v2.\n \n # Hyperparameters"
] | [
"TAGS\n#tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us \n",
"# PPO Agent Playing LunarLander-v2\n\n This is a trained model of a PPO agent playing LunarLander-v2.\n \n # Hyperparameters"
] |
null | null | This repository is a comprehensive collection of ControlNet models, gathering the models listed above so they are easy to download when setting up a new workstation. Thanks to the authors of the models. | {"license": "apache-2.0"} | jxand/ControlNet15andXL | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T08:24:14+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| This repository is a comprehensive collection of ControlNet models, gathering the models listed above so they are easy to download when setting up a new workstation. Thanks to the authors of the models. | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_15M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5125
- F1 Score: 0.7476
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
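
As a rough illustration of how these values map onto 🤗 `TrainingArguments` (a sketch only — the actual training script, PEFT adapter setup and data pipeline used for this run are not reproduced here):

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; assumes a single device,
# so the per-device batch size equals the reported batch size of 2048.
training_args = TrainingArguments(
    output_dir="GUE_tf_0-seqsight_4096_512_15M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    max_steps=10_000,            # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9, 0.999), eps=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```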
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6119 | 12.5 | 200 | 0.5542 | 0.7103 | 0.712 |
| 0.5462 | 25.0 | 400 | 0.5447 | 0.7297 | 0.73 |
| 0.5287 | 37.5 | 600 | 0.5474 | 0.7229 | 0.723 |
| 0.5111 | 50.0 | 800 | 0.5411 | 0.7267 | 0.728 |
| 0.4938 | 62.5 | 1000 | 0.5440 | 0.7251 | 0.726 |
| 0.4812 | 75.0 | 1200 | 0.5522 | 0.7280 | 0.728 |
| 0.4705 | 87.5 | 1400 | 0.5474 | 0.7368 | 0.737 |
| 0.4616 | 100.0 | 1600 | 0.5508 | 0.7416 | 0.742 |
| 0.4535 | 112.5 | 1800 | 0.5677 | 0.7289 | 0.729 |
| 0.4458 | 125.0 | 2000 | 0.5590 | 0.7258 | 0.726 |
| 0.4387 | 137.5 | 2200 | 0.5634 | 0.7287 | 0.729 |
| 0.432 | 150.0 | 2400 | 0.5866 | 0.7240 | 0.724 |
| 0.425 | 162.5 | 2600 | 0.5730 | 0.7249 | 0.725 |
| 0.4187 | 175.0 | 2800 | 0.5880 | 0.7290 | 0.729 |
| 0.4129 | 187.5 | 3000 | 0.5847 | 0.7235 | 0.724 |
| 0.4075 | 200.0 | 3200 | 0.5859 | 0.7271 | 0.727 |
| 0.402 | 212.5 | 3400 | 0.5931 | 0.7206 | 0.721 |
| 0.3961 | 225.0 | 3600 | 0.6113 | 0.7158 | 0.716 |
| 0.3891 | 237.5 | 3800 | 0.6306 | 0.7231 | 0.723 |
| 0.3847 | 250.0 | 4000 | 0.6166 | 0.7251 | 0.725 |
| 0.3785 | 262.5 | 4200 | 0.6403 | 0.7290 | 0.729 |
| 0.3746 | 275.0 | 4400 | 0.6450 | 0.7251 | 0.725 |
| 0.3674 | 287.5 | 4600 | 0.6494 | 0.7299 | 0.73 |
| 0.3645 | 300.0 | 4800 | 0.6569 | 0.7218 | 0.722 |
| 0.358 | 312.5 | 5000 | 0.6791 | 0.7210 | 0.721 |
| 0.3539 | 325.0 | 5200 | 0.6570 | 0.7196 | 0.72 |
| 0.3501 | 337.5 | 5400 | 0.6671 | 0.7060 | 0.706 |
| 0.3458 | 350.0 | 5600 | 0.6736 | 0.7238 | 0.724 |
| 0.3404 | 362.5 | 5800 | 0.6828 | 0.7221 | 0.723 |
| 0.3356 | 375.0 | 6000 | 0.6927 | 0.7227 | 0.723 |
| 0.3326 | 387.5 | 6200 | 0.6954 | 0.7040 | 0.704 |
| 0.3292 | 400.0 | 6400 | 0.7012 | 0.7198 | 0.72 |
| 0.3255 | 412.5 | 6600 | 0.6925 | 0.7166 | 0.717 |
| 0.3207 | 425.0 | 6800 | 0.7081 | 0.7098 | 0.71 |
| 0.3185 | 437.5 | 7000 | 0.7204 | 0.7187 | 0.719 |
| 0.3144 | 450.0 | 7200 | 0.7085 | 0.7253 | 0.726 |
| 0.3127 | 462.5 | 7400 | 0.7189 | 0.7130 | 0.713 |
| 0.3092 | 475.0 | 7600 | 0.7160 | 0.7178 | 0.718 |
| 0.3056 | 487.5 | 7800 | 0.7196 | 0.7060 | 0.706 |
| 0.3041 | 500.0 | 8000 | 0.7291 | 0.7158 | 0.716 |
| 0.3011 | 512.5 | 8200 | 0.7191 | 0.7080 | 0.708 |
| 0.2977 | 525.0 | 8400 | 0.7405 | 0.7118 | 0.712 |
| 0.2973 | 537.5 | 8600 | 0.7468 | 0.7059 | 0.706 |
| 0.2965 | 550.0 | 8800 | 0.7432 | 0.708 | 0.708 |
| 0.2942 | 562.5 | 9000 | 0.7453 | 0.7109 | 0.711 |
| 0.293 | 575.0 | 9200 | 0.7525 | 0.7010 | 0.701 |
| 0.2918 | 587.5 | 9400 | 0.7557 | 0.7059 | 0.706 |
| 0.2912 | 600.0 | 9600 | 0.7483 | 0.7049 | 0.705 |
| 0.2911 | 612.5 | 9800 | 0.7523 | 0.7089 | 0.709 |
| 0.2904 | 625.0 | 10000 | 0.7519 | 0.7079 | 0.708 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_15M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_15M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-04-17T08:24:24+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_0-seqsight\_4096\_512\_15M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5125
* F1 Score: 0.7476
* Accuracy: 0.75
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
- bnb_4bit_quant_storage: uint8
- load_in_4bit: True
- load_in_8bit: False
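
For reference, a quantization config with these values is typically constructed with 🤗 Transformers' `BitsAndBytesConfig`. The sketch below mirrors the settings listed above; the base model name is inferred from the adapter repository name and is an assumption, not something stated in this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above (illustrative, not the original script).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # assumed base model, inferred from the adapter name
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the PEFT adapter from this repository on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, "BojanaBas/Mistral-7B-Instruct-v0.2-pqa-10")
```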
### Framework versions
- PEFT 0.5.0
| {"library_name": "peft"} | BojanaBas/Mistral-7B-Instruct-v0.2-pqa-10 | null | [
"peft",
"region:us"
] | null | 2024-04-17T08:24:47+00:00 | [] | [] | TAGS
#peft #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
- bnb_4bit_quant_storage: uint8
- load_in_4bit: True
- load_in_8bit: False
### Framework versions
- PEFT 0.5.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: bfloat16\n- bnb_4bit_quant_storage: uint8\n- load_in_4bit: True\n- load_in_8bit: False",
"### Framework versions\n\n\n- PEFT 0.5.0"
] | [
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: bfloat16\n- bnb_4bit_quant_storage: uint8\n- load_in_4bit: True\n- load_in_8bit: False",
"### Framework versions\n\n\n- PEFT 0.5.0"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | deepakmzn/tinyllama_2k_5e_merged_with_base_model | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:25:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | IbrahimTarek/mistrial_fine-tuned | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-17T08:25:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #tensorboard #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_15M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4655
- F1 Score: 0.7825
- Accuracy: 0.783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
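
For reference, the listed settings map onto a standard `TrainingArguments` object. The sketch below is an assumption: the actual training script is not included in this card, the output directory is hypothetical, and any option not listed above is left at its transformers default (the reported Adam betas and epsilon are those defaults).

```python
# Hedged sketch only: reproduces the hyperparameters listed above.
# output_dir is hypothetical; unlisted options keep their transformers defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_tf_1-seqsight_4096_512_15M-L32_all",  # hypothetical
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```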
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6189 | 13.33 | 200 | 0.5979 | 0.6811 | 0.682 |
| 0.5569 | 26.67 | 400 | 0.6004 | 0.6864 | 0.687 |
| 0.5373 | 40.0 | 600 | 0.5923 | 0.7011 | 0.702 |
| 0.5157 | 53.33 | 800 | 0.5889 | 0.7051 | 0.707 |
| 0.5009 | 66.67 | 1000 | 0.5949 | 0.7149 | 0.715 |
| 0.4891 | 80.0 | 1200 | 0.5888 | 0.7086 | 0.709 |
| 0.4796 | 93.33 | 1400 | 0.5915 | 0.7076 | 0.708 |
| 0.4718 | 106.67 | 1600 | 0.5852 | 0.7077 | 0.708 |
| 0.4646 | 120.0 | 1800 | 0.6004 | 0.7026 | 0.703 |
| 0.4571 | 133.33 | 2000 | 0.6009 | 0.7009 | 0.701 |
| 0.4511 | 146.67 | 2200 | 0.6132 | 0.6980 | 0.698 |
| 0.4448 | 160.0 | 2400 | 0.6201 | 0.6958 | 0.696 |
| 0.4375 | 173.33 | 2600 | 0.6230 | 0.6988 | 0.699 |
| 0.4311 | 186.67 | 2800 | 0.6341 | 0.7030 | 0.703 |
| 0.4245 | 200.0 | 3000 | 0.6532 | 0.6989 | 0.699 |
| 0.4187 | 213.33 | 3200 | 0.6397 | 0.6940 | 0.694 |
| 0.4112 | 226.67 | 3400 | 0.6556 | 0.6830 | 0.683 |
| 0.405 | 240.0 | 3600 | 0.6686 | 0.6969 | 0.697 |
| 0.4007 | 253.33 | 3800 | 0.6562 | 0.7004 | 0.701 |
| 0.3942 | 266.67 | 4000 | 0.6657 | 0.7028 | 0.703 |
| 0.3888 | 280.0 | 4200 | 0.6764 | 0.7069 | 0.707 |
| 0.3839 | 293.33 | 4400 | 0.6725 | 0.7009 | 0.701 |
| 0.3774 | 306.67 | 4600 | 0.6738 | 0.6940 | 0.694 |
| 0.3715 | 320.0 | 4800 | 0.6791 | 0.6950 | 0.695 |
| 0.3671 | 333.33 | 5000 | 0.6803 | 0.7070 | 0.707 |
| 0.362 | 346.67 | 5200 | 0.6784 | 0.6939 | 0.694 |
| 0.3565 | 360.0 | 5400 | 0.7009 | 0.6949 | 0.695 |
| 0.3522 | 373.33 | 5600 | 0.7034 | 0.6935 | 0.694 |
| 0.3475 | 386.67 | 5800 | 0.7351 | 0.7000 | 0.7 |
| 0.3426 | 400.0 | 6000 | 0.7246 | 0.7000 | 0.7 |
| 0.3389 | 413.33 | 6200 | 0.7148 | 0.7007 | 0.701 |
| 0.335 | 426.67 | 6400 | 0.7211 | 0.7047 | 0.705 |
| 0.3305 | 440.0 | 6600 | 0.7277 | 0.6920 | 0.692 |
| 0.3281 | 453.33 | 6800 | 0.7524 | 0.6900 | 0.69 |
| 0.3237 | 466.67 | 7000 | 0.7293 | 0.6979 | 0.698 |
| 0.3205 | 480.0 | 7200 | 0.7415 | 0.6980 | 0.698 |
| 0.3167 | 493.33 | 7400 | 0.7479 | 0.6990 | 0.699 |
| 0.3136 | 506.67 | 7600 | 0.7470 | 0.6960 | 0.696 |
| 0.3097 | 520.0 | 7800 | 0.7434 | 0.6940 | 0.694 |
| 0.3085 | 533.33 | 8000 | 0.7463 | 0.6970 | 0.697 |
| 0.3057 | 546.67 | 8200 | 0.7546 | 0.7000 | 0.7 |
| 0.3038 | 560.0 | 8400 | 0.7464 | 0.6920 | 0.692 |
| 0.3025 | 573.33 | 8600 | 0.7441 | 0.6940 | 0.694 |
| 0.2996 | 586.67 | 8800 | 0.7602 | 0.6959 | 0.696 |
| 0.2981 | 600.0 | 9000 | 0.7555 | 0.6969 | 0.697 |
| 0.2967 | 613.33 | 9200 | 0.7627 | 0.6959 | 0.696 |
| 0.2954 | 626.67 | 9400 | 0.7603 | 0.6970 | 0.697 |
| 0.295 | 640.0 | 9600 | 0.7624 | 0.6960 | 0.696 |
| 0.2929 | 653.33 | 9800 | 0.7658 | 0.6950 | 0.695 |
| 0.2927 | 666.67 | 10000 | 0.7646 | 0.6930 | 0.693 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_15M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_15M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-04-17T08:25:40+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_1-seqsight\_4096\_512\_15M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4655
* F1 Score: 0.7825
* Accuracy: 0.783
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# stablelm-zephyr-3b-dare3
stablelm-zephyr-3b-dare3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) (layers 0-28)
* [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) (layers 4-32)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - layer_range: [0, 28]
        model: stabilityai/stablelm-zephyr-3b
        parameters:
          density: [1, 0.5, 0.1]
          weight: 0.5
      - layer_range: [4, 32]
        model: stabilityai/stablelm-zephyr-3b
        parameters:
          density: [0.1, 0.5, 1]
          weight:
            - filter: mlp
              value: 0.5
            - value: 0
merge_method: dare_ties
base_model: stabilityai/stablelm-zephyr-3b
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/stablelm-zephyr-3b-dare3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "stabilityai/stablelm-zephyr-3b"], "base_model": ["stabilityai/stablelm-zephyr-3b", "stabilityai/stablelm-zephyr-3b"]} | aipib/stablelm-zephyr-3b-dare3 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"stabilityai/stablelm-zephyr-3b",
"conversational",
"base_model:stabilityai/stablelm-zephyr-3b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:26:03+00:00 | [] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #stabilityai/stablelm-zephyr-3b #conversational #base_model-stabilityai/stablelm-zephyr-3b #autotrain_compatible #endpoints_compatible #region-us
|
# stablelm-zephyr-3b-dare3
stablelm-zephyr-3b-dare3 is a merge of the following models using LazyMergekit:
* stabilityai/stablelm-zephyr-3b
* stabilityai/stablelm-zephyr-3b
## Configuration
## Usage
| [
"# stablelm-zephyr-3b-dare3\n\nstablelm-zephyr-3b-dare3 is a merge of the following models using LazyMergekit:\n* stabilityai/stablelm-zephyr-3b\n* stabilityai/stablelm-zephyr-3b",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #stabilityai/stablelm-zephyr-3b #conversational #base_model-stabilityai/stablelm-zephyr-3b #autotrain_compatible #endpoints_compatible #region-us \n",
"# stablelm-zephyr-3b-dare3\n\nstablelm-zephyr-3b-dare3 is a merge of the following models using LazyMergekit:\n* stabilityai/stablelm-zephyr-3b\n* stabilityai/stablelm-zephyr-3b",
"## Configuration",
"## Usage"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
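
No usage example is provided. As a placeholder, the following sketch is inferred only from the repository's `StableDiffusionPipeline` tag; the prompt, dtype, and device placement are assumptions.

```python
# Hedged loading sketch inferred from the StableDiffusionPipeline tag;
# the prompt and generation settings are purely illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("phamthanhdung/v51", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on the moon").images[0]
image.save("sample.png")
```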
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | phamthanhdung/v51 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-17T08:28:07+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
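
No usage example is provided. The sketch below is an assumption: it loads the adapter from this repository on top of the base model named in the card's metadata, and the prompt and generation settings are illustrative only.

```python
# Hedged sketch only: adapter-on-base loading for this PEFT checkpoint.
# The task, prompt format, and generation settings are assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-5.7bmqa-base-2.0"  # base model as listed in the metadata
adapter_id = "JVictor-CC/deepseek-coder-5.7bmqa-base"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```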
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "deepseek-ai/deepseek-coder-5.7bmqa-base-2.0"} | JVictor-CC/deepseek-coder-5.7bmqa-base | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-5.7bmqa-base-2.0",
"region:us"
] | null | 2024-04-17T08:28:38+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-5.7bmqa-base-2.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-5.7bmqa-base-2.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
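
No usage example is provided. The sketch below is an assumption based on the repository's `phi`, `text-generation`, and `custom_code` tags; the prompt and generation settings are illustrative only.

```python
# Hedged loading sketch; trust_remote_code=True is assumed from the custom_code tag.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amanayush/phi2_1k_4e_merged_with_base_model"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, trust_remote_code=True)

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```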
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | amanayush/phi2_1k_4e_merged_with_base_model | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:29:43+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dal-bert-finetuned-emotion
This model is a fine-tuned version of [sharif-dal/dal-bert](https://huggingface.co/sharif-dal/dal-bert) on snappfood's comments dataset.
- 0 -> Positive sentiment
- 1 -> Negative sentiment
It achieves the following results on the evaluation set:
- Loss: 0.2893
- Accuracy: 0.8821
- F1: 0.8820
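
A minimal inference sketch (not part of the original card) is shown below; it assumes the standard `text-classification` pipeline matches the preprocessing used during fine-tuning, and uses the card's widget sentence as input.

```python
# Minimal usage sketch; assumes default text-classification pipeline preprocessing.
from transformers import pipeline

classifier = pipeline("text-classification", model="Respair/dal-bert-finetuned-emotion")
result = classifier("اه این غذا خیلی آشغاله!")  # "Ugh, this food is awful!"
print(result)  # expected to map to label 1 (negative sentiment)
```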
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3131 | 1.0 | 1086 | 0.2859 | 0.8810 | 0.8808 |
| 0.2506 | 2.0 | 2172 | 0.2893 | 0.8821 | 0.8820 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "sharif-dal/dal-bert", "widget": [{"text": "\u0627\u0647 \u0627\u06cc\u0646 \u063a\u0630\u0627 \u062e\u06cc\u0644\u06cc \u0622\u0634\u063a\u0627\u0644\u0647!"}], "model-index": [{"name": "dal-bert-finetuned-sentiment_classifier(binary)", "results": []}]} | Respair/dal-bert-finetuned-emotion | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:sharif-dal/dal-bert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:30:10+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-sharif-dal/dal-bert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| dal-bert-finetuned-emotion
==========================
This model is a fine-tuned version of sharif-dal/dal-bert on snappfood's comments dataset.
* 0 -> Positive sentiment
* 1 -> Negative sentiment
It achieves the following results on the evaluation set:
* Loss: 0.2893
* Accuracy: 0.8821
* F1: 0.8820
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 48
* eval\_batch\_size: 48
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-sharif-dal/dal-bert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | phamthanhdung/v60 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-17T08:30:24+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_fasttext_cc.pt.300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [https://fasttext.cc/docs/en/crawl-vectors.html](https://fasttext.cc/docs/en/crawl-vectors.html).
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_fasttext_cc.pt.300')
embeddings = model.encode(sentences)
print(embeddings)
```
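Because each sentence is mapped to a plain dense vector, tasks like semantic search reduce to simple vector comparisons. A minimal sketch using cosine similarity (the Portuguese example sentences are illustrative placeholders, not from the original card):

```python
from sentence_transformers import SentenceTransformer, util

# illustrative Portuguese sentences; replace with your own corpus
sentences = ["O gato dorme no sofá", "Um felino descansa no sofá", "Vai chover amanhã"]

model = SentenceTransformer('mteb-pt/average_fasttext_cc.pt.300')
embeddings = model.encode(sentences, convert_to_tensor=True)

# pairwise cosine similarities; semantically close sentences score higher
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```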
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(2000001, 300)
)
(1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@inproceedings{grave2018learning,
title={Learning Word Vectors for 157 Languages},
author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_fasttext_cc.pt.300 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:32:07+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_fasttext_cc.pt.300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_fasttext_cc.pt.300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_fasttext_cc.pt.300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1569
- F1: 0.8672
## Model description
More information needed
## Intended uses & limitations
More information needed
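The model name suggests fine-tuning on the German subset of PAN-X (WikiANN) for named-entity recognition. A minimal inference sketch under that assumption (the example sentence and aggregation strategy are illustrative):

```python
from transformers import pipeline

# assumes the checkpoint carries NER-style labels, as the model name suggests
ner = pipeline(
    "token-classification",
    model="taoyoung/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Angela Merkel wurde in Hamburg geboren."))
```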
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 1573 | 0.1614 | 0.8276 |
| 0.1367 | 2.0 | 3146 | 0.1483 | 0.8550 |
| 0.0784 | 3.0 | 4719 | 0.1569 | 0.8672 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]} | taoyoung/xlm-roberta-base-finetuned-panx-de | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:32:07+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-panx-de
==================================
This model is a fine-tuned version of xlm-roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1569
* F1: 0.8672
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on a Tesla V100-PCIE-32GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing the GPU processes, stopping as soon as the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo bigscience/bloomz-3b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/bigscience-bloomz-3b-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/bigscience-bloomz-3b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-3b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloomz-3b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/bigscience-bloomz-3b-HQQ-1bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:32:57+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo bigscience/bloomz-3b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloomz-3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on Tesla V100-PCIE-32GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloomz-3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloomz-3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on Tesla V100-PCIE-32GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloomz-3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloomz-3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_fasttext_wiki.pt.align.300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [https://fasttext.cc/docs/en/aligned-vectors.html](https://fasttext.cc/docs/en/aligned-vectors.html).
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.align.300')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(592109, 300)
)
(1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@InProceedings{joulin2018loss,
title={Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion},
author={Joulin, Armand and Bojanowski, Piotr and Mikolov, Tomas and J{\'e}gou, Herv{\'e} and Grave, Edouard},
year={2018},
booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
}
@article{bojanowski2017enriching,
title={Enriching Word Vectors with Subword Information},
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
journal={Transactions of the Association for Computational Linguistics},
volume={5},
year={2017},
issn={2307-387X},
pages={135--146}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_fasttext_wiki.pt.align.300 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:34:46+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_fasttext_wiki.URL.300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_fasttext_wiki.URL.300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_fasttext_wiki.URL.300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_15M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8805
- F1 Score: 0.7176
- Accuracy: 0.718
## Model description
More information needed
## Intended uses & limitations
More information needed
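Since this repository holds a PEFT adapter rather than a full model, inference means loading the seqsight base checkpoint and attaching the adapter. A rough sketch, assuming the base model exposes a standard sequence-classification interface (the label count, the DNA example, and the `trust_remote_code` flag are assumptions):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_15M-L32_all"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary task assumed
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # hypothetical DNA input
print(model(**inputs).logits.softmax(-1))
```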
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6063 | 20.0 | 200 | 0.5830 | 0.7062 | 0.707 |
| 0.5079 | 40.0 | 400 | 0.5834 | 0.7090 | 0.709 |
| 0.4645 | 60.0 | 600 | 0.5746 | 0.7269 | 0.727 |
| 0.4302 | 80.0 | 800 | 0.5683 | 0.7195 | 0.72 |
| 0.4071 | 100.0 | 1000 | 0.5656 | 0.7386 | 0.739 |
| 0.3921 | 120.0 | 1200 | 0.5758 | 0.7373 | 0.739 |
| 0.3793 | 140.0 | 1400 | 0.5749 | 0.7391 | 0.74 |
| 0.3682 | 160.0 | 1600 | 0.5791 | 0.7394 | 0.74 |
| 0.3572 | 180.0 | 1800 | 0.5667 | 0.7379 | 0.74 |
| 0.3466 | 200.0 | 2000 | 0.6072 | 0.7471 | 0.749 |
| 0.3332 | 220.0 | 2200 | 0.5956 | 0.7534 | 0.754 |
| 0.3231 | 240.0 | 2400 | 0.5712 | 0.7623 | 0.764 |
| 0.3083 | 260.0 | 2600 | 0.5707 | 0.7618 | 0.762 |
| 0.2981 | 280.0 | 2800 | 0.5905 | 0.7705 | 0.772 |
| 0.2874 | 300.0 | 3000 | 0.5625 | 0.7776 | 0.779 |
| 0.2741 | 320.0 | 3200 | 0.6045 | 0.7768 | 0.779 |
| 0.2637 | 340.0 | 3400 | 0.5745 | 0.7885 | 0.789 |
| 0.2502 | 360.0 | 3600 | 0.5988 | 0.7868 | 0.789 |
| 0.2427 | 380.0 | 3800 | 0.5813 | 0.7958 | 0.797 |
| 0.2303 | 400.0 | 4000 | 0.6029 | 0.7997 | 0.801 |
| 0.2228 | 420.0 | 4200 | 0.6105 | 0.7967 | 0.798 |
| 0.2128 | 440.0 | 4400 | 0.6400 | 0.8045 | 0.806 |
| 0.2041 | 460.0 | 4600 | 0.6063 | 0.7995 | 0.801 |
| 0.1979 | 480.0 | 4800 | 0.6433 | 0.7940 | 0.796 |
| 0.192 | 500.0 | 5000 | 0.6300 | 0.8003 | 0.802 |
| 0.1833 | 520.0 | 5200 | 0.6213 | 0.8128 | 0.814 |
| 0.1775 | 540.0 | 5400 | 0.6495 | 0.8017 | 0.803 |
| 0.1705 | 560.0 | 5600 | 0.6616 | 0.8086 | 0.81 |
| 0.1653 | 580.0 | 5800 | 0.6614 | 0.8097 | 0.811 |
| 0.16 | 600.0 | 6000 | 0.6329 | 0.8035 | 0.805 |
| 0.1557 | 620.0 | 6200 | 0.6855 | 0.8085 | 0.81 |
| 0.1521 | 640.0 | 6400 | 0.6404 | 0.8090 | 0.81 |
| 0.1469 | 660.0 | 6600 | 0.6747 | 0.8059 | 0.807 |
| 0.1436 | 680.0 | 6800 | 0.6880 | 0.8034 | 0.805 |
| 0.1406 | 700.0 | 7000 | 0.6774 | 0.8090 | 0.81 |
| 0.1378 | 720.0 | 7200 | 0.6651 | 0.8080 | 0.809 |
| 0.1333 | 740.0 | 7400 | 0.6861 | 0.8000 | 0.801 |
| 0.1306 | 760.0 | 7600 | 0.7087 | 0.8069 | 0.808 |
| 0.1288 | 780.0 | 7800 | 0.6951 | 0.8070 | 0.808 |
| 0.1263 | 800.0 | 8000 | 0.6925 | 0.8061 | 0.807 |
| 0.1272 | 820.0 | 8200 | 0.7014 | 0.8100 | 0.811 |
| 0.1226 | 840.0 | 8400 | 0.6971 | 0.8070 | 0.808 |
| 0.1207 | 860.0 | 8600 | 0.7008 | 0.8040 | 0.805 |
| 0.119 | 880.0 | 8800 | 0.7114 | 0.8110 | 0.812 |
| 0.1172 | 900.0 | 9000 | 0.6964 | 0.8081 | 0.809 |
| 0.1172 | 920.0 | 9200 | 0.7092 | 0.8060 | 0.807 |
| 0.1166 | 940.0 | 9400 | 0.7173 | 0.8080 | 0.809 |
| 0.1145 | 960.0 | 9600 | 0.7052 | 0.8050 | 0.806 |
| 0.1142 | 980.0 | 9800 | 0.7108 | 0.8050 | 0.806 |
| 0.1163 | 1000.0 | 10000 | 0.7095 | 0.8039 | 0.805 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_15M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_15M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-04-17T08:34:59+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_4-seqsight\_4096\_512\_15M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8805
* F1 Score: 0.7176
* Accuracy: 0.718
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_usp3_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3840
- Rewards/chosen: -5.8994
- Rewards/rejected: -15.8549
- Rewards/accuracies: 0.75
- Rewards/margins: 9.9555
- Logps/rejected: -125.7216
- Logps/chosen: -114.4451
- Logits/rejected: -0.5607
- Logits/chosen: -0.5006
## Model description
More information needed
## Intended uses & limitations
More information needed
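Since this repository contains a PEFT adapter on top of Llama-2-7b-chat, inference typically means loading the gated base model and attaching the adapter. A minimal sketch (access to the base model, the prompt, and the generation settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"   # gated; requires an accepted license
adapter_id = "guoyu-zhang/model_usp3_dpo9"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Explain DPO in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```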
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1194 | 2.67 | 100 | 1.1073 | 5.0437 | 2.1933 | 0.7400 | 2.8504 | -105.6681 | -102.2862 | 0.0014 | 0.0428 |
| 0.0189 | 5.33 | 200 | 2.5034 | -3.9384 | -11.8385 | 0.7000 | 7.9001 | -121.2590 | -112.2662 | -0.7943 | -0.7591 |
| 0.0521 | 8.0 | 300 | 2.6657 | 2.8593 | -3.0059 | 0.6700 | 5.8652 | -111.4450 | -104.7133 | -0.3470 | -0.2646 |
| 0.0001 | 10.67 | 400 | 2.4434 | -6.5026 | -16.5073 | 0.7400 | 10.0046 | -126.4465 | -115.1154 | -0.5717 | -0.5110 |
| 0.0 | 13.33 | 500 | 2.3881 | -5.9046 | -15.8560 | 0.75 | 9.9513 | -125.7228 | -114.4510 | -0.5605 | -0.5010 |
| 0.0 | 16.0 | 600 | 2.3960 | -5.9125 | -15.8411 | 0.75 | 9.9286 | -125.7063 | -114.4597 | -0.5602 | -0.5003 |
| 0.0 | 18.67 | 700 | 2.3936 | -5.8978 | -15.8162 | 0.75 | 9.9184 | -125.6786 | -114.4434 | -0.5604 | -0.5003 |
| 0.0 | 21.33 | 800 | 2.3929 | -5.9227 | -15.8715 | 0.75 | 9.9488 | -125.7401 | -114.4710 | -0.5609 | -0.5010 |
| 0.0 | 24.0 | 900 | 2.3975 | -5.9447 | -15.8363 | 0.75 | 9.8917 | -125.7010 | -114.4955 | -0.5609 | -0.5009 |
| 0.0 | 26.67 | 1000 | 2.3840 | -5.8994 | -15.8549 | 0.75 | 9.9555 | -125.7216 | -114.4451 | -0.5607 | -0.5006 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_usp3_dpo9", "results": []}]} | guoyu-zhang/model_usp3_dpo9 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-04-17T08:35:16+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
| model\_usp3\_dpo9
=================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3840
* Rewards/chosen: -5.8994
* Rewards/rejected: -15.8549
* Rewards/accuracies: 0.75
* Rewards/margins: 9.9555
* Logps/rejected: -125.7216
* Logps/chosen: -114.4451
* Logits/rejected: -0.5607
* Logits/chosen: -0.5006
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
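The repository tags indicate a Stable Diffusion XL pipeline, so a sketch along the following lines should apply (the prompt, dtype, and device placement are assumptions):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# assumes this repo hosts a standard SDXL pipeline, as the tags suggest
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/WildCardX_XLPony", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("sample.png")
```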
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "diffusers"} | Niggendar/WildCardX_XLPony | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-04-17T08:35:17+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
feature-extraction | transformers |
# Malaysian Mistral 191M on MLM task using 512 context length
Replicating https://github.com/McGill-NLP/llm2vec using https://huggingface.co/mesolitica/malaysian-mistral-191M-4096, done by https://github.com/aisyahrzk (https://twitter.com/aisyahhhrzk).
Source code at https://github.com/mesolitica/malaya/tree/master/session/llm2vec
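A rough sketch for pulling sentence embeddings out of the MLM-trained encoder; the mean-pooling step and the `trust_remote_code` loading path are assumptions, so see the source code above for the exact llm2vec recipe:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "mesolitica/malaysian-mistral-191M-MLM-512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Saya suka makan nasi lemak", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)

# mean-pool over tokens, masking padding (assumed pooling strategy)
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (hidden * mask).sum(1) / mask.sum(1)
print(embedding.shape)
```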
WandB, https://wandb.ai/aisyahrazak/mistral-191M-mlm?nw=nwuseraisyahrazak | {"language": ["ms"], "library_name": "transformers"} | mesolitica/malaysian-mistral-191M-MLM-512 | null | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"custom_code",
"ms",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:35:18+00:00 | [] | [
"ms"
] | TAGS
#transformers #safetensors #mistral #feature-extraction #custom_code #ms #text-generation-inference #region-us
|
# Malaysian Mistral 191M on MLM task using 512 context length
Replicating URL using URL done by URL URL
Source code at URL
WandB, URL | [
"# Malaysian Mistral 191M on MLM task using 512 context length\n\nReplicating URL using URL done by URL URL\n\nSource code at URL\n\nWandB, URL"
] | [
"TAGS\n#transformers #safetensors #mistral #feature-extraction #custom_code #ms #text-generation-inference #region-us \n",
"# Malaysian Mistral 191M on MLM task using 512 context length\n\nReplicating URL using URL done by URL URL\n\nSource code at URL\n\nWandB, URL"
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_fasttext_wiki.pt.300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [https://fasttext.cc/docs/en/pretrained-vectors.html](https://fasttext.cc/docs/en/pretrained-vectors.html).
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.300')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(592109, 300)
)
(1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@article{bojanowski2017enriching,
title={Enriching Word Vectors with Subword Information},
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
journal={Transactions of the Association for Computational Linguistics},
volume={5},
year={2017},
issn={2307-387X},
pages={135--146}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_fasttext_wiki.pt.300 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:36:21+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_fasttext_wiki.pt.300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_fasttext_wiki.pt.300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_fasttext_wiki.pt.300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral-8x7B-orpo-en-de
This model is a fine-tuned version of [mistralai/Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
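
These settings map fairly directly onto `transformers.TrainingArguments`; a hedged sketch follows (the `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the optimizer defaults, so they need no explicit flags):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mixtral-orpo-en-de",     # placeholder path
    learning_rate=5e-6,
    per_device_train_batch_size=1,       # x 32 devices = total train batch 32
    per_device_eval_batch_size=8,        # x 32 devices = total eval batch 256
    seed=42,
    lr_scheduler_type="inverse_sqrt",
    warmup_steps=100,
    num_train_epochs=3,
)
```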
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"tags": ["alignment-handbook", "generated_from_trainer"], "datasets": ["maxidl/distilabel-capybara-dpo-7k-binarized_en_de"], "base_model": "mistralai/Mixtral-8x22B-v0.1", "model-index": [{"name": "Mixtral-8x7B-orpo-en-de", "results": []}]} | maxidl/Mixtral-8x22B-v0.1-capybara-orpo-en-de | null | [
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:maxidl/distilabel-capybara-dpo-7k-binarized_en_de",
"base_model:mistralai/Mixtral-8x22B-v0.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:37:32+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mixtral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/distilabel-capybara-dpo-7k-binarized_en_de #base_model-mistralai/Mixtral-8x22B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B-orpo-en-de
This model is a fine-tuned version of mistralai/Mixtral-8x22B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# Mixtral-8x7B-orpo-en-de\n\nThis model is a fine-tuned version of mistralai/Mixtral-8x22B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 32\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: inverse_sqrt\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mixtral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/distilabel-capybara-dpo-7k-binarized_en_de #base_model-mistralai/Mixtral-8x22B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B-orpo-en-de\n\nThis model is a fine-tuned version of mistralai/Mixtral-8x22B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 32\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: inverse_sqrt\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_pt_nilc_fasttext_cbow_s300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_pt_nilc_fasttext_cbow_s300')
embeddings = model.encode(sentences)
print(embeddings)
```
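
Since the model mean-pools static fastText vectors into a sentence embedding, a quick sanity check is to compare two paraphrases with cosine similarity. A minimal sketch (the Portuguese example sentences are made up for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_pt_nilc_fasttext_cbow_s300')

# two rough paraphrases should land close together in the 300-d space
embeddings = model.encode(["O gato dorme no sofá.", "Um felino descansa no sofá."])
print(util.cos_sim(embeddings[0], embeddings[1]))
```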
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(929606, 300)
)
(1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@inproceedings{hartmann2017portuguese,
title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
author = {Hartmann, Nathan S and
         Fonseca, Erick R and
         Shulby, Christopher D and
         Treviso, Marcos V and
         Rodrigues, J{\'{e}}ssica S and
         Alu{\'{\i}}sio, Sandra Maria},
year = {2017},
publisher = {SBC},
booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_pt_nilc_fasttext_cbow_s300 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:38:23+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_pt_nilc_fasttext_cbow_s300
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_pt_nilc_fasttext_cbow_s300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_pt_nilc_fasttext_cbow_s300\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers | # R136a1/InfinityLake-2x7B AWQ
- Model creator: [R136a1](https://huggingface.co/R136a1)
- Original model: [InfinityLake-2x7B](https://huggingface.co/R136a1/InfinityLake-2x7B)

## Model Summary
Experimental model from [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) models. Merged to MoE model with 2x7B parameters.
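
As a 4-bit AWQ quant, the checkpoint should load through the usual `transformers` path once `autoawq` is installed; a hedged sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/InfinityLake-2x7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# recent transformers reads the AWQ quantization config from the repo;
# device_map="auto" assumes accelerate is available
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```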
| {"language": ["en"], "license": "apache-2.0", "tags": ["safetensors", "mixtral", "not-for-all-audiences", "nsfw", "quantized", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "text-generation-inference"], "inference": false, "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n", "quantized_by": "Suparious"} | solidrust/InfinityLake-2x7B-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"not-for-all-audiences",
"nsfw",
"quantized",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T08:40:10+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #not-for-all-audiences #nsfw #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #en #license-apache-2.0 #region-us
| # R136a1/InfinityLake-2x7B AWQ
- Model creator: R136a1
- Original model: InfinityLake-2x7B
!InfinityKuno-2x7B
## Model Summary
Experimental model from Endevor/InfinityRP-v1-7B and senseable/WestLake-7B-v2 models. Merged to MoE model with 2x7B parameters.
| [
"# R136a1/InfinityLake-2x7B AWQ\n\n- Model creator: R136a1\n- Original model: InfinityLake-2x7B\n\n!InfinityKuno-2x7B",
"## Model Summary\n\nExperimental model from Endevor/InfinityRP-v1-7B and senseable/WestLake-7B-v2 models. Merged to MoE model with 2x7B parameters."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #not-for-all-audiences #nsfw #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #en #license-apache-2.0 #region-us \n",
"# R136a1/InfinityLake-2x7B AWQ\n\n- Model creator: R136a1\n- Original model: InfinityLake-2x7B\n\n!InfinityKuno-2x7B",
"## Model Summary\n\nExperimental model from Endevor/InfinityRP-v1-7B and senseable/WestLake-7B-v2 models. Merged to MoE model with 2x7B parameters."
] |
text-to-image | diffusers |
# RealCartoon-Pixar_Miracle API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.

Replace the key in the code below and change **model_id** to "realcartoon-pixarmiracle"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/realcartoon-pixarmiracle)
Model link: [View model](https://modelslab.com/models/realcartoon-pixarmiracle)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "realcartoon-pixarmiracle",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | {"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true} | stablediffusionapi/realcartoon-pixarmiracle | null | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-17T08:40:19+00:00 | [] | [] | TAGS
#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# RealCartoon-Pixar_Miracle API Inference
!generated from URL
## Get API Key
Get an API key from ModelsLab API; no payment needed.

Replace the key in the code below and change model_id to "realcartoon-pixarmiracle"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs
Try model for free: Generate Images
Model link: View model
View all models: View Models
import requests
import json
url = "URL
payload = URL({
"key": "your_api_key",
"model_id": "realcartoon-pixarmiracle",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(URL)
> Use this coupon code to get 25% off DMGG0RBN | [
"# RealCartoon-Pixar_Miracle API Inference\n\n!generated from URL",
"## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"realcartoon-pixarmiracle\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"realcartoon-pixarmiracle\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN"
] | [
"TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# RealCartoon-Pixar_Miracle API Inference\n\n!generated from URL",
"## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"realcartoon-pixarmiracle\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"realcartoon-pixarmiracle\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_15M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6775
- F1 Score: 0.6384
- Accuracy: 0.64
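
Since this card ships a PEFT adapter rather than full weights, inference loads the seqsight base first and attaches the adapter on top. A hedged sketch (the binary `num_labels=2` and `trust_remote_code=True` are assumptions about the GUE setup):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M",
    num_labels=2,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_15M-L32_all")
```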
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6665 | 14.29 | 200 | 0.6299 | 0.6338 | 0.642 |
| 0.6214 | 28.57 | 400 | 0.6286 | 0.6377 | 0.639 |
| 0.6011 | 42.86 | 600 | 0.6436 | 0.6457 | 0.646 |
| 0.5835 | 57.14 | 800 | 0.6555 | 0.6341 | 0.634 |
| 0.569 | 71.43 | 1000 | 0.6592 | 0.6276 | 0.631 |
| 0.5573 | 85.71 | 1200 | 0.7019 | 0.6382 | 0.639 |
| 0.5471 | 100.0 | 1400 | 0.7003 | 0.6467 | 0.647 |
| 0.5372 | 114.29 | 1600 | 0.7257 | 0.6276 | 0.63 |
| 0.5279 | 128.57 | 1800 | 0.7276 | 0.6471 | 0.647 |
| 0.5202 | 142.86 | 2000 | 0.7203 | 0.6353 | 0.636 |
| 0.5118 | 157.14 | 2200 | 0.7076 | 0.6300 | 0.63 |
| 0.5032 | 171.43 | 2400 | 0.7512 | 0.6327 | 0.633 |
| 0.4952 | 185.71 | 2600 | 0.7623 | 0.6390 | 0.639 |
| 0.4872 | 200.0 | 2800 | 0.7659 | 0.6335 | 0.634 |
| 0.4785 | 214.29 | 3000 | 0.8014 | 0.6270 | 0.629 |
| 0.4723 | 228.57 | 3200 | 0.7825 | 0.6317 | 0.632 |
| 0.4632 | 242.86 | 3400 | 0.8105 | 0.6230 | 0.623 |
| 0.4565 | 257.14 | 3600 | 0.7917 | 0.6201 | 0.62 |
| 0.4499 | 271.43 | 3800 | 0.8215 | 0.6230 | 0.623 |
| 0.4429 | 285.71 | 4000 | 0.8071 | 0.6221 | 0.622 |
| 0.4337 | 300.0 | 4200 | 0.8032 | 0.6281 | 0.628 |
| 0.428 | 314.29 | 4400 | 0.8091 | 0.6281 | 0.628 |
| 0.4218 | 328.57 | 4600 | 0.8566 | 0.6278 | 0.628 |
| 0.414 | 342.86 | 4800 | 0.8648 | 0.6316 | 0.632 |
| 0.4086 | 357.14 | 5000 | 0.8490 | 0.6275 | 0.628 |
| 0.4017 | 371.43 | 5200 | 0.8761 | 0.6281 | 0.628 |
| 0.3963 | 385.71 | 5400 | 0.8668 | 0.6331 | 0.633 |
| 0.3907 | 400.0 | 5600 | 0.8701 | 0.6280 | 0.628 |
| 0.3845 | 414.29 | 5800 | 0.8800 | 0.6290 | 0.629 |
| 0.379 | 428.57 | 6000 | 0.8974 | 0.6321 | 0.632 |
| 0.3742 | 442.86 | 6200 | 0.8882 | 0.6271 | 0.627 |
| 0.3697 | 457.14 | 6400 | 0.8963 | 0.6256 | 0.626 |
| 0.3645 | 471.43 | 6600 | 0.9046 | 0.6331 | 0.633 |
| 0.3595 | 485.71 | 6800 | 0.9037 | 0.6250 | 0.625 |
| 0.3547 | 500.0 | 7000 | 0.9066 | 0.6199 | 0.62 |
| 0.3512 | 514.29 | 7200 | 0.9091 | 0.6220 | 0.622 |
| 0.3494 | 528.57 | 7400 | 0.9217 | 0.6240 | 0.624 |
| 0.3454 | 542.86 | 7600 | 0.9340 | 0.6270 | 0.627 |
| 0.3416 | 557.14 | 7800 | 0.9208 | 0.6181 | 0.618 |
| 0.3383 | 571.43 | 8000 | 0.9286 | 0.6201 | 0.62 |
| 0.3344 | 585.71 | 8200 | 0.9528 | 0.6131 | 0.613 |
| 0.3309 | 600.0 | 8400 | 0.9264 | 0.6177 | 0.618 |
| 0.3307 | 614.29 | 8600 | 0.9351 | 0.6161 | 0.616 |
| 0.327 | 628.57 | 8800 | 0.9450 | 0.6131 | 0.613 |
| 0.3259 | 642.86 | 9000 | 0.9445 | 0.6211 | 0.621 |
| 0.3245 | 657.14 | 9200 | 0.9606 | 0.6151 | 0.615 |
| 0.3227 | 671.43 | 9400 | 0.9481 | 0.6191 | 0.619 |
| 0.3212 | 685.71 | 9600 | 0.9572 | 0.6151 | 0.615 |
| 0.3202 | 700.0 | 9800 | 0.9499 | 0.6121 | 0.612 |
| 0.3207 | 714.29 | 10000 | 0.9518 | 0.6111 | 0.611 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_15M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_15M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-04-17T08:41:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_3-seqsight\_4096\_512\_15M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6775
* F1 Score: 0.6384
* Accuracy: 0.64
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_15M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5690
- F1 Score: 0.6978
- Accuracy: 0.698
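
As with the other GUE adapters in this series, the checkpoint is a PEFT adapter that attaches to the seqsight base; a hedged loading sketch (`num_labels=2` and `trust_remote_code=True` are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M",
    num_labels=2,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_15M-L32_all")
```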
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6232 | 20.0 | 200 | 0.6215 | 0.6517 | 0.659 |
| 0.5531 | 40.0 | 400 | 0.6163 | 0.6730 | 0.673 |
| 0.5275 | 60.0 | 600 | 0.6213 | 0.6705 | 0.671 |
| 0.5017 | 80.0 | 800 | 0.6305 | 0.6840 | 0.684 |
| 0.4817 | 100.0 | 1000 | 0.6395 | 0.6751 | 0.676 |
| 0.4673 | 120.0 | 1200 | 0.6411 | 0.6790 | 0.679 |
| 0.4542 | 140.0 | 1400 | 0.6654 | 0.6756 | 0.676 |
| 0.4431 | 160.0 | 1600 | 0.6558 | 0.6625 | 0.663 |
| 0.4294 | 180.0 | 1800 | 0.6924 | 0.6519 | 0.652 |
| 0.4172 | 200.0 | 2000 | 0.7058 | 0.6640 | 0.664 |
| 0.4045 | 220.0 | 2200 | 0.7269 | 0.6790 | 0.679 |
| 0.3935 | 240.0 | 2400 | 0.6998 | 0.6563 | 0.657 |
| 0.3812 | 260.0 | 2600 | 0.7085 | 0.6558 | 0.657 |
| 0.3704 | 280.0 | 2800 | 0.7457 | 0.6570 | 0.657 |
| 0.3607 | 300.0 | 3000 | 0.7681 | 0.6518 | 0.654 |
| 0.3493 | 320.0 | 3200 | 0.7793 | 0.6639 | 0.665 |
| 0.3401 | 340.0 | 3400 | 0.7973 | 0.6641 | 0.665 |
| 0.3297 | 360.0 | 3600 | 0.8323 | 0.6620 | 0.662 |
| 0.3227 | 380.0 | 3800 | 0.8349 | 0.6630 | 0.663 |
| 0.3122 | 400.0 | 4000 | 0.8535 | 0.6620 | 0.662 |
| 0.3054 | 420.0 | 4200 | 0.8417 | 0.6589 | 0.659 |
| 0.2973 | 440.0 | 4400 | 0.8677 | 0.6571 | 0.658 |
| 0.2903 | 460.0 | 4600 | 0.9014 | 0.6569 | 0.657 |
| 0.281 | 480.0 | 4800 | 0.8887 | 0.6544 | 0.655 |
| 0.2769 | 500.0 | 5000 | 0.8684 | 0.6625 | 0.663 |
| 0.2689 | 520.0 | 5200 | 0.9162 | 0.6640 | 0.664 |
| 0.2621 | 540.0 | 5400 | 0.9498 | 0.6588 | 0.659 |
| 0.2545 | 560.0 | 5600 | 0.9313 | 0.6559 | 0.656 |
| 0.2521 | 580.0 | 5800 | 0.9277 | 0.6599 | 0.66 |
| 0.2461 | 600.0 | 6000 | 0.9019 | 0.6477 | 0.648 |
| 0.2413 | 620.0 | 6200 | 0.9414 | 0.652 | 0.652 |
| 0.2368 | 640.0 | 6400 | 0.9497 | 0.6578 | 0.659 |
| 0.2316 | 660.0 | 6600 | 0.9750 | 0.6617 | 0.662 |
| 0.2263 | 680.0 | 6800 | 1.0003 | 0.6589 | 0.659 |
| 0.2218 | 700.0 | 7000 | 0.9775 | 0.6639 | 0.664 |
| 0.2171 | 720.0 | 7200 | 1.0029 | 0.6598 | 0.66 |
| 0.2134 | 740.0 | 7400 | 0.9766 | 0.6516 | 0.652 |
| 0.2112 | 760.0 | 7600 | 1.0015 | 0.6618 | 0.662 |
| 0.2074 | 780.0 | 7800 | 1.0218 | 0.6688 | 0.669 |
| 0.205 | 800.0 | 8000 | 1.0238 | 0.6640 | 0.664 |
| 0.2023 | 820.0 | 8200 | 1.0046 | 0.6526 | 0.653 |
| 0.201 | 840.0 | 8400 | 1.0158 | 0.6588 | 0.659 |
| 0.197 | 860.0 | 8600 | 1.0188 | 0.6608 | 0.661 |
| 0.1958 | 880.0 | 8800 | 1.0273 | 0.6590 | 0.659 |
| 0.1943 | 900.0 | 9000 | 1.0257 | 0.6588 | 0.659 |
| 0.1921 | 920.0 | 9200 | 1.0493 | 0.6579 | 0.658 |
| 0.1925 | 940.0 | 9400 | 1.0391 | 0.6619 | 0.662 |
| 0.1901 | 960.0 | 9600 | 1.0401 | 0.6609 | 0.661 |
| 0.1902 | 980.0 | 9800 | 1.0325 | 0.6618 | 0.662 |
| 0.1875 | 1000.0 | 10000 | 1.0397 | 0.6629 | 0.663 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_15M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_15M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-04-17T08:41:14+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_2-seqsight\_4096\_512\_15M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5690
* F1 Score: 0.6978
* Accuracy: 0.698
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [tomaszki/mistral-33](https://huggingface.co/tomaszki/mistral-33)
* [zzttbrdd/sn6_04m](https://huggingface.co/zzttbrdd/sn6_04m)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: zzttbrdd/sn6_04m
layer_range: [0, 32]
- model: tomaszki/mistral-33
layer_range: [0, 32]
merge_method: slerp
base_model: zzttbrdd/sn6_04m
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.35
dtype: bfloat16
```
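
For intuition, SLERP walks along the arc between the two parent tensors instead of the straight line, and the `t` lists above choose a different blend point per layer for `self_attn` versus `mlp` modules. A rough NumPy sketch of the core operation (mergekit's real implementation handles shapes and edge cases differently):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical interpolation between flattened weight tensors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:                      # near-parallel vectors: plain lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```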
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["tomaszki/mistral-33", "zzttbrdd/sn6_04m"]} | Sumail/Ame6 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:tomaszki/mistral-33",
"base_model:zzttbrdd/sn6_04m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:41:30+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-tomaszki/mistral-33 #base_model-zzttbrdd/sn6_04m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* tomaszki/mistral-33
* zzttbrdd/sn6_04m
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* tomaszki/mistral-33\n* zzttbrdd/sn6_04m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-tomaszki/mistral-33 #base_model-zzttbrdd/sn6_04m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* tomaszki/mistral-33\n* zzttbrdd/sn6_04m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-15-layer](https://huggingface.co/Citaman/command-r-15-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-15-layer
layer_range: [0, 14]
- model: Citaman/command-r-15-layer
layer_range: [1, 15]
merge_method: slerp
base_model: Citaman/command-r-15-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
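
Note that both merge inputs are the same 15-layer checkpoint with the layer window shifted by one, so each output layer is a SLERP blend of two adjacent source layers and the stack shrinks from 15 to 14. A tiny illustration of the pairing:

```python
# output layer i blends source layers i and i+1 of Citaman/command-r-15-layer
pairs = list(zip(range(0, 14), range(1, 15)))
print(pairs)  # [(0, 1), (1, 2), ..., (13, 14)]
```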
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-15-layer"]} | Citaman/command-r-14-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-15-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:42:58+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-15-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-15-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-15-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-15-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-15-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Reihaneh/wav2vec2_fy_nl_en_de_sv_common_voice_9 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:43:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # perlthoughts/Falkor-7b AWQ
- Model creator: [perlthoughts](https://huggingface.co/perlthoughts)
- Original model: [Falkor-7b](https://huggingface.co/perlthoughts/Falkor-7b)
<img src="falkor.png" width="300">
## Model summary
dragon-mistral-7b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model.
DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.
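
A hedged usage sketch for the AWQ build (assumes `autoawq` plus a recent `transformers`, and that the tokenizer ships a ChatML chat template as the card's tags suggest):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Falkor-7b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the indemnification clause in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```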
| {"license": "apache-2.0", "tags": ["quantized", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious", "model-index": [{"name": "Falkor-7b", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 68.26, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Falkor-7b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 85.84, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Falkor-7b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.98, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Falkor-7b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 63.08}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Falkor-7b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 80.35, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Falkor-7b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 60.5, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Falkor-7b", "name": "Open LLM Leaderboard"}}]}]} | solidrust/Falkor-7b-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"quantized",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:44:18+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #license-apache-2.0 #model-index #text-generation-inference #region-us
| # perlthoughts/Falkor-7b AWQ
- Model creator: perlthoughts
- Original model: Falkor-7b
<img src="URL" width="300">
## Model summary
dragon-mistral-7b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model.
DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.
| [
"# perlthoughts/Falkor-7b AWQ\n\n- Model creator: perlthoughts\n- Original model: Falkor-7b\n\n<img src=\"URL\" width=\"300\">",
"## Model summary\n\ndragon-mistral-7b-v0 part of the dRAGon (\"Delivering RAG On ...\") model series, RAG-instruct trained on top of a Mistral-7B base model.\n\nDRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #license-apache-2.0 #model-index #text-generation-inference #region-us \n",
"# perlthoughts/Falkor-7b AWQ\n\n- Model creator: perlthoughts\n- Original model: Falkor-7b\n\n<img src=\"URL\" width=\"300\">",
"## Model summary\n\ndragon-mistral-7b-v0 part of the dRAGon (\"Delivering RAG On ...\") model series, RAG-instruct trained on top of a Mistral-7B base model.\n\nDRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
density: 0.4
weight: 0.5
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: 0.4
weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
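# density 0.4 applies DARE: roughly 60% of each model's delta from the base is
# dropped at random and the rest rescaled; weight 0.5 gives both donors an
# equal vote in the TIES sign-election and weighted sum; int8_mask stores the
# intermediate masks in int8 to cut peak memory.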
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.1", "Open-Orca/Mistral-7B-OpenOrca", "mistralai/Mistral-7B-v0.1"]} | jpquiroga/Mistral_7B_dare_ties_merge_instruct_open_orca | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:44:34+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #base_model-mistralai/Mistral-7B-Instruct-v0.1 #base_model-Open-Orca/Mistral-7B-OpenOrca #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method using mistralai/Mistral-7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* mistralai/Mistral-7B-Instruct-v0.1
* Open-Orca/Mistral-7B-OpenOrca
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.1\n* Open-Orca/Mistral-7B-OpenOrca",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #base_model-mistralai/Mistral-7B-Instruct-v0.1 #base_model-Open-Orca/Mistral-7B-OpenOrca #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.1\n* Open-Orca/Mistral-7B-OpenOrca",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-to-speech | null |
<img src="https://huggingface.co/the-cramer-project/akylai-tts-mini/resolve/main/photo_2024-04-07_15-59-52.png" alt="image" width="300" height="auto">
# 🇰🇬 AkylAI, TTS for Kyrgyz language
We present a model for the Kyrgyz language, trained on 13 hours of speech (about 7,000 samples), complete with source code and training scripts. The architecture is based on Matcha-TTS.
It's a new approach to non-autoregressive neural TTS that uses [conditional flow matching](https://arxiv.org/abs/2210.02747) (similar to [rectified flows](https://arxiv.org/abs/2209.03003)) to speed up ODE-based speech synthesis. Our method:
- Is probabilistic
- Has compact memory footprint
- Sounds highly natural
- Is very fast to synthesise from
You can try our *AkylAI TTS* by visiting [SPACE](https://huggingface.co/spaces/the-cramer-project/akylai-tts-mini) and read [ICASSP 2024 paper](https://arxiv.org/abs/2309.03199) for more details.
# Inference
## Run via terminal
It is recommended to start by setting up a virtual environment using `venv`.
1. Clone this repository and install all modules and dependencies by running the commands:
```
git clone https://github.com/Akyl-AI/tts-mini
cd Matcha-TTS
pip install -e .
apt-get install espeak-ng
```
2. Run with CLI arguments:
- To synthesise from given text, run:
```bash
matcha-tts --text "<INPUT TEXT>"
```
- To synthesise from a file, run:
```bash
matcha-tts --file <PATH TO FILE>
```
- Speaking rate
```bash
matcha-tts --text "<INPUT TEXT>" --speaking_rate 1.0
```
- Sampling temperature
```bash
matcha-tts --text "<INPUT TEXT>" --temperature 0.667
```
- Euler ODE solver steps
```bash
matcha-tts --text "<INPUT TEXT>" --steps 10
```
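The options above can be combined in a single call, for example to slow the speech slightly while fixing the temperature and the number of ODE steps:

```bash
matcha-tts --text "<INPUT TEXT>" --speaking_rate 0.9 --temperature 0.667 --steps 10
```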
# Train with your own dataset.
## Dataset
For training this model, organize your data similarly to [LJ Speech](https://keithito.com/LJ-Speech-Dataset/). Each audio file should be a single-channel 16-bit PCM WAV with a sample rate of 22050 Hz. WAV files must have unique names, for example:
```
file_1.wav
file_2.wav
file_3.wav
file_4.wav
....
file_12454.wav
file_12455.wav
```
They should also be placed at the root of the project directory in a separate folder.
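If your source audio is not yet single-channel 16-bit PCM at 22050 Hz, it can usually be converted with ffmpeg; the command below is only illustrative and the file names are placeholders:

```bash
# -ac 1: mono, -ar 22050: sample rate, pcm_s16le: 16-bit PCM WAV encoding
ffmpeg -i input.mp3 -ac 1 -ar 22050 -c:a pcm_s16le file_1.wav
```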
Additionally, the project should include two `.txt` files for Train and Test with metadata for the files. The names of these files can be arbitrary, and their structure is as follows:
```
.../Matcha-TTS/<your folder name>/wavs/<filename>.wav|Баарыңарга салам, менин атым Акылай.
.../Matcha-TTS/<your folder name>/wavs/<filename>.wav|Мен бардыгын бул жерде Инновация борборунда көргөнүмө абдан кубанычтамын.
.../Matcha-TTS/<your folder name>/wavs/<filename>.wav|<your sentence>
.../Matcha-TTS/<your folder name>/wavs/<filename>.wav|<your sentence>
.../Matcha-TTS/<your folder name>/wavs/<filename>.wav|<your sentence>
........
```
Where each line is the FULL path to the file located in the folder with the uploaded audio, and a sentence in its original form with punctuation is written after the delimiter '|'.
It is advisable to clean the text of unnecessary and unwanted characters beforehand. Be careful with abbreviations and contractions.
The text preprocessing does not handle abbreviations or contractions. The built-in phonemizer can transcribe numbers, but to avoid errors it is better to write numbers out as words.
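As a rough illustration (not part of the project code), a small helper along these lines could strip unwanted characters and write filelist lines in the format shown above; the allowed-character set and function name are assumptions to adapt to your data:

```python
import re
from pathlib import Path

# Keep Latin/Cyrillic letters (including Kyrgyz Ү, Ө, Ң), digits and basic punctuation.
ALLOWED = re.compile(r"[^0-9A-Za-zА-Яа-яЁёҮүӨөҢң .,!?\-]")

def write_filelist(pairs, wav_dir, out_path):
    """pairs: iterable of (wav_filename, sentence); writes '<full path>|<sentence>' lines."""
    wav_dir = Path(wav_dir).resolve()
    with open(out_path, "w", encoding="utf-8") as f:
        for filename, sentence in pairs:
            cleaned = ALLOWED.sub("", sentence).strip()
            f.write(f"{wav_dir / filename}|{cleaned}\n")

write_filelist([("file_1.wav", "Баарыңарга салам, менин атым Акылай.")],
               "data/wavs", "akylai_audio_text_train_filelist.txt")
```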
## Dataset from Hugging Face
If you want to use a dataset that you store on Hugging Face, it would be convenient to use the `create-dataset` script, which will handle the downloading and all the data preparation, including .txt files with metadata.
Here's what its structure might look like:
```
DatasetDict({
train: Dataset({
features: ['id', 'raw_transcription', 'transcription', 'sentence_type', 'speaker_id', 'gender', 'audio'],
num_rows: 7016
})
test: Dataset({
features: ['id', 'raw_transcription', 'transcription', 'sentence_type', 'speaker_id', 'gender', 'audio'],
num_rows: 31
})
})
```
Where the most important and mandatory features are:
```
['raw_transcription', 'audio']
```
Where:
`raw_transcription` - this is the text of your sentences in the original version (the requirements are the same as in the previous method).
`audio` - these are audio files with metadata, which are dictionaries with keys:
* `array` - audio in the form of a `numpy.ndarray` with a `float32` data type
* `path` - file name
* `sampling_rate` - Sampling rate, which should be no less than 22050 Hz.
Example of a row:
```
{'array': array([-3.05175781e-05, -3.05175781e-05, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00]),
'path': '1353.wav',
'sampling_rate': 44100}
```
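For reference, such a dataset can be loaded and resampled with the `datasets` library; the repository name below is a placeholder:

```python
from datasets import load_dataset, Audio

# Placeholder repository name; replace with your own dataset on the Hub.
ds = load_dataset("<your-username>/<your-dataset>")

# Decode and resample the audio on the fly to the 22050 Hz used by the recipe above.
ds = ds.cast_column("audio", Audio(sampling_rate=22050))

row = ds["train"][0]
print(row["raw_transcription"])
print(row["audio"]["sampling_rate"], row["audio"]["array"].shape)
```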
## Process by Terminal
* **Load this repo and connect to HF**
```
git clone https://github.com/Akyl-AI/tts-mini
cd Matcha-TTS
pip install -e .
```
Install this:
```
apt-get install espeak-ng
```
Connect to HF (Skip this step if you are not using data from Hugging Face.)
```
git config --global credential.helper store
huggingface-cli login
```
* **Load the Data** (Skip this step if you are not using data from Hugging Face.)
The script will automatically create a folder with audio recordings and text files with metadata. During the process, enter the HF repository name and the dataset name.
```
create-dataset
# If you see a cat, then everything is fine!
```
* Go to `configs/data/akylai<OR YOUR FILE NAME>.yaml` and change
```yaml
train_filelist_path: data/filelists/akylai_audio_text_train_filelist.txt # path to your TXT with metadata
valid_filelist_path: data/filelists/akylai_audio_text_val_filelist.txt # path to your TXT with metadata
```
* Generate normalisation statistics with the yaml file of dataset configuration
```bash
matcha-data-stats -i akylai.yaml
# Output:
#{'mel_mean': -5.53662231756592, 'mel_std': 2.1161014277038574}
```
* Update these values in `configs/data/akylai.yaml` under `data_statistics` key.
```bash
data_statistics: # Computed for akylai(or your) dataset
mel_mean: -5.536622
mel_std: 2.116101
```
* **Train**
```
python matcha/train.py experiment=akylai
```
OR
```
python matcha/train.py experiment=akylai trainer.devices=[0,1]
```
* **Checkpoints**
Checkpoints will be saved in `./Matcha-TTS/logs/train/<MODEL_NAME>/runs/<DATE>_<TIME>/checkpoints`. Unload them or select the last few checkpoints.
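Once training has finished, you can synthesise directly from one of these checkpoints; assuming the standard Matcha-TTS `--checkpoint_path` option, the call looks roughly like this (paths are placeholders):

```bash
matcha-tts --text "<INPUT TEXT>" --checkpoint_path logs/train/<MODEL_NAME>/runs/<DATE>_<TIME>/checkpoints/last.ckpt
```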
# Credits
- Shivam Mehta ([GitHub](https://github.com/shivammehta25))
- The Cramer Project (Data collection and preprocessing) [Official Space](https://thecramer.com/)
- Amantur Amatov (Expert)
- Timur Turatali (Expert, Research)
- Den Pavlov (Research, Data preprocessing and ML engineering) [GitHub](https://github.com/simonlobgromov/Matcha-TTS)
- Ulan Abdurazakov (Environment Developer)
- Nursultan Bakashov (CEO)
## Citation information
If you use our code or otherwise find this work useful, please cite our paper:
```text
@inproceedings{mehta2024matcha,
title={Matcha-{TTS}: A fast {TTS} architecture with conditional flow matching},
author={Mehta, Shivam and Tu, Ruibo and Beskow, Jonas and Sz{\'e}kely, {\'E}va and Henter, Gustav Eje},
booktitle={Proc. ICASSP},
year={2024}
}
```
## Acknowledgements
Since this code uses [Lightning-Hydra-Template](https://github.com/ashleve/lightning-hydra-template), you have all the powers that come with it.
Other source code we would like to acknowledge:
- [Coqui-TTS](https://github.com/coqui-ai/TTS/tree/dev): For helping me figure out how to make cython binaries pip installable and encouragement
- [Hugging Face Diffusers](https://huggingface.co/): For their awesome diffusers library and its components
- [Grad-TTS](https://github.com/huawei-noah/Speech-Backbones/tree/main/Grad-TTS): For the monotonic alignment search source code
- [torchdyn](https://github.com/DiffEqML/torchdyn): Useful for trying other ODE solvers during research and development
- [labml.ai](https://nn.labml.ai/transformers/rope/index.html): For the RoPE implementation | {"language": ["ky"], "license": "mit", "pipeline_tag": "text-to-speech"} | the-cramer-project/akylai-tts-mini | null | [
"text-to-speech",
"ky",
"arxiv:2210.02747",
"arxiv:2209.03003",
"arxiv:2309.03199",
"license:mit",
"region:us"
] | null | 2024-04-17T08:45:21+00:00 | [
"2210.02747",
"2209.03003",
"2309.03199"
] | [
"ky"
] | TAGS
#text-to-speech #ky #arxiv-2210.02747 #arxiv-2209.03003 #arxiv-2309.03199 #license-mit #region-us
|
<img src="URL alt="image" width="300" height="auto">
# 🇰🇬 AkylAI, TTS for Kyrgyz language
We present to you a model trained in the Kyrgyz language, which has been trained on 13 hours of speech and 7,000 samples, complete with source code and training scripts. The architecture is based on Matcha-TTS.
It's a new approach to non-autoregressive neural TTS, that uses conditional flow matching (similar to rectified flows) to speed up ODE-based speech synthesis. Our method:
- Is probabilistic
- Has compact memory footprint
- Sounds highly natural
- Is very fast to synthesise from
You can try our *AkylAI TTS* by visiting SPACE and read ICASSP 2024 paper for more details.
# Inference
## Run via terminal
It is recommended to start by setting up a virtual environment using 'venv'.
1. Clone this repository and install all modules and dependencies by running the commands:
2. Run with CLI arguments:
- To synthesise from given text, run:
- To synthesise from a file, run:
- Speaking rate
- Sampling temperature
- Euler ODE solver steps
# Train with your own dataset.
## Dataset
For training this model, it is suitable to organize data similar to LJ Speech. Each audio file should be single-channel 16-bit PCM WAV with a sample rate of 22050 Hz. WAV files must have unique names, for example:
They should also be placed at the root of the project directory in a separate folder.
Additionally, the project should include two '.txt' files for Train and Test with metadata for the files. The names of these files can be arbitrary, and their structure is as follows:
Where each line is the FULL path to the file located in the folder with the uploaded audio, and a sentence in its original form with punctuation is written after the delimiter '|'.
It is advisable to clean the text of unnecessary and unwanted characters beforehand. Be careful with abbreviations and contractions.
The text preprocessing does not include functionality for processing abbreviations and contractions; however, the built-in phonemizer can transcribe numbers, but to avoid errors, it is better to write numbers in words.
## Dataset from Hugging Face
If you want to use a dataset that you store on Hugging Face, it would be convenient to use the 'create-dataset' script, which will handle the downloading and all the data preparation, including .txt files with metadata.
Here's what its structure might look like:
Where the most important and mandatory features are:
Where:
'raw_transcription' - this is the text of your sentences in the original version (the requirements are the same as in the previous method).
'audio' - these are audio files with metadata, which are dictionaries with keys:
* 'array' - audio in the form of a 'numpy.ndarray' with a 'float32' data type
* 'path' - file name
* 'sampling_rate' - Sampling rate, which should be no less than 22050 Hz.
Example a row:
## Process by Terminal
* Load this repo and connect to HF
Install this:
Connect to HF (Skip this step if you are not using data from Hugging Face.)
* Load the Data (Skip this step if you are not using data from Hugging Face.)
The script will automatically create a folder with audio recordings and text files with metadata. During the process, enter the HF repository name and the dataset name.
* Go to 'configs/data/akylai<OR YOUR FILE NAME>.yaml' and change
* Generate normalisation statistics with the yaml file of dataset configuration
* Update these values in 'configs/data/URL' under 'data_statistics' key.
* Train
OR
* Checkpoints
Checkpoints will be saved in './Matcha-TTS/logs/train/<MODEL_NAME>/runs/<DATE>_<TIME>/checkpoints'. Unload them or select the last few checkpoints.
# Credits
- Shivam Mehta (GitHub)
- The Cramer Project (Data collection and preprocessing) Official Space
- Amantur Amatov (Expert)
- Timur Turatali (Expert, Research)
- Den Pavlov (Research, Data preprocessing and ML engineering) GitHub
- Ulan Abdurazakov (Environment Developer)
- Nursultan Bakashov (CEO)
information
If you use our code or otherwise find this work useful, please cite our paper:
## Acknowledgements
Since this code uses Lightning-Hydra-Template, you have all the powers that come with it.
Other source code we would like to acknowledge:
- Coqui-TTS: For helping me figure out how to make cython binaries pip installable and encouragement
- Hugging Face Diffusers: For their awesome diffusers library and its components
- Grad-TTS: For the monotonic alignment search source code
- torchdyn: Useful for trying other ODE solvers during research and development
- URL: For the RoPE implementation | [
"# 🇰🇬 AkylAI, TTS for Kyrgyz language\n\nWe present to you a model trained in the Kyrgyz language, which has been trained on 13 hours of speech and 7,000 samples, complete with source code and training scripts. The architecture is based on Matcha-TTS.\nIt's a new approach to non-autoregressive neural TTS, that uses conditional flow matching (similar to rectified flows) to speed up ODE-based speech synthesis. Our method:\n\n- Is probabilistic\n- Has compact memory footprint\n- Sounds highly natural\n- Is very fast to synthesise from\n\nYou can try our *AkylAI TTS* by visiting SPACE and read ICASSP 2024 paper for more details.",
"# Inference",
"## Run via terminal\n\n\nIt is recommended to start by setting up a virtual environment using 'venv'.\n\n1. Clone this repository and install all modules and dependencies by running the commands:\n\n\n\n\n2. Run with CLI arguments:\n\n- To synthesise from given text, run:\n\n\n\n- To synthesise from a file, run:\n\n\n- Speaking rate\n\n\n\n- Sampling temperature\n\n\n\n- Euler ODE solver steps",
"# Train with your own dataset.",
"## Dataset\n\nFor training this model, it is suitable to organize data similar to LJ Speech. Each audio file should be single-channel 16-bit PCM WAV with a sample rate of 22050 Hz. WAV files must have unique names, for example:\n\n\n\n\nThey should also be placed at the root of the project directory in a separate folder.\n\nAdditionally, the project should include two '.txt' files for Train and Test with metadata for the files. The names of these files can be arbitrary, and their structure is as follows:\n\nWhere each line is the FULL path to the file located in the folder with the uploaded audio, and a sentence in its original form with punctuation is written after the delimiter '|'. \nIt is advisable to clean the text of unnecessary and unwanted characters beforehand. Be careful with abbreviations and contractions.\nThe text preprocessing does not include functionality for processing abbreviations and contractions; however, the built-in phonemizer can transcribe numbers, but to avoid errors, it is better to write numbers in words.",
"## Dataset from Hugging Face\n\nIf you want to use a dataset that you store on Hugging Face, it would be convenient to use the 'create-dataset' script, which will handle the downloading and all the data preparation, including .txt files with metadata.\nHere's what its structure might look like:\n\n\n\nWhere the most important and mandatory features are:\n\n\nWhere:\n\n'raw_transcription' - this is the text of your sentences in the original version (the requirements are the same as in the previous method).\n\n'audio' - these are audio files with metadata, which are dictionaries with keys:\n\n* 'array' - audio in the form of a 'numpy.ndarray' with a 'float32' data type\n* 'path' - file name\n* 'sampling_rate' - Sampling rate, which should be no less than 22050 Hz.\n \nExample a row:",
"## Process by Terminal\n\n* Load this repo and connect to HF\n\n\n\nInstall this:\n\n\nConnect to HF (Skip this step if you are not using data from Hugging Face.)\n\n\n\n* Load the Data (Skip this step if you are not using data from Hugging Face.)\n\nThe script will automatically create a folder with audio recordings and text files with metadata. During the process, enter the HF repository name and the dataset name.\n \n\n\n\n* Go to 'configs/data/akylai<OR YOUR FILE NAME>.yaml' and change\n\n\n\n* Generate normalisation statistics with the yaml file of dataset configuration\n\n\n\n* Update these values in 'configs/data/URL' under 'data_statistics' key.\n\n\n\n\n\n* Train\n\n\n\nOR\n\n\n\n\n* Checkpoints\n\nCheckpoints will be saved in './Matcha-TTS/logs/train/<MODEL_NAME>/runs/<DATE>_<TIME>/checkpoints'. Unload them or select the last few checkpoints.",
"# Credits\n\n\n- Shivam Mehta (GitHub)\n- The Cramer Project (Data collection and preprocessing) Official Space\n- Amantur Amatov (Expert)\n- Timur Turatali (Expert, Research)\n- Den Pavlov (Research, Data preprocessing and ML engineering) GitHub\n- Ulan Abdurazakov (Environment Developer)\n- Nursultan Bakashov (CEO)\n\ninformation\n\nIf you use our code or otherwise find this work useful, please cite our paper:",
"## Acknowledgements\n\nSince this code uses Lightning-Hydra-Template, you have all the powers that come with it.\n\nOther source code we would like to acknowledge:\n\n- Coqui-TTS: For helping me figure out how to make cython binaries pip installable and encouragement\n- Hugging Face Diffusers: For their awesome diffusers library and its components\n- Grad-TTS: For the monotonic alignment search source code\n- torchdyn: Useful for trying other ODE solvers during research and development\n- URL: For the RoPE implementation"
] | [
"TAGS\n#text-to-speech #ky #arxiv-2210.02747 #arxiv-2209.03003 #arxiv-2309.03199 #license-mit #region-us \n",
"# 🇰🇬 AkylAI, TTS for Kyrgyz language\n\nWe present to you a model trained in the Kyrgyz language, which has been trained on 13 hours of speech and 7,000 samples, complete with source code and training scripts. The architecture is based on Matcha-TTS.\nIt's a new approach to non-autoregressive neural TTS, that uses conditional flow matching (similar to rectified flows) to speed up ODE-based speech synthesis. Our method:\n\n- Is probabilistic\n- Has compact memory footprint\n- Sounds highly natural\n- Is very fast to synthesise from\n\nYou can try our *AkylAI TTS* by visiting SPACE and read ICASSP 2024 paper for more details.",
"# Inference",
"## Run via terminal\n\n\nIt is recommended to start by setting up a virtual environment using 'venv'.\n\n1. Clone this repository and install all modules and dependencies by running the commands:\n\n\n\n\n2. Run with CLI arguments:\n\n- To synthesise from given text, run:\n\n\n\n- To synthesise from a file, run:\n\n\n- Speaking rate\n\n\n\n- Sampling temperature\n\n\n\n- Euler ODE solver steps",
"# Train with your own dataset.",
"## Dataset\n\nFor training this model, it is suitable to organize data similar to LJ Speech. Each audio file should be single-channel 16-bit PCM WAV with a sample rate of 22050 Hz. WAV files must have unique names, for example:\n\n\n\n\nThey should also be placed at the root of the project directory in a separate folder.\n\nAdditionally, the project should include two '.txt' files for Train and Test with metadata for the files. The names of these files can be arbitrary, and their structure is as follows:\n\nWhere each line is the FULL path to the file located in the folder with the uploaded audio, and a sentence in its original form with punctuation is written after the delimiter '|'. \nIt is advisable to clean the text of unnecessary and unwanted characters beforehand. Be careful with abbreviations and contractions.\nThe text preprocessing does not include functionality for processing abbreviations and contractions; however, the built-in phonemizer can transcribe numbers, but to avoid errors, it is better to write numbers in words.",
"## Dataset from Hugging Face\n\nIf you want to use a dataset that you store on Hugging Face, it would be convenient to use the 'create-dataset' script, which will handle the downloading and all the data preparation, including .txt files with metadata.\nHere's what its structure might look like:\n\n\n\nWhere the most important and mandatory features are:\n\n\nWhere:\n\n'raw_transcription' - this is the text of your sentences in the original version (the requirements are the same as in the previous method).\n\n'audio' - these are audio files with metadata, which are dictionaries with keys:\n\n* 'array' - audio in the form of a 'numpy.ndarray' with a 'float32' data type\n* 'path' - file name\n* 'sampling_rate' - Sampling rate, which should be no less than 22050 Hz.\n \nExample a row:",
"## Process by Terminal\n\n* Load this repo and connect to HF\n\n\n\nInstall this:\n\n\nConnect to HF (Skip this step if you are not using data from Hugging Face.)\n\n\n\n* Load the Data (Skip this step if you are not using data from Hugging Face.)\n\nThe script will automatically create a folder with audio recordings and text files with metadata. During the process, enter the HF repository name and the dataset name.\n \n\n\n\n* Go to 'configs/data/akylai<OR YOUR FILE NAME>.yaml' and change\n\n\n\n* Generate normalisation statistics with the yaml file of dataset configuration\n\n\n\n* Update these values in 'configs/data/URL' under 'data_statistics' key.\n\n\n\n\n\n* Train\n\n\n\nOR\n\n\n\n\n* Checkpoints\n\nCheckpoints will be saved in './Matcha-TTS/logs/train/<MODEL_NAME>/runs/<DATE>_<TIME>/checkpoints'. Unload them or select the last few checkpoints.",
"# Credits\n\n\n- Shivam Mehta (GitHub)\n- The Cramer Project (Data collection and preprocessing) Official Space\n- Amantur Amatov (Expert)\n- Timur Turatali (Expert, Research)\n- Den Pavlov (Research, Data preprocessing and ML engineering) GitHub\n- Ulan Abdurazakov (Environment Developer)\n- Nursultan Bakashov (CEO)\n\ninformation\n\nIf you use our code or otherwise find this work useful, please cite our paper:",
"## Acknowledgements\n\nSince this code uses Lightning-Hydra-Template, you have all the powers that come with it.\n\nOther source code we would like to acknowledge:\n\n- Coqui-TTS: For helping me figure out how to make cython binaries pip installable and encouragement\n- Hugging Face Diffusers: For their awesome diffusers library and its components\n- Grad-TTS: For the monotonic alignment search source code\n- torchdyn: Useful for trying other ODE solvers during research and development\n- URL: For the RoPE implementation"
] |
question-answering | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Dofla/distilbert-squad | null | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-17T08:47:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #distilbert #question-answering #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #distilbert #question-answering #arxiv-1910.09700 #endpoints_compatible #has_space #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | batprem/my-finetuned-model | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-17T08:48:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
translation | transformers | # Gemago 2B Model Card
**Original Gemma Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Model Page On Github**: [Gemago](https://github.com/deveworld/Gemago)
**Resources and Technical Documentation**:
* [Blog(Korean)](https://blog.worldsw.dev/tag/gemago/)
* [Original Google's Gemma-2B](https://huggingface.co/google/gemma-2b)
* [Training Code @ Github: Gemma-EasyLM (Original by Beomi)](https://github.com/deveworld/Gemma-EasyLM/tree/2b)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Original Google, Fine-tuned by DevWorld
## Model Information
Translate English/Korean to Korean/English.
### Description
Gemago is a lightweight English-and-Korean translation model based on Gemma.
### Context Length
Models are trained on a context length of 8192 tokens, which is equivalent to Gemma.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model with transformers
[Open In Colab](https://colab.research.google.com/github/deveworld/Gemago/blob/main/Gemago_2b_Infer.ipynb)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("devworld/gemago-2b")
model = AutoModelForCausalLM.from_pretrained("devworld/gemago-2b")
def gen(text, max_length):
input_ids = tokenizer(text, return_tensors="pt")
outputs = model.generate(**input_ids, max_length=max_length)
return tokenizer.decode(outputs[0])
def e2k(e):
input_text = f"English:\n{e}\n\nKorean:\n"
return gen(input_text, 1024)
def k2e(k):
input_text = f"Korean:\n{k}\n\nEnglish:\n"
return gen(input_text, 1024)
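# Example usage (illustrative; outputs depend on the generation settings above):
# print(e2k("Mr. and Mrs. Dursley were proud to say that they were perfectly normal."))
# print(k2e("나라의 말이 중국과 달라 문자와 서로 통하지 아니하다."))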
``` | {"language": ["ko", "en"], "license": ["apache-2.0", "gemma"], "tags": ["gemma"], "datasets": ["traintogpb/aihub-koen-translation-integrated-base-10m"], "pipeline_tag": "translation", "widget": [{"text": "Korean:\n\ub098\ub77c\uc758 \ub9d0\uc774 \uc911\uad6d\uacfc \ub2ec\ub77c \ubb38\uc790\uc640 \uc11c\ub85c \ud1b5\ud558\uc9c0 \uc544\ub2c8\ud558\ub2e4.\n\nEnglish:\n", "example_title": "K2E"}, {"text": "English:\nMr. and Mrs. Dursley were proud to say that they were perfectly normal.\n\nKorean:\n", "example_title": "E2K"}], "inference": {"parameters": {"max_length": 200}}} | DevWorld/Gemago-2b | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"translation",
"ko",
"en",
"dataset:traintogpb/aihub-koen-translation-integrated-base-10m",
"license:apache-2.0",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:48:55+00:00 | [] | [
"ko",
"en"
] | TAGS
#transformers #safetensors #gemma #text-generation #translation #ko #en #dataset-traintogpb/aihub-koen-translation-integrated-base-10m #license-apache-2.0 #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Gemago 2B Model Card
Original Gemma Model Page: Gemma
Model Page On Github: Gemago
Resources and Technical Documentation:
* Blog(Korean)
* Original Google's Gemma-2B
* Training Code @ Github: Gemma-EasyLM (Original by Beomi)
Terms of Use: Terms
Authors: Original Google, Fine-tuned by DevWorld
## Model Information
Translate English/Korean to Korean/English.
### Description
Gemago is a lightweight English-and-Korean translation model based on Gemma.
### Context Length
Models are trained on a context length of 8192 tokens, which is equivalent to Gemma.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.
#### Running the model with transformers
\n* Original Google's Gemma-2B\n* Training Code @ Github: Gemma-EasyLM (Orginial by Beomi)\n\nTerms of Use: Terms\n\nAuthors: Orginal Google, Fine-tuned by DevWorld",
"## Model Information\n\nTranslate English/Korean to Korean/English.",
"### Description\n\nGemago is a lightweight English-and-Korean translation model based on Gemma.",
"### Context Length\nModels are trained on a context length of 8192 tokens, which is equivalent to Gemma.",
"### Usage\n\nBelow we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.",
"#### Running the model with transformers\n\n* Original Google's Gemma-2B\n* Training Code @ Github: Gemma-EasyLM (Orginial by Beomi)\n\nTerms of Use: Terms\n\nAuthors: Orginal Google, Fine-tuned by DevWorld",
"## Model Information\n\nTranslate English/Korean to Korean/English.",
"### Description\n\nGemago is a lightweight English-and-Korean translation model based on Gemma.",
"### Context Length\nModels are trained on a context length of 8192 tokens, which is equivalent to Gemma.",
"### Usage\n\nBelow we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.",
"#### Running the model with transformers\n
**Model Page On Github**: [Gemago](https://github.com/deveworld/Gemago)
**Resources and Technical Documentation**:
* [Blog(Korean)](https://blog.worldsw.dev/tag/gemago/)
* [Original Google's Gemma-7B](https://huggingface.co/google/gemma-7b)
* [Training Code @ Github: Gemma-EasyLM (Original by Beomi)](https://github.com/deveworld/Gemma-EasyLM/)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Original Google, Fine-tuned by DevWorld
## Model Information
Translate English/Korean to Korean/English.
### Description
Gemago is a lightweight English-and-Korean translation model based on Gemma.
### Context Length
Models are trained on a context length of 8192 tokens, which is equivalent to Gemma.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model with transformers
[Open In Colab](https://colab.research.google.com/github/deveworld/Gemago/blob/main/Gemago_7b_Infer.ipynb)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("devworld/gemago-7b")
model = AutoModelForCausalLM.from_pretrained("devworld/gemago-7b")
def gen(text, max_length):
input_ids = tokenizer(text, return_tensors="pt")
outputs = model.generate(**input_ids, max_length=max_length)
return tokenizer.decode(outputs[0])
def e2k(e):
input_text = f"English:\n{e}\n\nKorean:\n"
return gen(input_text, 1024)
def k2e(k):
input_text = f"Korean:\n{k}\n\nEnglish:\n"
return gen(input_text, 1024)
``` | {"language": ["ko", "en"], "license": ["apache-2.0", "gemma"], "tags": ["gemma"], "datasets": ["traintogpb/aihub-koen-translation-integrated-base-10m"], "pipeline_tag": "translation"} | DevWorld/Gemago-7b | null | [
"gemma",
"translation",
"ko",
"en",
"dataset:traintogpb/aihub-koen-translation-integrated-base-10m",
"license:apache-2.0",
"license:gemma",
"region:us"
] | null | 2024-04-17T08:51:20+00:00 | [] | [
"ko",
"en"
] | TAGS
#gemma #translation #ko #en #dataset-traintogpb/aihub-koen-translation-integrated-base-10m #license-apache-2.0 #license-gemma #region-us
| # Gemago 7B Model Card
Original Gemma Model Page: Gemma
Model Page On Github: Gemago
Resources and Technical Documentation:
* Blog(Korean)
* Original Google's Gemma-7B
* Training Code @ Github: Gemma-EasyLM (Original by Beomi)
Terms of Use: Terms
Authors: Original Google, Fine-tuned by DevWorld
## Model Information
Translate English/Korean to Korean/English.
### Description
Gemago is a lightweight English-and-Korean translation model based on Gemma.
### Context Length
Models are trained on a context length of 8192 tokens, which is equivalent to Gemma.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.
#### Running the model with transformers
\n* Original Google's Gemma-7B\n* Training Code @ Github: Gemma-EasyLM (Orginial by Beomi)\n\nTerms of Use: Terms\n\nAuthors: Orginal Google, Fine-tuned by DevWorld",
"## Model Information\n\nTranslate English/Korean to Korean/English.",
"### Description\n\nGemago is a lightweight English-and-Korean translation model based on Gemma.",
"### Context Length\nModels are trained on a context length of 8192 tokens, which is equivalent to Gemma.",
"### Usage\n\nBelow we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.",
"#### Running the model with transformers\n\n* Original Google's Gemma-7B\n* Training Code @ Github: Gemma-EasyLM (Orginial by Beomi)\n\nTerms of Use: Terms\n\nAuthors: Orginal Google, Fine-tuned by DevWorld",
"## Model Information\n\nTranslate English/Korean to Korean/English.",
"### Description\n\nGemago is a lightweight English-and-Korean translation model based on Gemma.",
"### Context Length\nModels are trained on a context length of 8192 tokens, which is equivalent to Gemma.",
"### Usage\n\nBelow we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.",
"#### Running the model with transformers\n
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distilgpt2 - bnb 4bits
- Model creator: https://huggingface.co/distilbert/
- Original model: https://huggingface.co/distilbert/distilgpt2/
Original model description:
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2_eq_emissions: 149200
---
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
## Model Details
- **Developed by:** Hugging Face
- **Model type:** Transformer-based Language Model
- **Language:** English
- **License:** Apache 2.0
- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
{'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
{'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
{'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
{'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
```
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(42)
>>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = TFGPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
</details>
## Training Data
DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
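To make the idea concrete, below is an illustrative PyTorch sketch of a Hinton-style distillation objective: the student is pushed to match the teacher's temperature-softened distribution while still fitting the hard next-token labels. The `temperature` and `alpha` values are placeholders, and the actual DistilGPT2 recipe described in Sanh et al. (2019) combines additional loss terms.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    vocab = student_logits.size(-1)
    # Soft-target term: KL between the student's and teacher's softened distributions.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student.view(-1, vocab), soft_targets.view(-1, vocab),
                  reduction="batchmean") * temperature ** 2
    # Hard-target term: ordinary next-token cross-entropy against the data.
    ce = F.cross_entropy(student_logits.view(-1, vocab), labels.view(-1))
    return alpha * kd + (1 - alpha) * ce
```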
## Evaluation Results
The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
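For reference, test-set perplexity of this kind is simply the exponential of the average next-token loss; a rough sketch with transformers is shown below (illustrative only — the reported figures come from the authors' own evaluation setup over WikiText-103):

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2").eval()

text = "The quick brown fox jumps over the lazy dog."  # stand-in for a WikiText-103 passage
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # label shifting is handled internally
print("perplexity:", math.exp(out.loss.item()))
```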
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** 8 16GB V100
- **Hours used:** 168 (1 week)
- **Cloud Provider:** Azure
- **Compute Region:** unavailable, assumed East US for calculations
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
## Citation
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
## Glossary
- <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
<a href="https://huggingface.co/exbert/?model=distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {} | RichardErkhov/distilbert_-_distilgpt2-4bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-17T08:52:53+00:00 | [
"1910.01108",
"2201.08542",
"2203.12574",
"1910.09700",
"1503.02531"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
distilgpt2 - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2_eq_emissions: 149200
---
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.
## Model Details
- Developed by: Hugging Face
- Model type: Transformer-based Language Model
- Language: English
- License: Apache 2.0
- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.
- Resources for more information: See this repository for more about Distil\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
As the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
And in TensorFlow:
</details>
## Training Data
DistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).
## Evaluation Results
The creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
## Environmental Impact
*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- Hardware Type: 8 16GB V100
- Hours used: 168 (1 week)
- Cloud Provider: Azure
- Compute Region: unavailable, assumed East US for calculations
- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
## Glossary
- <a name="knowledge-distillation">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).
<a href="URL
<img width="300px" src="URL
</a>
| [
"# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.",
"## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.",
"## Uses, Limitations and Risks",
"#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>",
"#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. \n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.",
"### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>",
"## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.",
"## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).",
"## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).",
"## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2",
"## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.",
"## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.",
"## Uses, Limitations and Risks",
"#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>",
"#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. \n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.",
"### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>",
"## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.",
"## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).",
"## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).",
"## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2",
"## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
text-generation | transformers |
# stablelm-3b-4e1t-dare1
stablelm-3b-4e1t-dare1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [stabilityai/japanese-stablelm-3b-4e1t-instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct)
* [stabilityai/japanese-stablelm-3b-4e1t-instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 30]
model: stabilityai/japanese-stablelm-3b-4e1t-instruct
parameters:
density: [1, 0.5, 0.1]
weight: 0.5
- layer_range: [2, 32]
model: stabilityai/japanese-stablelm-3b-4e1t-instruct
parameters:
density: [0.1, 0.5, 1]
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: dare_ties
base_model: stabilityai/japanese-stablelm-3b-4e1t-instruct
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/stablelm-3b-4e1t-dare1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "stabilityai/japanese-stablelm-3b-4e1t-instruct"], "base_model": ["stabilityai/japanese-stablelm-3b-4e1t-instruct", "stabilityai/japanese-stablelm-3b-4e1t-instruct"]} | aipib/stablelm-3b-4e1t-dare1 | null | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"stabilityai/japanese-stablelm-3b-4e1t-instruct",
"custom_code",
"base_model:stabilityai/japanese-stablelm-3b-4e1t-instruct",
"autotrain_compatible",
"region:us"
] | null | 2024-04-17T08:53:58+00:00 | [] | [] | TAGS
#transformers #safetensors #stablelm_epoch #text-generation #merge #mergekit #lazymergekit #stabilityai/japanese-stablelm-3b-4e1t-instruct #custom_code #base_model-stabilityai/japanese-stablelm-3b-4e1t-instruct #autotrain_compatible #region-us
|
# stablelm-3b-4e1t-dare1
stablelm-3b-4e1t-dare1 is a merge of the following models using LazyMergekit:
* stabilityai/japanese-stablelm-3b-4e1t-instruct
* stabilityai/japanese-stablelm-3b-4e1t-instruct
## Configuration
## Usage
| [
"# stablelm-3b-4e1t-dare1\n\nstablelm-3b-4e1t-dare1 is a merge of the following models using LazyMergekit:\n* stabilityai/japanese-stablelm-3b-4e1t-instruct\n* stabilityai/japanese-stablelm-3b-4e1t-instruct",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #stablelm_epoch #text-generation #merge #mergekit #lazymergekit #stabilityai/japanese-stablelm-3b-4e1t-instruct #custom_code #base_model-stabilityai/japanese-stablelm-3b-4e1t-instruct #autotrain_compatible #region-us \n",
"# stablelm-3b-4e1t-dare1\n\nstablelm-3b-4e1t-dare1 is a merge of the following models using LazyMergekit:\n* stabilityai/japanese-stablelm-3b-4e1t-instruct\n* stabilityai/japanese-stablelm-3b-4e1t-instruct",
"## Configuration",
"## Usage"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-JaspervanLeuven/controlnet5k
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
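Until the authors fill in their own snippet, a minimal sketch of running these weights with diffusers, assuming the Hub id `JaspervanLeuven/controlnet2.5k` from this card's metadata and a conditioning image that matches the (unspecified) conditioning type the weights were trained on:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "JaspervanLeuven/controlnet2.5k", torch_dtype=torch.float16  # repo id from this card
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

condition = load_image("https://example.com/conditioning.png")  # hypothetical conditioning image
image = pipe("a photo of a modern living room", image=condition, num_inference_steps=30).images[0]
image.save("output.png")
```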
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true} | JaspervanLeuven/controlnet2.5k | null | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-17T08:54:09+00:00 | [] | [] | TAGS
#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us
|
# controlnet-JaspervanLeuven/controlnet5k
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# controlnet-JaspervanLeuven/controlnet5k\n\nThese are controlnet weights trained on runwayml/stable-diffusion-v1-5 with new type of conditioning.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us \n",
"# controlnet-JaspervanLeuven/controlnet5k\n\nThese are controlnet weights trained on runwayml/stable-diffusion-v1-5 with new type of conditioning.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distilgpt2 - bnb 8bits
- Model creator: https://huggingface.co/distilbert/
- Original model: https://huggingface.co/distilbert/distilgpt2/
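As with the 4-bit variant above, the card has no loading snippet for the quantized weights; a minimal sketch, assuming the bitsandbytes 8-bit config is stored with the checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/distilbert_-_distilgpt2-8bits"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # needs bitsandbytes + CUDA
print(f"~{model.get_memory_footprint() / 1e6:.0f} MB of weights")  # rough size check
```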
Original model description:
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2_eq_emissions: 149200
---
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
## Model Details
- **Developed by:** Hugging Face
- **Model type:** Transformer-based Language Model
- **Language:** English
- **License:** Apache 2.0
- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
{'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
{'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
{'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
{'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
```
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(42)
>>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = TFGPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
</details>
## Training Data
DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
## Evaluation Results
The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** 8 16GB V100
- **Hours used:** 168 (1 week)
- **Cloud Provider:** Azure
- **Compute Region:** unavailable, assumed East US for calculations
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
## Citation
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
## Glossary
- <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
<a href="https://huggingface.co/exbert/?model=distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {} | RichardErkhov/distilbert_-_distilgpt2-8bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-17T08:54:41+00:00 | [
"1910.01108",
"2201.08542",
"2203.12574",
"1910.09700",
"1503.02531"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
distilgpt2 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2_eq_emissions: 149200
---
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.
## Model Details
- Developed by: Hugging Face
- Model type: Transformer-based Language Model
- Language: English
- License: Apache 2.0
- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.
- Resources for more information: See this repository for more about Distil\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
As the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
And in TensorFlow:
</details>
## Training Data
DistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).
## Evaluation Results
The creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
## Environmental Impact
*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- Hardware Type: 8 16GB V100
- Hours used: 168 (1 week)
- Cloud Provider: Azure
- Compute Region: unavailable, assumed East US for calculations
- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
## Glossary
- <a name="knowledge-distillation">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).
<a href="URL
<img width="300px" src="URL
</a>
| [
"# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.",
"## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.",
"## Uses, Limitations and Risks",
"#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>",
"#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. \n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.",
"### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>",
"## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.",
"## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).",
"## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).",
"## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2",
"## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.",
"## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.",
"## Uses, Limitations and Risks",
"#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>",
"#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. \n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.",
"### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>",
"## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.",
"## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).",
"## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).",
"## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2",
"## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-medium - bnb 4bits
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-medium/
Original model description:
---
language: en
license: mit
---
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English-language text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"},
{'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"},
{'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a military'},
{'generated_text': 'The man worked as a salesman in Mexico and eventually'},
{'generated_text': 'The man worked as a supervisor at the department for'},
{'generated_text': 'The man worked as a cleaner for the same corporation'},
{'generated_text': 'The man worked as a barman and was involved'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a social worker in a children'},
{'generated_text': 'The woman worked as a marketing manager, and her'},
{'generated_text': 'The woman worked as a customer service agent in a'},
{'generated_text': 'The woman worked as a cleaner for the same corporation'},
{'generated_text': 'The woman worked as a barista and was involved'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` use only the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
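To make the shifted-target objective concrete, here is a minimal sketch using the `transformers` causal-LM head; when `labels` equals `input_ids`, the library shifts the labels right by one position internally so each token is scored against the next one:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')

enc = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors='pt')
with torch.no_grad():
    # Labels are the inputs themselves; the model shifts them right by one token
    # and applies a causal mask so position i never attends to positions > i.
    out = model(**enc, labels=enc['input_ids'])
print(out.loss)  # mean negative log-likelihood of each next token
```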
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
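A quick way to inspect the byte-level BPE vocabulary and context length (the printed token pieces are illustrative):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
print(tokenizer.vocab_size)          # 50257 byte-level BPE tokens
print(tokenizer.model_max_length)    # 1024-token input sequences
print(tokenizer.tokenize("Hello, world!"))  # e.g. ['Hello', ',', 'Ġworld', '!']
```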
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results... using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
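This is not the paper's evaluation harness (which also applies invertible de-tokenizers and per-canonical-unit normalization), but a minimal sketch of the underlying quantity: the average negative log-probability per predicted token and its exponential, the perplexity:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')
model.eval()

enc = tokenizer("Language models assign probabilities to sequences of tokens.", return_tensors='pt')
with torch.no_grad():
    nll = model(**enc, labels=enc['input_ids']).loss  # mean negative log-prob per token
print(nll.item(), torch.exp(nll).item())  # the second number is the perplexity
```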
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Model | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:------------:|:-----:|:-----:|:-----:|:----:|:-----:|:-----:|:----:|:----:|:-----:|:-----:|
| GPT-2 Medium | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-medium-4bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-17T08:55:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-medium - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 Medium
============
Model Details
-------------
Model Description: GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English-language text using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT2, GPT2-Large and GPT2-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token 'i' use only the inputs from '1' to 'i' and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results... using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distilgpt2 - GGUF
- Model creator: https://huggingface.co/distilbert/
- Original model: https://huggingface.co/distilbert/distilgpt2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [distilgpt2.Q2_K.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q2_K.gguf) | Q2_K | 0.06GB |
| [distilgpt2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.IQ3_XS.gguf) | IQ3_XS | 0.06GB |
| [distilgpt2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [distilgpt2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [distilgpt2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [distilgpt2.Q3_K.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q3_K.gguf) | Q3_K | 0.07GB |
| [distilgpt2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [distilgpt2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [distilgpt2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [distilgpt2.Q4_0.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q4_0.gguf) | Q4_0 | 0.08GB |
| [distilgpt2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [distilgpt2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [distilgpt2.Q4_K.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q4_K.gguf) | Q4_K | 0.08GB |
| [distilgpt2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [distilgpt2.Q4_1.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q4_1.gguf) | Q4_1 | 0.08GB |
| [distilgpt2.Q5_0.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q5_0.gguf) | Q5_0 | 0.08GB |
| [distilgpt2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q5_K_S.gguf) | Q5_K_S | 0.08GB |
| [distilgpt2.Q5_K.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q5_K.gguf) | Q5_K | 0.09GB |
| [distilgpt2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [distilgpt2.Q5_1.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q5_1.gguf) | Q5_1 | 0.09GB |
| [distilgpt2.Q6_K.gguf](https://huggingface.co/RichardErkhov/distilbert_-_distilgpt2-gguf/blob/main/distilgpt2.Q6_K.gguf) | Q6_K | 0.09GB |
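The files above can be run with llama.cpp-compatible runtimes; a minimal sketch using the `llama-cpp-python` bindings is shown below (the file name is one of the quants listed in the table and assumes you have downloaded it locally; install with `pip install llama-cpp-python`):

```python
from llama_cpp import Llama

# Path assumes the Q4_K_M file from the table above has been downloaded locally.
llm = Llama(model_path="distilgpt2.Q4_K_M.gguf")
out = llm("Hello, I'm a language model,", max_tokens=32, temperature=0.8)
print(out["choices"][0]["text"])
```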
Original model description:
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2_eq_emissions: 149200
---
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
## Model Details
- **Developed by:** Hugging Face
- **Model type:** Transformer-based Language Model
- **Language:** English
- **License:** Apache 2.0
- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
{'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
{'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
{'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
{'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
```
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(42)
>>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = TFGPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
</details>
## Training Data
DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
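The distillation objective itself is not spelled out in this card. Following Sanh et al. (2019), it combines the ordinary language-modeling loss with a KL-divergence term between the teacher's and the student's softened output distributions (DistilBERT additionally uses a cosine embedding loss). Below is a minimal sketch of such a combined loss; the temperature, weighting, and function names are illustrative and not taken from the actual training code:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    vocab = student_logits.size(-1)
    # Soft-target term: KL divergence between the softened teacher and student distributions
    kd = F.kl_div(
        F.log_softmax(student_logits.view(-1, vocab) / T, dim=-1),
        F.softmax(teacher_logits.view(-1, vocab) / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary next-token cross-entropy against the reference labels
    ce = F.cross_entropy(student_logits.view(-1, vocab), labels.view(-1), ignore_index=-100)
    return alpha * kd + (1.0 - alpha) * ce
```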
## Evaluation Results
The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
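Perplexities like these can be approximated with a sliding-window evaluation over the WikiText-103 test split. The sketch below follows the standard 🤗 Transformers perplexity recipe; note that it evaluates the released checkpoint zero-shot (the 21.1 figure above was obtained after fine-tuning on the train set) and skips the paper's de-tokenization, so the number it prints will not match exactly:

```python
import torch
from datasets import load_dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
model = GPT2LMHeadModel.from_pretrained("distilgpt2").to(device).eval()
tokenizer = GPT2TokenizerFast.from_pretrained("distilgpt2")

test = load_dataset("wikitext", "wikitext-103-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_length, stride, seq_len = 1024, 512, encodings.input_ids.size(1)
nlls, prev_end = [], 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end                 # tokens newly scored in this window
    input_ids = encodings.input_ids[:, begin:end].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100          # mask the overlapping context tokens
    with torch.no_grad():
        nlls.append(model(input_ids, labels=target_ids).loss)
    prev_end = end
    if end == seq_len:
        break

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```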
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** 8 16GB V100
- **Hours used:** 168 (1 week)
- **Cloud Provider:** Azure
- **Compute Region:** unavailable, assumed East US for calculations
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
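As a rough sanity check on that figure (a back-of-the-envelope reproduction of the calculator's inputs, not measured values; the 300 W board power and grid intensity below are assumptions):

```python
# Back-of-the-envelope reproduction of the MLCO2 estimate (all factors are assumptions)
power_kw = 8 * 0.300       # 8 x V100 at ~300 W board power each
hours = 168                # 1 week
kg_co2_per_kwh = 0.37      # approximate East US grid intensity used by the calculator
print(power_kw * hours * kg_co2_per_kwh)   # ~149 kg CO2eq, consistent with the reported 149.2
```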
## Citation
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
## Glossary
- <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
<a href="https://huggingface.co/exbert/?model=distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {} | RichardErkhov/distilbert_-_distilgpt2-gguf | null | [
"gguf",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"region:us"
] | null | 2024-04-17T08:56:13+00:00 | [
"1910.01108",
"2201.08542",
"2203.12574",
"1910.09700",
"1503.02531"
] | [] | TAGS
#gguf #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
distilgpt2 - GGUF
* Model creator: URL
* Original model: URL
| Name | Quant method | Size |
|------|--------------|------|
| distilgpt2.Q2_K.gguf | Q2_K | 0.06GB |
| distilgpt2.IQ3_XS.gguf | IQ3_XS | 0.06GB |
| distilgpt2.IQ3_S.gguf | IQ3_S | 0.07GB |
| distilgpt2.Q3_K_S.gguf | Q3_K_S | 0.07GB |
| distilgpt2.IQ3_M.gguf | IQ3_M | 0.07GB |
| distilgpt2.Q3_K.gguf | Q3_K | 0.07GB |
| distilgpt2.Q3_K_M.gguf | Q3_K_M | 0.07GB |
| distilgpt2.Q3_K_L.gguf | Q3_K_L | 0.07GB |
| distilgpt2.IQ4_XS.gguf | IQ4_XS | 0.07GB |
| distilgpt2.Q4_0.gguf | Q4_0 | 0.08GB |
| distilgpt2.IQ4_NL.gguf | IQ4_NL | 0.08GB |
| distilgpt2.Q4_K_S.gguf | Q4_K_S | 0.08GB |
| distilgpt2.Q4_K.gguf | Q4_K | 0.08GB |
| distilgpt2.Q4_K_M.gguf | Q4_K_M | 0.08GB |
| distilgpt2.Q4_1.gguf | Q4_1 | 0.08GB |
| distilgpt2.Q5_0.gguf | Q5_0 | 0.08GB |
| distilgpt2.Q5_K_S.gguf | Q5_K_S | 0.08GB |
| distilgpt2.Q5_K.gguf | Q5_K | 0.09GB |
| distilgpt2.Q5_K_M.gguf | Q5_K_M | 0.09GB |
| distilgpt2.Q5_1.gguf | Q5_1 | 0.09GB |
| distilgpt2.Q6_K.gguf | Q6_K | 0.09GB |
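These files are plain GGUF checkpoints, so they can be run with llama.cpp-based tooling. A minimal sketch using the `llama-cpp-python` bindings — it assumes you have downloaded one of the files above (the Q4_K_M variant is used here purely as an example) and that your llama.cpp build is recent enough to support the GPT-2 architecture:

```python
from llama_cpp import Llama

# Path to a locally downloaded quantized file from the table above (illustrative choice)
llm = Llama(model_path="./distilgpt2.Q4_K_M.gguf")

out = llm("Hello, I'm a language model,", max_tokens=32, temperature=0.8)
print(out["choices"][0]["text"])
```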
Original model description:
---------------------------
language: en
tags:
* exbert
license: apache-2.0
datasets:
* openwebtext
model-index:
* name: distilgpt2
results:
+ task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2\_eq\_emissions: 149200
--------------------------
DistilGPT2
==========
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.
Model Details
-------------
* Developed by: Hugging Face
* Model type: Transformer-based Language Model
* Language: English
* License: Apache 2.0
* Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.
* Resources for more information: See this repository for more about Distil\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.
Uses, Limitations and Risks
---------------------------
#### Limitations and Risks
Click to expand
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
As the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
* Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
* Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
* Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
>
> * *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> * *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> * *Entertainment: Creation of games, chat bots, and amusing generations.*
>
>
>
Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 model card:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
>
>
>
### How to Get Started with the Model
Click to expand
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
And in TensorFlow:
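The code snippets were stripped from this plain-text rendering; below is a minimal sketch of the three usages referenced above (seeded pipeline generation, then feature extraction in PyTorch and TensorFlow), mirroring the standard Transformers API and assuming PyTorch and TensorFlow are installed:

```python
from transformers import pipeline, set_seed, GPT2Tokenizer, GPT2Model, TFGPT2Model

# Seeded text generation with a pipeline, for reproducibility
generator = pipeline("text-generation", model="distilgpt2")
set_seed(42)
print(generator("Hello, I'm a language model", max_length=20, num_return_sequences=5))

# Feature extraction in PyTorch
tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
pt_model = GPT2Model.from_pretrained("distilgpt2")
pt_features = pt_model(**tokenizer("Replace me by any text you'd like.", return_tensors="pt"))

# Feature extraction in TensorFlow
tf_model = TFGPT2Model.from_pretrained("distilgpt2")
tf_features = tf_model(tokenizer("Replace me by any text you'd like.", return_tensors="tf"))
```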
Training Data
-------------
DistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.
Training Procedure
------------------
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).
Evaluation Results
------------------
The creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
Environmental Impact
--------------------
*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
* Hardware Type: 8 16GB V100
* Hours used: 168 (1 week)
* Cloud Provider: Azure
* Compute Region: unavailable, assumed East US for calculations
* Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
Glossary
--------
* Knowledge Distillation: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).
<a href="URL
<img width="300px" src="URL
| [
"#### Limitations and Risks\n\n\n\nClick to expand\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:\n\n\n* Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n* Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).\n* Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.",
"#### Potential Uses\n\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.\n\n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:\n\n\n\n> \n> * *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> * *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> * *Entertainment: Creation of games, chat bots, and amusing generations.*\n> \n> \n> \n\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\n\nOpenAI states in the GPT-2 model card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.\n> \n> \n>",
"### How to Get Started with the Model\n\n\n\nClick to expand\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nAnd in TensorFlow:\n\n\n\nTraining Data\n-------------\n\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.\n\n\nTraining Procedure\n------------------\n\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).\n\n\nEvaluation Results\n------------------\n\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).\n\n\nEnvironmental Impact\n--------------------\n\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n\n* Hardware Type: 8 16GB V100\n* Hours used: 168 (1 week)\n* Cloud Provider: Azure\n* Compute Region: unavailable, assumed East US for calculations\n* Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2\n\n\nGlossary\n--------\n\n\n* Knowledge Distillation: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#gguf #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #region-us \n",
"#### Limitations and Risks\n\n\n\nClick to expand\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:\n\n\n* Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n* Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).\n* Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.",
"#### Potential Uses\n\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.\n\n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:\n\n\n\n> \n> * *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> * *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> * *Entertainment: Creation of games, chat bots, and amusing generations.*\n> \n> \n> \n\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\n\nOpenAI states in the GPT-2 model card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.\n> \n> \n>",
"### How to Get Started with the Model\n\n\n\nClick to expand\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nAnd in TensorFlow:\n\n\n\nTraining Data\n-------------\n\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.\n\n\nTraining Procedure\n------------------\n\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).\n\n\nEvaluation Results\n------------------\n\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).\n\n\nEnvironmental Impact\n--------------------\n\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n\n* Hardware Type: 8 16GB V100\n* Hours used: 168 (1 week)\n* Cloud Provider: Azure\n* Compute Region: unavailable, assumed East US for calculations\n* Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2\n\n\nGlossary\n--------\n\n\n* Knowledge Distillation: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-medium - bnb 8bits
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-medium/
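This repository stores the GPT-2 Medium weights already quantized to 8-bit with bitsandbytes. A minimal loading sketch — it assumes `bitsandbytes` and a CUDA GPU are available, that `from_pretrained` picks up the saved quantization config automatically, and that the tokenizer can be taken from the original `openai-community/gpt2-medium` repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-medium")
model = AutoModelForCausalLM.from_pretrained(
    "RichardErkhov/openai-community_-_gpt2-medium-8bits",  # this repo (8-bit weights)
    device_map="auto",
)

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```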
Original model description:
---
language: en
license: mit
---
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English-language text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"},
{'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"},
{'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a military'},
{'generated_text': 'The man worked as a salesman in Mexico and eventually'},
{'generated_text': 'The man worked as a supervisor at the department for'},
{'generated_text': 'The man worked as a cleaner for the same corporation'},
{'generated_text': 'The man worked as a barman and was involved'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a social worker in a children'},
{'generated_text': 'The woman worked as a marketing manager, and her'},
{'generated_text': 'The woman worked as a customer service agent in a'},
{'generated_text': 'The woman worked as a cleaner for the same corporation'},
{'generated_text': 'The woman worked as a barista and was involved'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
in at 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
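Concretely, the "shift by one token" setup means the labels are simply the input ids; with the Transformers API you can pass `labels=input_ids` and the shift and causal masking are handled internally. A minimal sketch (the example text and checkpoint choice are illustrative):

```python
import torch
from transformers import GPT2TokenizerFast, GPT2LMHeadModel

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

text = "Language models are trained to guess the next word in sentences."
enc = tokenizer(text, return_tensors="pt")
print(enc.input_ids.shape)           # byte-level BPE ids from the 50,257-token vocabulary

# For causal LM training, labels are the inputs themselves; the model shifts them
# internally so the prediction for token i only uses the inputs from 1 to i.
with torch.no_grad():
    out = model(**enc, labels=enc.input_ids)
print(out.loss)                      # average next-token cross-entropy
```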
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-medium-8bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-17T08:56:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-medium - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 Medium
============
Model Details
-------------
Model Description: GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English-language text using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT2, GPT2-Large and GPT2-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
in at 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the
predictions for the token 'i' only use the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
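The repository tags indicate a `StableDiffusionXLPipeline`, so a minimal, hedged loading sketch might look like this (the repo id comes from this card's metadata; dtype, device, and prompt choices are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL pipeline from this repository (fp16 on GPU is illustrative).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/MatrixHentaiPony_v143", torch_dtype=torch.float16
).to("cuda")

image = pipe("a scenic mountain landscape at sunset").images[0]
image.save("output.png")
```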
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "diffusers"} | Niggendar/MatrixHentaiPony_v143 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-04-17T08:57:05+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-medium - GGUF
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-medium/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-medium.Q2_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q2_K.gguf) | Q2_K | 0.16GB |
| [gpt2-medium.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.IQ3_XS.gguf) | IQ3_XS | 0.18GB |
| [gpt2-medium.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.IQ3_S.gguf) | IQ3_S | 0.19GB |
| [gpt2-medium.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q3_K_S.gguf) | Q3_K_S | 0.19GB |
| [gpt2-medium.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.IQ3_M.gguf) | IQ3_M | 0.2GB |
| [gpt2-medium.Q3_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q3_K.gguf) | Q3_K | 0.21GB |
| [gpt2-medium.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q3_K_M.gguf) | Q3_K_M | 0.21GB |
| [gpt2-medium.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q3_K_L.gguf) | Q3_K_L | 0.23GB |
| [gpt2-medium.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.IQ4_XS.gguf) | IQ4_XS | 0.22GB |
| [gpt2-medium.Q4_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q4_0.gguf) | Q4_0 | 0.23GB |
| [gpt2-medium.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.IQ4_NL.gguf) | IQ4_NL | 0.23GB |
| [gpt2-medium.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q4_K_S.gguf) | Q4_K_S | 0.23GB |
| [gpt2-medium.Q4_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q4_K.gguf) | Q4_K | 0.25GB |
| [gpt2-medium.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q4_K_M.gguf) | Q4_K_M | 0.25GB |
| [gpt2-medium.Q4_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q4_1.gguf) | Q4_1 | 0.25GB |
| [gpt2-medium.Q5_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q5_0.gguf) | Q5_0 | 0.27GB |
| [gpt2-medium.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q5_K_S.gguf) | Q5_K_S | 0.27GB |
| [gpt2-medium.Q5_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q5_K.gguf) | Q5_K | 0.29GB |
| [gpt2-medium.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q5_K_M.gguf) | Q5_K_M | 0.29GB |
| [gpt2-medium.Q5_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q5_1.gguf) | Q5_1 | 0.29GB |
| [gpt2-medium.Q6_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-medium-gguf/blob/main/gpt2-medium.Q6_K.gguf) | Q6_K | 0.31GB |
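Any of the quants above can be run locally with `llama-cpp-python`; a minimal sketch (the file name is whichever quant you downloaded, and the sampling settings are illustrative):

```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file (Q4_K_M shown as an example).
llm = Llama(model_path="gpt2-medium.Q4_K_M.gguf")

output = llm("Hello, I'm a language model,", max_tokens=30, temperature=0.8)
print(output["choices"][0]["text"])
```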
Original model description:
---
language: en
license: mit
---
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"},
{'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"},
{'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a military'},
{'generated_text': 'The man worked as a salesman in Mexico and eventually'},
{'generated_text': 'The man worked as a supervisor at the department for'},
{'generated_text': 'The man worked as a cleaner for the same corporation'},
{'generated_text': 'The man worked as a barman and was involved'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a social worker in a children'},
{'generated_text': 'The woman worked as a marketing manager, and her'},
{'generated_text': 'The woman worked as a customer service agent in a'},
{'generated_text': 'The woman worked as a cleaner for the same corporation'},
{'generated_text': 'The woman worked as a barista and was involved'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
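In the 🤗 Transformers library this objective corresponds to passing the input ids as their own labels; the shift happens inside the model, so the returned loss is the average negative log-likelihood of each next token. A minimal sketch:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')

encoded = tokenizer("Hello, I'm a language model,", return_tensors='pt')
# For causal language modeling the labels are the input ids themselves;
# the model shifts them one position internally, so the prediction for
# token `i` only ever sees tokens `1` to `i`.
outputs = model(**encoded, labels=encoded['input_ids'])
print(outputs.loss)  # mean negative log-likelihood per predicted token
```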
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
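Concretely, the reported numbers are simple transforms of the average negative log-probability per canonical unit; a hedged sketch of that arithmetic (`total_log_prob` and `n_units` are hypothetical placeholders for values measured on a dataset):

```python
import math

def perplexity(total_log_prob, n_units):
    # total_log_prob: summed natural-log probability assigned to the dataset
    # n_units: number of canonical prediction units (e.g., words for PPL)
    return math.exp(-total_log_prob / n_units)

def bits_per_unit(total_log_prob, n_units):
    # Bits per byte (BPB) or bits per character (BPC), depending on the unit.
    return -total_log_prob / (n_units * math.log(2))
```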
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-medium-gguf | null | [
"gguf",
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-17T08:57:16+00:00 | [
"1910.09700"
] | [] | TAGS
#gguf #arxiv-1910.09700 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-medium - GGUF
* Model creator: URL
* Original model: URL
Name: gpt2-medium.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.16GB
Name: gpt2-medium.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.18GB
Name: gpt2-medium.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.19GB
Name: gpt2-medium.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.19GB
Name: gpt2-medium.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.2GB
Name: gpt2-medium.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.21GB
Name: gpt2-medium.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.21GB
Name: gpt2-medium.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.23GB
Name: gpt2-medium.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.22GB
Name: gpt2-medium.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.23GB
Name: gpt2-medium.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.23GB
Name: gpt2-medium.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.23GB
Name: gpt2-medium.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.25GB
Name: gpt2-medium.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.25GB
Name: gpt2-medium.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.25GB
Name: gpt2-medium.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.27GB
Name: gpt2-medium.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.27GB
Name: gpt2-medium.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.29GB
Name: gpt2-medium.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.29GB
Name: gpt2-medium.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.29GB
Name: gpt2-medium.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.31GB
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 Medium
============
Model Details
-------------
Model Description: GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English text using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT2, GPT2-Large and GPT2-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the
predictions for token 'i' only use the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#gguf #arxiv-1910.09700 #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-generation | transformers | # nbeerbower/HeroBophades-2x7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [HeroBophades-2x7B](https://huggingface.co/nbeerbower/HeroBophades-2x7B)

## Model Summary
This is an experimental Mixture of Experts LLM built with [mergekit](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md).
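A minimal usage sketch (hedged: assumes a recent `transformers` with AWQ support, e.g. the `autoawq` package, installed; the prompt and generation settings are illustrative, and since the repo is tagged `chatml`, production use should go through the tokenizer's chat template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/HeroBophades-2x7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "How do you solve a system of quadratic equations using substitution?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```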
### Configuration
```yaml
base_model: nbeerbower/bophades-mistral-truthy-DPO-7B
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: nbeerbower/bophades-mistral-truthy-DPO-7B
positive_prompts:
- "You are an unbiased, uncensored, helpful assistant."
- source_model: nbeerbower/bophades-mistral-math-DPO-7B
positive_prompts:
- "How do you solve a system of quadratic equations simultaneously using substitution?. Take a deep breath, think step by step, and give an accurate response"
``` | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml"], "datasets": ["jondurbin/truthy-dpo-v0.1", "kyujinpy/orca_math_dpo"], "base_model": ["nbeerbower/bophades-mistral-truthy-DPO-7B", "nbeerbower/bophades-mistral-math-DPO-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/HeroBophades-2x7B-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"en",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:kyujinpy/orca_math_dpo",
"base_model:nbeerbower/bophades-mistral-truthy-DPO-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:57:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #en #dataset-jondurbin/truthy-dpo-v0.1 #dataset-kyujinpy/orca_math_dpo #base_model-nbeerbower/bophades-mistral-truthy-DPO-7B #license-apache-2.0 #text-generation-inference #region-us
| # nbeerbower/HeroBophades-2x7B AWQ
- Model creator: nbeerbower
- Original model: HeroBophades-2x7B
!image/png
## Model Summary
This is an experimental Mixture of Experts LLM built with mergekit (URL).
### Configuration
| [
"# nbeerbower/HeroBophades-2x7B AWQ\n\n- Model creator: nbeerbower\n- Original model: HeroBophades-2x7B\n\n!image/png",
"## Model Summary\n\nThis is an experimental Mixture of Experts LLM built with (mergekit)[URL",
"### Configuration"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #en #dataset-jondurbin/truthy-dpo-v0.1 #dataset-kyujinpy/orca_math_dpo #base_model-nbeerbower/bophades-mistral-truthy-DPO-7B #license-apache-2.0 #text-generation-inference #region-us \n",
"# nbeerbower/HeroBophades-2x7B AWQ\n\n- Model creator: nbeerbower\n- Original model: HeroBophades-2x7B\n\n!image/png",
"## Model Summary\n\nThis is an experimental Mixture of Experts LLM built with (mergekit)[URL",
"### Configuration"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-14-layer](https://huggingface.co/Citaman/command-r-14-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-14-layer
layer_range: [0, 13]
- model: Citaman/command-r-14-layer
layer_range: [1, 14]
merge_method: slerp
base_model: Citaman/command-r-14-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
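For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line, with the `t` schedule above controlling how far each layer moves from the base model. A minimal sketch of the idea (not mergekit's actual implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors.
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```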
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-14-layer"]} | Citaman/command-r-13-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-14-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:57:43+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-14-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-14-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-14-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-14-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-14-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
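The repository tags indicate a llama-architecture text-generation model, so a minimal, hedged sketch (repo id taken from this card's metadata; all settings illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="synapseml/nous-7b-v1", device_map="auto")
print(generator("Hello, I'm a language model,", max_new_tokens=50)[0]["generated_text"])
```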
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | synapseml/nous-7b-v1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:58:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab_11_distilbert_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0275
- Accuracy: 0.3771
- F1: 0.2633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
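Expressed as 🤗 Transformers `TrainingArguments`, these settings correspond roughly to the sketch below (an illustration only; the original training script is not part of this card):

```python
# Hedged sketch: maps the listed hyperparameters onto TrainingArguments;
# output_dir is arbitrary and the real script may differ in other options.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lab_11_distilbert_sentiment",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```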
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "lab_11_distilbert_sentiment", "results": []}]} | Malecc/lab_11_distilbert_sentiment | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:58:28+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# lab_11_distilbert_sentiment
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0275
- Accuracy: 0.3771
- F1: 0.2633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# lab_11_distilbert_sentiment\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.0275\n- Accuracy: 0.3771\n- F1: 0.2633",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# lab_11_distilbert_sentiment\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.0275\n- Accuracy: 0.3771\n- F1: 0.2633",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1881
- F1: 0.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
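Pending details from the authors, a minimal inference sketch (assuming the checkpoint works with the standard token-classification pipeline under its Hub id `taoyoung/xlm-roberta-base-finetuned-panx-de-fr`) could be:

```python
# Hedged sketch: standard NER inference; the label scheme is presumably
# the PAN-X/WikiANN one (PER/ORG/LOC), but the card does not confirm it.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="taoyoung/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```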
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2897 | 1.0 | 2145 | 0.2041 | 0.8183 |
| 0.1619 | 2.0 | 4290 | 0.1899 | 0.8482 |
| 0.1039 | 3.0 | 6435 | 0.1881 | 0.8633 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de-fr", "results": []}]} | taoyoung/xlm-roberta-base-finetuned-panx-de-fr | null | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:58:29+00:00 | [] | [] | TAGS
#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-panx-de-fr
=====================================
This model is a fine-tuned version of xlm-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1881
* F1: 0.8633
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny MR - Chirag Brahme
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7911
- Wer: 69.4410
## Model description
More information needed
## Intended uses & limitations
More information needed
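Pending more detail from the authors, a minimal transcription sketch (assuming the checkpoint runs with the standard automatic-speech-recognition pipeline under its Hub id `PolyChirag/Marathi_WhisperASR`) could be:

```python
# Hedged sketch: standard ASR pipeline inference; the audio file path is
# a placeholder and decoding requires ffmpeg to be available.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="PolyChirag/Marathi_WhisperASR",
)
print(asr("sample_marathi.wav")["text"])
```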
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.145 | 4.07 | 1000 | 0.4587 | 66.9138 |
| 0.046 | 8.13 | 2000 | 0.5455 | 68.4804 |
| 0.0142 | 12.2 | 3000 | 0.6589 | 69.0091 |
| 0.004 | 16.26 | 4000 | 0.7581 | 69.4797 |
| 0.002 | 20.33 | 5000 | 0.7911 | 69.4410 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"language": ["mr"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Tiny MR - Chirag Brahme", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "mr", "split": "None", "args": "config: mr, split: test"}, "metrics": [{"type": "wer", "value": 69.44104184127393, "name": "Wer"}]}]}]} | PolyChirag/Marathi_WhisperASR | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"mr",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T08:58:30+00:00 | [] | [
"mr"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #mr #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Whisper Tiny MR - Chirag Brahme
===============================
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7911
* Wer: 69.4410
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #mr #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-large - bnb 4bits
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-large/
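The quantization itself is not documented beyond this note; a plausible way to load a bitsandbytes 4-bit export with `transformers` (a sketch — it assumes the `bitsandbytes` and `accelerate` packages are installed and that the quantization config is stored in the repo) is:

```python
# Hedged sketch: loading a pre-quantized bnb 4-bit checkpoint; requires a
# CUDA GPU plus the bitsandbytes and accelerate packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/openai-community_-_gpt2-large-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```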
Original model description:
---
language: en
license: mit
---
# GPT-2 Large
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
## Model Details
**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model was pretrained on English-language text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
{'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
{'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
{'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a hotel'},
{'generated_text': 'The man worked as a salesman in Mexico and in'},
{'generated_text': 'The man worked as a supervisor at the warehouse for'},
{'generated_text': "The man worked as a cleaner for the store's"},
{'generated_text': 'The man worked as a barbershop apprentice.'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a clerk at the bank.'},
{'generated_text': 'The woman worked as a caregiver, and her'},
{'generated_text': 'The woman worked as a customer service agent for a'},
{'generated_text': 'The woman worked as a cleaner at the store,'},
{'generated_text': 'The woman worked as a barista and was "'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
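As a rough, simplified illustration of that quantity (a single window of tokens, no invertible de-tokenizers), the per-token perplexity of a snippet under the Hugging Face checkpoint can be computed like this:

```python
# Hedged sketch: exp of the average negative log-likelihood per token;
# the paper's numbers use longer contexts and de-tokenizers, so results
# will not match the table below exactly.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2LMHeadModel.from_pretrained('gpt2-large')
model.eval()

enc = tokenizer("Language models are unsupervised multitask learners.", return_tensors='pt')
with torch.no_grad():
    loss = model(**enc, labels=enc['input_ids']).loss  # CLM loss with internal shift
print(torch.exp(loss))
```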
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-large-4bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-17T08:59:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-large - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 Large
===========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model was pretrained on English-language text using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token 'i' only use the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cantonese-gemma-sft-lora
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the HuggingFaceH4/deita-10k-v0-sft and the hon9kon9ize/yue-alpaca-chat datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
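Since this repository holds a LoRA adapter rather than full model weights, inference presumably requires attaching the adapter to the `google/gemma-2b` base model; a minimal sketch using the standard PEFT API (the Cantonese prompt is only an illustration) is:

```python
# Hedged sketch: attach the LoRA adapter to the gemma-2b base model;
# the gemma weights are gated, so Hub access must be granted first.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "hon9kon9ize/cantonese-gemma-sft-lora")

inputs = tokenizer("請用廣東話介紹一下香港。", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```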
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "gemma", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "alignment-handbook", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft", "hon9kon9ize/yue-alpaca-chat"], "base_model": "google/gemma-2b", "model-index": [{"name": "cantonese-gemma-sft-lora", "results": []}]} | hon9kon9ize/cantonese-gemma-sft-lora | null | [
"peft",
"tensorboard",
"safetensors",
"gemma",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"dataset:hon9kon9ize/yue-alpaca-chat",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-04-17T08:59:13+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #dataset-hon9kon9ize/yue-alpaca-chat #base_model-google/gemma-2b #license-gemma #region-us
|
# cantonese-gemma-sft-lora
This model is a fine-tuned version of google/gemma-2b on the HuggingFaceH4/deita-10k-v0-sft and the hon9kon9ize/yue-alpaca-chat datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# cantonese-gemma-sft-lora\n\nThis model is a fine-tuned version of google/gemma-2b on the HuggingFaceH4/deita-10k-v0-sft and the hon9kon9ize/yue-alpaca-chat datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #dataset-hon9kon9ize/yue-alpaca-chat #base_model-google/gemma-2b #license-gemma #region-us \n",
"# cantonese-gemma-sft-lora\n\nThis model is a fine-tuned version of google/gemma-2b on the HuggingFaceH4/deita-10k-v0-sft and the hon9kon9ize/yue-alpaca-chat datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | HoangLe1312/gemma-2b-code-solver | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T08:59:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
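
The same settings can be written as a `BitsAndBytesConfig`; this is a hedged reconstruction for illustration, not code taken from the training run:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above (4-bit NF4 with double
# quantization and bfloat16 compute); pass it via `quantization_config=` when
# loading the base model that this adapter was trained on.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```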
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | harish2962/tinyllama_Disease | null | [
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2024-04-17T09:01:13+00:00 | [] | [] | TAGS
#peft #safetensors #llama #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #safetensors #llama #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Reihaneh/wav2vec2_fy10 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:01:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-large - bnb 8bits
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-large/
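
A minimal loading sketch for the 8-bit weights (not part of the original card); it assumes a recent `transformers` with `bitsandbytes` installed and a CUDA GPU, since the pre-quantized checkpoint carries its own quantization config:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/openai-community_-_gpt2-large-8bits"
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-large")  # tokenizer of the original model
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")     # 8-bit weights load as stored

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```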
Original model description:
---
language: en
license: mit
---
# GPT-2 Large
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
## Model Details
**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
{'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
{'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
{'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a hotel'},
{'generated_text': 'The man worked as a salesman in Mexico and in'},
{'generated_text': 'The man worked as a supervisor at the warehouse for'},
{'generated_text': "The man worked as a cleaner for the store's"},
{'generated_text': 'The man worked as a barbershop apprentice.'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a clerk at the bank.'},
{'generated_text': 'The woman worked as a caregiver, and her'},
{'generated_text': 'The woman worked as a customer service agent for a'},
{'generated_text': 'The woman worked as a cleaner at the store,'},
{'generated_text': 'The woman worked as a barista and was "'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
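
As a small illustration of this objective (not part of the original card), the Hugging Face implementation performs the one-token shift internally, so the labels can simply be the input ids:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2LMHeadModel.from_pretrained('gpt2-large')

enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors='pt')
out = model(**enc, labels=enc['input_ids'])  # labels are shifted one token to the right inside the model
print(out.loss)  # average negative log-likelihood of the next-token predictions
```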
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results... using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
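
In other words, the reported perplexity (PPL) and bits-per-byte/character (BPB/BPC) figures are simple transforms of that average negative log probability; a quick illustration with made-up numbers (not values from the paper):

```python
import math

avg_nll_per_token = 2.386  # hypothetical average negative log-likelihood in nats, per token
avg_nll_per_byte = 0.67    # hypothetical value, per byte

perplexity = math.exp(avg_nll_per_token)        # PPL, as reported for LAMBADA, WikiText, PTB, 1BW
bits_per_byte = avg_nll_per_byte / math.log(2)  # BPB, as reported for enwiki8 (BPC is the per-character analogue)
print(round(perplexity, 2), round(bits_per_byte, 3))
```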
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-large-8bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-17T09:02:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-large - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 Large
===========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | 0x0mom/sl8 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:02:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
zero-shot-image-classification | transformers | # Model Card for PickScore v1
This model is a scoring function for images generated from text. It takes as input a prompt and a generated image and outputs a score.
It can be used as a general scoring function, and for tasks such as human preference prediction, model evaluation, image ranking, and more.
See our paper [Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation](https://arxiv.org/abs/2305.01569) for more details.
## Model Details
### Model Description
This model was finetuned from CLIP-H using the [Pick-a-Pic dataset](https://huggingface.co/datasets/yuvalkirstain/pickapic_v1).
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [See the PickScore repo](https://github.com/yuvalkirstain/PickScore)
- **Paper:** [Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation](https://arxiv.org/abs/2305.01569).
- **Demo [optional]:** [Huggingface Spaces demo for PickScore](https://huggingface.co/spaces/yuvalkirstain/PickScore)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# import
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel
# load model
device = "cuda"
processor_name_or_path = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
model_pretrained_name_or_path = "yuvalkirstain/PickScore_v1"
processor = AutoProcessor.from_pretrained(processor_name_or_path)
model = AutoModel.from_pretrained(model_pretrained_name_or_path).eval().to(device)
def calc_probs(prompt, images):
# preprocess
image_inputs = processor(
images=images,
padding=True,
truncation=True,
max_length=77,
return_tensors="pt",
).to(device)
text_inputs = processor(
text=prompt,
padding=True,
truncation=True,
max_length=77,
return_tensors="pt",
).to(device)
with torch.no_grad():
# embed
image_embs = model.get_image_features(**image_inputs)
image_embs = image_embs / torch.norm(image_embs, dim=-1, keepdim=True)
text_embs = model.get_text_features(**text_inputs)
text_embs = text_embs / torch.norm(text_embs, dim=-1, keepdim=True)
# score
scores = model.logit_scale.exp() * (text_embs @ image_embs.T)[0]
# get probabilities if you have multiple images to choose from
probs = torch.softmax(scores, dim=-1)
return probs.cpu().tolist()
pil_images = [Image.open("my_amazing_images/1.jpg"), Image.open("my_amazing_images/2.jpg")]
prompt = "fantastic, increadible prompt"
print(calc_probs(prompt, pil_images))
```
## Training Details
### Training Data
This model was trained on the [Pick-a-Pic dataset](https://huggingface.co/datasets/yuvalkirstain/pickapic_v1).
### Training Procedure
TODO - add paper.
## Citation [optional]
If you find this work useful, please cite:
```bibtex
@inproceedings{Kirstain2023PickaPicAO,
title={Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation},
author={Yuval Kirstain and Adam Polyak and Uriel Singer and Shahbuland Matiana and Joe Penna and Omer Levy},
year={2023}
}
```
**APA:**
[More Information Needed]
| {} | krnl/PickScore_v1 | null | [
"transformers",
"pytorch",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:2305.01569",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:03:28+00:00 | [
"2305.01569"
] | [] | TAGS
#transformers #pytorch #safetensors #clip #zero-shot-image-classification #arxiv-2305.01569 #endpoints_compatible #region-us
| # Model Card for PickScore v1
This model is a scoring function for images generated from text. It takes as input a prompt and a generated image and outputs a score.
It can be used as a general scoring function, and for tasks such as human preference prediction, model evaluation, image ranking, and more.
See our paper Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation for more details.
## Model Details
### Model Description
This model was finetuned from CLIP-H using the Pick-a-Pic dataset.
### Model Sources [optional]
- Repository: See the PickScore repo
- Paper: Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation.
- Demo [optional]: Huggingface Spaces demo for PickScore
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
This model was trained on the Pick-a-Pic dataset.
### Training Procedure
TODO - add paper.
[optional]
If you find this work useful, please cite:
APA:
| [
"# Model Card for PickScore v1\n\nThis model is a scoring function for images generated from text. It takes as input a prompt and a generated image and outputs a score. \nIt can be used as a general scoring function, and for tasks such as human preference prediction, model evaluation, image ranking, and more. \nSee our paper Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation for more details.",
"## Model Details",
"### Model Description\n\nThis model was finetuned from CLIP-H using the Pick-a-Pic dataset.",
"### Model Sources [optional]\n\n\n\n- Repository: See the PickScore repo\n- Paper: Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation.\n- Demo [optional]: Huggingface Spaces demo for PickScore",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data\n\nThis model was trained on the Pick-a-Pic dataset.",
"### Training Procedure \n\nTODO - add paper.\n\n\n[optional]\n\nIf you find this work useful, please cite:\n\n\n\nAPA:"
] | [
"TAGS\n#transformers #pytorch #safetensors #clip #zero-shot-image-classification #arxiv-2305.01569 #endpoints_compatible #region-us \n",
"# Model Card for PickScore v1\n\nThis model is a scoring function for images generated from text. It takes as input a prompt and a generated image and outputs a score. \nIt can be used as a general scoring function, and for tasks such as human preference prediction, model evaluation, image ranking, and more. \nSee our paper Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation for more details.",
"## Model Details",
"### Model Description\n\nThis model was finetuned from CLIP-H using the Pick-a-Pic dataset.",
"### Model Sources [optional]\n\n\n\n- Repository: See the PickScore repo\n- Paper: Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation.\n- Demo [optional]: Huggingface Spaces demo for PickScore",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data\n\nThis model was trained on the Pick-a-Pic dataset.",
"### Training Procedure \n\nTODO - add paper.\n\n\n[optional]\n\nIf you find this work useful, please cite:\n\n\n\nAPA:"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-large - GGUF
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-large/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-large.Q2_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q2_K.gguf) | Q2_K | 0.32GB |
| [gpt2-large.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ3_XS.gguf) | IQ3_XS | 0.36GB |
| [gpt2-large.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ3_S.gguf) | IQ3_S | 0.36GB |
| [gpt2-large.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K_S.gguf) | Q3_K_S | 0.36GB |
| [gpt2-large.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ3_M.gguf) | IQ3_M | 0.4GB |
| [gpt2-large.Q3_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K.gguf) | Q3_K | 0.42GB |
| [gpt2-large.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K_M.gguf) | Q3_K_M | 0.42GB |
| [gpt2-large.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K_L.gguf) | Q3_K_L | 0.46GB |
| [gpt2-large.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ4_XS.gguf) | IQ4_XS | 0.44GB |
| [gpt2-large.Q4_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_0.gguf) | Q4_0 | 0.46GB |
| [gpt2-large.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ4_NL.gguf) | IQ4_NL | 0.46GB |
| [gpt2-large.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_K_S.gguf) | Q4_K_S | 0.46GB |
| [gpt2-large.Q4_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_K.gguf) | Q4_K | 0.51GB |
| [gpt2-large.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_K_M.gguf) | Q4_K_M | 0.51GB |
| [gpt2-large.Q4_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_1.gguf) | Q4_1 | 0.5GB |
| [gpt2-large.Q5_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_0.gguf) | Q5_0 | 0.55GB |
| [gpt2-large.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_K_S.gguf) | Q5_K_S | 0.55GB |
| [gpt2-large.Q5_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_K.gguf) | Q5_K | 0.59GB |
| [gpt2-large.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_K_M.gguf) | Q5_K_M | 0.59GB |
| [gpt2-large.Q5_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_1.gguf) | Q5_1 | 0.59GB |
| [gpt2-large.Q6_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q6_K.gguf) | Q6_K | 0.65GB |
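
These GGUF files can be loaded with any llama.cpp-based runtime. A minimal sketch, assuming `llama-cpp-python` is installed and your build supports the GPT-2 architecture (the local path and quant choice below are examples, not part of this repo's instructions):

```python
# Sketch: run one of the quantized files above with llama-cpp-python.
# Assumes the file was downloaded locally first; the path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="gpt2-large.Q4_K_M.gguf")
out = llm("Hello, I'm a language model,", max_tokens=30)
print(out["choices"][0]["text"])
```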
Original model description:
---
language: en
license: mit
---
# GPT-2 Large
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
## Model Details
**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
{'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
{'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
{'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a hotel'},
{'generated_text': 'The man worked as a salesman in Mexico and in'},
{'generated_text': 'The man worked as a supervisor at the warehouse for'},
{'generated_text': "The man worked as a cleaner for the store's"},
{'generated_text': 'The man worked as a barbershop apprentice.'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a clerk at the bank.'},
{'generated_text': 'The woman worked as a caregiver, and her'},
{'generated_text': 'The woman worked as a customer service agent for a'},
{'generated_text': 'The woman worked as a cleaner at the store,'},
{'generated_text': 'The woman worked as a barista and was "'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Concretely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
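
To make the shifted-target objective concrete, here is a small illustration (a sketch added for intuition, not part of the original card) of the context the model sees at each position and the token it must predict:

```python
# Causal LM objective: at position i the model conditions on tokens 1..i
# and predicts token i+1. Illustration only.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
ids = tokenizer("Hello, I'm a language model").input_ids

for i in range(1, len(ids)):
    context = tokenizer.decode(ids[:i])   # what the model may attend to
    target = tokenizer.decode([ids[i]])   # what it must predict next
    print(f"{context!r} -> {target!r}")
```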
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
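
The quantity described above can be reproduced for a single text with the `transformers` API. A hedged sketch (not the paper's evaluation pipeline): the model's cross-entropy loss with `labels=input_ids` is the average negative log-probability per predicted token, and exponentiating it gives perplexity.

```python
# Sketch: average negative log-probability per token and the resulting
# perplexity for one text; not the paper's invertible de-tokenizer setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2LMHeadModel.from_pretrained('gpt2-large').eval()

input_ids = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors='pt').input_ids
with torch.no_grad():
    # With labels=input_ids the model shifts targets internally and returns
    # the mean cross-entropy over predicted tokens.
    loss = model(input_ids, labels=input_ids).loss
print(f"avg NLL/token: {loss.item():.3f}, perplexity: {torch.exp(loss).item():.2f}")
```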
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-large-gguf | null | [
"gguf",
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-17T09:04:19+00:00 | [
"1910.09700"
] | [] | TAGS
#gguf #arxiv-1910.09700 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-large - GGUF
* Model creator: URL
* Original model: URL
Name: gpt2-large.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.32GB
Name: gpt2-large.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.36GB
Name: gpt2-large.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.36GB
Name: gpt2-large.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.36GB
Name: gpt2-large.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.4GB
Name: gpt2-large.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.42GB
Name: gpt2-large.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.42GB
Name: gpt2-large.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.46GB
Name: gpt2-large.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.44GB
Name: gpt2-large.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.46GB
Name: gpt2-large.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.46GB
Name: gpt2-large.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.46GB
Name: gpt2-large.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.51GB
Name: gpt2-large.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.51GB
Name: gpt2-large.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.5GB
Name: gpt2-large.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.55GB
Name: gpt2-large.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.55GB
Name: gpt2-large.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.59GB
Name: gpt2-large.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.59GB
Name: gpt2-large.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.59GB
Name: gpt2-large.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.65GB
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 Large
===========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Concretely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token 'i' only use the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#gguf #arxiv-1910.09700 #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-13-layer](https://huggingface.co/Citaman/command-r-13-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-13-layer
layer_range: [0, 12]
- model: Citaman/command-r-13-layer
layer_range: [1, 13]
merge_method: slerp
base_model: Citaman/command-r-13-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
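
The card stops at the merge recipe; as a rough usage sketch (assuming the merged Cohere-architecture checkpoint loads through the standard `transformers` auto classes and chat template, which the card itself does not state), inference could look like this:

```python
# Hypothetical usage sketch; not part of the original card.
# Assumes the merged checkpoint works with the standard auto classes and a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Citaman/command-r-12-layer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a SLERP merge does in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```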
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-13-layer"]} | Citaman/command-r-12-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-13-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T09:05:03+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-13-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-13-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-13-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-13-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-13-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ibivibiv/orthorus_v3_125b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
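
The split files listed below are plain byte chunks; they only need to be joined in order into a single `.gguf` before loading (this is the concatenation step the linked README covers). A small sketch, using the Q4_K_S part names from the table below:

```python
# Join split GGUF parts back into one file; part names are the Q4_K_S entries from the table below.
import shutil

parts = [
    "orthorus_v3_125b.Q4_K_S.gguf.part1of2",
    "orthorus_v3_125b.Q4_K_S.gguf.part2of2",
]

with open("orthorus_v3_125b.Q4_K_S.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, joined)  # streamed copy, avoids holding ~70 GB in RAM
```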
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q2_K.gguf) | Q2_K | 46.0 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ3_XS.gguf.part2of2) | IQ3_XS | 51.3 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ3_S.gguf.part2of2) | IQ3_S | 54.2 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q3_K_S.gguf.part2of2) | Q3_K_S | 54.2 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ3_M.gguf.part2of2) | IQ3_M | 55.6 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q3_K_M.gguf.part2of2) | Q3_K_M | 60.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q3_K_L.gguf.part2of2) | Q3_K_L | 65.3 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.IQ4_XS.gguf.part2of2) | IQ4_XS | 67.6 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q4_K_S.gguf.part2of2) | Q4_K_S | 71.4 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q4_K_M.gguf.part2of2) | Q4_K_M | 75.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q5_K_S.gguf.part2of2) | Q5_K_S | 86.3 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q5_K_M.gguf.part2of2) | Q5_K_M | 88.9 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q6_K.gguf.part3of3) | Q6_K | 102.9 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF/resolve/main/orthorus_v3_125b.Q8_0.gguf.part3of3) | Q8_0 | 133.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "ibivibiv/orthorus_v3_125b", "quantized_by": "mradermacher"} | mradermacher/orthorus_v3_125b-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:ibivibiv/orthorus_v3_125b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:05:56+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-ibivibiv/orthorus_v3_125b #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-ibivibiv/orthorus_v3_125b #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# stablelm-3b-4e1t-slerp1
stablelm-3b-4e1t-slerp1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [stabilityai/japanese-stablelm-3b-4e1t-instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct)
* [stabilityai/japanese-stablelm-3b-4e1t-instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: stabilityai/japanese-stablelm-3b-4e1t-instruct
layer_range: [0, 32]
- model: stabilityai/japanese-stablelm-3b-4e1t-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: stabilityai/japanese-stablelm-3b-4e1t-instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/stablelm-3b-4e1t-slerp1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "stabilityai/japanese-stablelm-3b-4e1t-instruct"], "base_model": ["stabilityai/japanese-stablelm-3b-4e1t-instruct", "stabilityai/japanese-stablelm-3b-4e1t-instruct"]} | aipib/stablelm-3b-4e1t-slerp1 | null | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"stabilityai/japanese-stablelm-3b-4e1t-instruct",
"custom_code",
"base_model:stabilityai/japanese-stablelm-3b-4e1t-instruct",
"autotrain_compatible",
"region:us"
] | null | 2024-04-17T09:07:00+00:00 | [] | [] | TAGS
#transformers #safetensors #stablelm_epoch #text-generation #merge #mergekit #lazymergekit #stabilityai/japanese-stablelm-3b-4e1t-instruct #custom_code #base_model-stabilityai/japanese-stablelm-3b-4e1t-instruct #autotrain_compatible #region-us
|
# stablelm-3b-4e1t-slerp1
stablelm-3b-4e1t-slerp1 is a merge of the following models using LazyMergekit:
* stabilityai/japanese-stablelm-3b-4e1t-instruct
* stabilityai/japanese-stablelm-3b-4e1t-instruct
## Configuration
## Usage
| [
"# stablelm-3b-4e1t-slerp1\n\nstablelm-3b-4e1t-slerp1 is a merge of the following models using LazyMergekit:\n* stabilityai/japanese-stablelm-3b-4e1t-instruct\n* stabilityai/japanese-stablelm-3b-4e1t-instruct",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #stablelm_epoch #text-generation #merge #mergekit #lazymergekit #stabilityai/japanese-stablelm-3b-4e1t-instruct #custom_code #base_model-stabilityai/japanese-stablelm-3b-4e1t-instruct #autotrain_compatible #region-us \n",
"# stablelm-3b-4e1t-slerp1\n\nstablelm-3b-4e1t-slerp1 is a merge of the following models using LazyMergekit:\n* stabilityai/japanese-stablelm-3b-4e1t-instruct\n* stabilityai/japanese-stablelm-3b-4e1t-instruct",
"## Configuration",
"## Usage"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0417MAD1
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
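
For readers who want to reproduce a comparable run, the hyperparameters above map roughly onto a `transformers.TrainingArguments` object as sketched below; the original training script is not included in this card, so the `output_dir` and the exact argument set are assumptions.

```python
# Illustrative reconstruction of the hyperparameters listed above; not the original training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="V0417MAD1",            # assumed name, not specified in the card
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,    # 8 x 16 = effective batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    fp16=True,                         # "Native AMP" mixed precision
)
```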
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4934 | 0.09 | 10 | 1.9339 |
| 1.5303 | 0.18 | 20 | 0.6622 |
| 0.3481 | 0.27 | 30 | 0.1084 |
| 0.1158 | 0.36 | 40 | 0.0888 |
| 0.092 | 0.45 | 50 | 0.0768 |
| 0.0876 | 0.54 | 60 | 0.0724 |
| 0.0811 | 0.63 | 70 | 0.0727 |
| 0.0778 | 0.73 | 80 | 0.0699 |
| 0.0798 | 0.82 | 90 | 0.0656 |
| 0.0783 | 0.91 | 100 | 0.0647 |
| 0.0754 | 1.0 | 110 | 0.0638 |
| 0.0668 | 1.09 | 120 | 0.0635 |
| 0.0663 | 1.18 | 130 | 0.0629 |
| 0.064 | 1.27 | 140 | 0.0635 |
| 0.0592 | 1.36 | 150 | 0.0626 |
| 0.0719 | 1.45 | 160 | 0.0626 |
| 0.064 | 1.54 | 170 | 0.0602 |
| 0.0669 | 1.63 | 180 | 0.0613 |
| 0.0617 | 1.72 | 190 | 0.0621 |
| 0.0669 | 1.81 | 200 | 0.0594 |
| 0.0572 | 1.9 | 210 | 0.0596 |
| 0.0588 | 1.99 | 220 | 0.0607 |
| 0.051 | 2.08 | 230 | 0.0612 |
| 0.0559 | 2.18 | 240 | 0.0602 |
| 0.0529 | 2.27 | 250 | 0.0597 |
| 0.054 | 2.36 | 260 | 0.0601 |
| 0.0535 | 2.45 | 270 | 0.0604 |
| 0.0511 | 2.54 | 280 | 0.0601 |
| 0.0486 | 2.63 | 290 | 0.0614 |
| 0.053 | 2.72 | 300 | 0.0611 |
| 0.0573 | 2.81 | 310 | 0.0614 |
| 0.0504 | 2.9 | 320 | 0.0614 |
| 0.0541 | 2.99 | 330 | 0.0615 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0417MAD1", "results": []}]} | Litzy619/V0417MAD1 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-17T09:08:06+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
| V0417MAD1
=========
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0615
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 60
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | baraah/blip2-opt-2.7b-football-captions-adapters | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:08:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |

# Tess-2.0-Mixtral-8x22B
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.
# Prompt Format
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
# Training Methodology
Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.
The model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.
# Sample code to run inference
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Tess-2.0-Mixtral-8x22B"
output_file_path = "./conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.5,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
# Join My General AI Discord (NeuroLattice):
https://discord.gg/Hz6GrwGFKD
# Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
| {"license": "apache-2.0"} | blockblockblock/Tess-2.0-Mixtral-8x22B-bpw2.25 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T09:08:59+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!Tesoro
# Tess-2.0-Mixtral-8x22B
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.
# Prompt Format
# Training Methodology
Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.
The model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.
# Sample code to run inference
# Join My General AI Discord (NeuroLattice):
URL
# Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
| [
"# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.",
"# Prompt Format",
"# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.",
"# Sample code to run inference",
"# Join My General AI Discord (NeuroLattice):\nURL",
"# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.",
"# Prompt Format",
"# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.",
"# Sample code to run inference",
"# Join My General AI Discord (NeuroLattice):\nURL",
"# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model."
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
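
Expressed in code, the settings above correspond to roughly the following `BitsAndBytesConfig`; this is an illustrative reconstruction, since the original training script is not part of this card.

```python
# 4-bit NF4 quantization config matching the values listed above (illustrative reconstruction).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```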
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
| {"library_name": "peft"} | KarthikSab45/tiny-llama-medmcqa-chat-attempt2 | null | [
"peft",
"llama",
"4-bit",
"region:us"
] | null | 2024-04-17T09:09:15+00:00 | [] | [] | TAGS
#peft #llama #4-bit #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #llama #4-bit #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-jpn
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome jpn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6261
- Der: 0.2083
- False Alarm: 0.0760
- Missed Detection: 0.0768
- Confusion: 0.0554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.5905 | 1.0 | 336 | 0.6261 | 0.2083 | 0.0760 | 0.0768 | 0.0554 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-jpn", "results": []}]} | sanchit-gandhi/speaker-segmentation-fine-tuned-callhome-jpn | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:09:45+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-jpn
============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome jpn dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6261
* Der: 0.2083
* False Alarm: 0.0760
* Missed Detection: 0.0768
* Confusion: 0.0554
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1.0
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zzttbrdd/sn6_05m](https://huggingface.co/zzttbrdd/sn6_05m)
* [zzttbrdd/sn6_04m](https://huggingface.co/zzttbrdd/sn6_04m)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: zzttbrdd/sn6_05m
layer_range: [0, 32]
- model: zzttbrdd/sn6_04m
layer_range: [0, 32]
merge_method: slerp
base_model: zzttbrdd/sn6_05m
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.4
dtype: bfloat16
```
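
For readers unfamiliar with the method, SLERP blends each pair of parameter tensors along the arc between them instead of along a straight line, with the `t` values under `parameters` controlling how far each tensor group (self-attention, MLP, everything else) moves between the two source models. A minimal sketch of the underlying operation follows; mergekit's real implementation adds extra normalization and edge-case handling, so this is illustrative only.

```python
# Minimal SLERP sketch for two weight tensors; illustrative only.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a, b = v0.flatten().float(), v1.flatten().float()
    cos_theta = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    theta = torch.arccos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < 1e-4:                # nearly parallel vectors: plain linear interpolation
        out = (1 - t) * a + t * b
    else:
        sin_theta = torch.sin(theta)
        out = (torch.sin((1 - t) * theta) / sin_theta) * a + (torch.sin(t * theta) / sin_theta) * b
    return out.reshape(v0.shape).to(v0.dtype)
```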
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["zzttbrdd/sn6_05m", "zzttbrdd/sn6_04m"]} | Sumail/Ame7 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zzttbrdd/sn6_05m",
"base_model:zzttbrdd/sn6_04m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T09:10:49+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-zzttbrdd/sn6_05m #base_model-zzttbrdd/sn6_04m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* zzttbrdd/sn6_05m
* zzttbrdd/sn6_04m
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* zzttbrdd/sn6_05m\n* zzttbrdd/sn6_04m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-zzttbrdd/sn6_05m #base_model-zzttbrdd/sn6_04m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* zzttbrdd/sn6_05m\n* zzttbrdd/sn6_04m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
image-feature-extraction | transformers |
# Model Card for InternViT-6B-448px-V1-5
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/AUE-3OBtfr9vDA7Elgkhd.webp" alt="Image Description" width="300" height="300">
</p>
\[[InternVL 1.5 Technical Report](https://arxiv.org/abs/2404.16821)\] \[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[中文解读](https://zhuanlan.zhihu.com/p/675877376)]
We develop InternViT-6B-448px-V1-5 based on the pre-training of the strong foundation of [InternViT-6B-448px-V1.2](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2). In this update, the resolution of training images is expanded from 448×448 to dynamic 448×448, where the basic tile size is 448×448 and the number of tiles ranges from 1 to 12.
Additionally, we enhance the data scale, quality, and diversity of the pre-training dataset, resulting in the powerful robustness, OCR capability, and high-resolution processing capability of our
1.5 version model.
## Model Details
- **Model Type:** vision foundation model, feature backbone
- **Model Stats:**
- Params (M): 5540 (the last 3 blocks are discarded)
- Image size: 448 x 448, training with 1 - 12 tiles
- **Pretrain Dataset:** LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong-OCR, LaionCOCO-OCR, and other OCR-related datasets.
To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO.
- **Note:** InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build a MLLM based on this model, **please make use of the features from the last layer.**
## Released Models
### Vision Foundation model
| Model | Date | Download | Note |
| ----------------------- | ---------- | ---------------------------------------------------------------------- | -------------------------------- |
| InternViT-6B-448px-V1.5 | 2024.04.20 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | support dynamic resolution, super strong OCR (🔥new) |
| InternViT-6B-448px-V1.2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) | 448 resolution |
| InternViT-6B-448px-V1.0 | 2024.01.30 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) | 448 resolution |
| InternViT-6B-224px | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px) | vision foundation model |
| InternVL-14B-224px | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px) | vision-language foundation model |
### Multimodal Large Language Model (MLLM)
| Model | Date | Download | Note |
| ----------------------- | ---------- | --------------------------------------------------------------------------- | ---------------------------------- |
| InternVL-Chat-V1.5 | 2024.04.18 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5) | support 4K image; super strong OCR; Approaching the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (🔥new)|
| InternVL-Chat-V1.2-Plus | 2024.02.21 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) | more SFT data and stronger |
| InternVL-Chat-V1.2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) | scaling up LLM to 34B |
| InternVL-Chat-V1.1 | 2024.01.24 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1) | support Chinese and stronger OCR |
## Model Usage (Image Embeddings)
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
model = AutoModel.from_pretrained(
'OpenGVLab/InternViT-6B-448px-V1-5',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).cuda().eval()
image = Image.open('./examples/image1.jpg').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
outputs = model(pixel_values)
```
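
Since the note in Model Details stresses using the last layer's features when building an MLLM on this backbone, it may help to show how to read them from the `outputs` object in the snippet above. This assumes the custom forward pass returns a standard `BaseModelOutputWithPooling`; check the repository's modeling code if it differs.

```python
# Illustrative addition, not part of the original card.
# Assumes `outputs` from the snippet above behaves like a BaseModelOutputWithPooling.
patch_features = outputs.last_hidden_state   # per-patch features, roughly (batch, tokens, hidden_size)
image_embedding = outputs.pooler_output      # pooled summary vector, if the model exposes one
print(patch_features.shape, image_embedding.shape)
```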
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2023internvl,
title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2312.14238},
year={2023}
}
```
## Acknowledgement
InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work! | {"license": "mit", "datasets": ["laion/laion2B-en", "laion/laion-coco", "laion/laion2B-multi", "kakaobrain/coyo-700m", "conceptual_captions", "wanng/wukong100m"], "pipeline_tag": "image-feature-extraction"} | OpenGVLab/InternViT-6B-448px-V1-5 | null | [
"transformers",
"safetensors",
"intern_vit_6b",
"feature-extraction",
"image-feature-extraction",
"custom_code",
"dataset:laion/laion2B-en",
"dataset:laion/laion-coco",
"dataset:laion/laion2B-multi",
"dataset:kakaobrain/coyo-700m",
"dataset:conceptual_captions",
"dataset:wanng/wukong100m",
"arxiv:2404.16821",
"arxiv:2312.14238",
"license:mit",
"region:us"
] | null | 2024-04-17T09:11:24+00:00 | [
"2404.16821",
"2312.14238"
] | [] | TAGS
#transformers #safetensors #intern_vit_6b #feature-extraction #image-feature-extraction #custom_code #dataset-laion/laion2B-en #dataset-laion/laion-coco #dataset-laion/laion2B-multi #dataset-kakaobrain/coyo-700m #dataset-conceptual_captions #dataset-wanng/wukong100m #arxiv-2404.16821 #arxiv-2312.14238 #license-mit #region-us
| Model Card for InternViT-6B-448px-V1-5
======================================

[InternVL 1.5 Technical Report] [Paper] [GitHub] [Chat Demo] [中文解读]
We develop InternViT-6B-448px-V1-5 based on the pre-training of the strong foundation of InternViT-6B-448px-V1.2. In this update, the resolution of training images is expanded from 448×448 to dynamic 448×448, where the basic tile size is 448×448 and the number of tiles ranges from 1 to 12.
Additionally, we enhance the data scale, quality, and diversity of the pre-training dataset, resulting in the powerful robustness, OCR capability, and high-resolution processing capability of our
1.5 version model.
Model Details
-------------
* Model Type: vision foundation model, feature backbone
* Model Stats:
+ Params (M): 5540 (the last 3 blocks are discarded)
+ Image size: 448 x 448, training with 1 - 12 tiles
* Pretrain Dataset: LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong-OCR, LaionCOCO-OCR, and other OCR-related datasets.
To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO.
* Note: InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build a MLLM based on this model, please make use of the features from the last layer.
Released Models
---------------
### Vision Foundation model
### Multimodal Large Language Model (MLLM)
Model Usage (Image Embeddings)
------------------------------
If you find this project useful in your research, please consider citing:
Acknowledgement
---------------
InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!
| [
"### Vision Foundation model",
"### Multimodal Large Language Model (MLLM)\n\n\n\nModel Usage (Image Embeddings)\n------------------------------\n\n\nIf you find this project useful in your research, please consider citing:\n\n\nAcknowledgement\n---------------\n\n\nInternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!"
] | [
"TAGS\n#transformers #safetensors #intern_vit_6b #feature-extraction #image-feature-extraction #custom_code #dataset-laion/laion2B-en #dataset-laion/laion-coco #dataset-laion/laion2B-multi #dataset-kakaobrain/coyo-700m #dataset-conceptual_captions #dataset-wanng/wukong100m #arxiv-2404.16821 #arxiv-2312.14238 #license-mit #region-us \n",
"### Vision Foundation model",
"### Multimodal Large Language Model (MLLM)\n\n\n\nModel Usage (Image Embeddings)\n------------------------------\n\n\nIf you find this project useful in your research, please consider citing:\n\n\nAcknowledgement\n---------------\n\n\nInternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3310
- F1: 0.8441
## Model description
More information needed
## Intended uses & limitations
More information needed
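Since the card does not include a usage snippet, here is a minimal inference sketch; the repo id comes from this card's metadata, and the French example sentence is purely illustrative:

```python
from transformers import pipeline

# Repo id taken from this card's metadata; the example sentence is illustrative.
ner = pipeline(
    "token-classification",
    model="taoyoung/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron est né à Amiens, en France."))
```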
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
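For readers who want to reproduce this setup, a hedged sketch of the corresponding `TrainingArguments` (argument names from the `transformers` Trainer API; the output directory is illustrative):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-fr",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```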
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5198 | 1.0 | 573 | 0.3230 | 0.7965 |
| 0.2752 | 2.0 | 1146 | 0.3061 | 0.8171 |
| 0.1776 | 3.0 | 1719 | 0.3310 | 0.8441 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-fr", "results": []}]} | taoyoung/xlm-roberta-base-finetuned-panx-fr | null | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:11:59+00:00 | [] | [] | TAGS
#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-panx-fr
==================================
This model is a fine-tuned version of xlm-roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3310
* F1: 0.8441
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-xl - bnb 4bits
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-xl/
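No loading snippet is provided for the quantized checkpoint itself. A minimal sketch, assuming `bitsandbytes` is installed and that the quantization config stored with these weights is picked up automatically (repo id taken from this card's metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this card's metadata; requires bitsandbytes and a CUDA GPU.
model_id = "RichardErkhov/openai-community_-_gpt2-xl-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```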
Original model description:
---
language: en
license: mit
---
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("The man worked as a", max_length=10, num_return_sequences=5)
set_seed(42)
generator("The woman worked as a", max_length=10, num_return_sequences=5)
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/):
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
The blog post further discusses the risks, limitations, and biases of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
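To make the byte-level BPE description above concrete, a quick check with the standard GPT-2 tokenizer (this is illustrative and not part of the original card):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
print(tokenizer.vocab_size)  # 50257

# Rare words fall back to subword pieces; a leading space is encoded as 'Ġ'.
print(tokenizer.tokenize("Byte-level BPE handles unseen words gracefully."))
```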
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L).
- **Hardware Type:** 32 TPUv3 chips
- **Hours used:** 168
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-xl-4bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-17T09:12:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-xl - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 XL
========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 XL is the 1.5B parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-Large
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ OpenAI GPT-2 1.5B Release Blog Post
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a blog post:
>
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
>
>
>
The blog post further discusses the risks, limitations, and biases of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.
* Hardware Type: 32 TPUv3 chips
* Hours used: 168
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-xl - bnb 8bits
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-xl/
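As with the 4-bit card above, a minimal loading sketch for this pre-quantized checkpoint, assuming `bitsandbytes` is installed and the stored quantization config is applied automatically (repo id taken from this card's metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this card's metadata; requires bitsandbytes and a CUDA GPU.
model_id = "RichardErkhov/openai-community_-_gpt2-xl-8bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```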
Original model description:
---
language: en
license: mit
---
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("The man worked as a", max_length=10, num_return_sequences=5)
set_seed(42)
generator("The woman worked as a", max_length=10, num_return_sequences=5)
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/):
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
The blog post further discusses the risks, limitations, and biases of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L).
- **Hardware Type:** 32 TPUv3 chips
- **Hours used:** 168
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-xl-8bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-17T09:13:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-xl - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 XL
========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 XL is the 1.5B parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-Large
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ OpenAI GPT-2 1.5B Release Blog Post
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a blog post:
>
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
>
>
>
The blog post further discusses the risks, limitations, and biases of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token 'i' only use the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
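As a concrete illustration of the objective described above, the hedged sketch below shows that the targets are simply the inputs shifted one position to the right; the sample text and the small `gpt2` checkpoint are placeholders, not the original WebText training setup:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

batch = tokenizer("a cat sitting on a mat", return_tensors="pt")
input_ids = batch["input_ids"]

# Targets are the inputs shifted one position to the right: position i is trained to
# predict token i+1, and the causal mask hides all future tokens from position i.
logits = model(input_ids).logits
shift_logits = logits[:, :-1, :]
shift_labels = input_ids[:, 1:]
loss = torch.nn.functional.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)), shift_labels.reshape(-1)
)
print(loss)  # matches the loss returned by model(input_ids, labels=input_ids)
```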
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.
* Hardware Type: 32 TPUv3 chips
* Hours used: 168
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-xl - GGUF
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-xl/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-xl.Q2_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q2_K.gguf) | Q2_K | 0.84GB |
| [gpt2-xl.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ3_XS.gguf) | IQ3_XS | 0.84GB |
| [gpt2-xl.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ3_S.gguf) | IQ3_S | 0.84GB |
| [gpt2-xl.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K_S.gguf) | Q3_K_S | 0.84GB |
| [gpt2-xl.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ3_M.gguf) | IQ3_M | 0.91GB |
| [gpt2-xl.Q3_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K.gguf) | Q3_K | 0.96GB |
| [gpt2-xl.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K_M.gguf) | Q3_K_M | 0.96GB |
| [gpt2-xl.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K_L.gguf) | Q3_K_L | 1.02GB |
| [gpt2-xl.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ4_XS.gguf) | IQ4_XS | 0.9GB |
| [gpt2-xl.Q4_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_0.gguf) | Q4_0 | 0.9GB |
| [gpt2-xl.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ4_NL.gguf) | IQ4_NL | 0.91GB |
| [gpt2-xl.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_K_S.gguf) | Q4_K_S | 1.03GB |
| [gpt2-xl.Q4_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_K.gguf) | Q4_K | 1.11GB |
| [gpt2-xl.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_K_M.gguf) | Q4_K_M | 1.11GB |
| [gpt2-xl.Q4_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_1.gguf) | Q4_1 | 0.99GB |
| [gpt2-xl.Q5_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_0.gguf) | Q5_0 | 1.08GB |
| [gpt2-xl.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_K_S.gguf) | Q5_K_S | 1.15GB |
| [gpt2-xl.Q5_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_K.gguf) | Q5_K | 1.28GB |
| [gpt2-xl.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_K_M.gguf) | Q5_K_M | 1.28GB |
| [gpt2-xl.Q5_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_1.gguf) | Q5_1 | 1.17GB |
| [gpt2-xl.Q6_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q6_K.gguf) | Q6_K | 1.52GB |
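The GGUF files above are intended for llama.cpp-style runtimes rather than `transformers`. A minimal sketch using `llama-cpp-python` is shown below; the choice of the Q4_K_M file is arbitrary, and the snippet assumes `llama-cpp-python` and `huggingface_hub` are installed:
```python
# Sketch: download one GGUF quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="RichardErkhov/openai-community_-_gpt2-xl-gguf",
    filename="gpt2-xl.Q4_K_M.gguf",  # any file from the table above works
)
llm = Llama(model_path=model_path)
out = llm("Hello, I'm a language model,", max_tokens=64)
print(out["choices"][0]["text"])
```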
Original model description:
---
language: en
license: mit
---
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("The man worked as a", max_length=10, num_return_sequences=5)
set_seed(42)
generator("The woman worked as a", max_length=10, num_return_sequences=5)
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/):
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
The blog post further discusses the risks, limitations, and biases of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
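As a quick sanity check of these numbers (the sample string is arbitrary):
```python
# Byte-level BPE: any string maps to tokens without <unk>, and the vocabulary has 50,257 entries.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2-xl')
print(tokenizer.vocab_size)                 # 50257
print(tokenizer.tokenize("Hello world!"))   # byte-level BPE pieces
print(tokenizer.model_max_length)           # 1024-token context window
```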
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 |
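The table above comes from the paper's evaluation pipeline with invertible de-tokenizers. A simplified sketch of how zero-shot perplexity is typically estimated with this checkpoint (not the exact protocol behind these numbers, and using a placeholder text) is:
```python
# Rough perplexity estimate on a piece of raw text (not the paper's de-tokenized protocol).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()

text = "The quick brown fox jumps over the lazy dog. " * 20  # placeholder evaluation text
enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

with torch.no_grad():
    loss = model(input_ids=enc["input_ids"], labels=enc["input_ids"]).loss
print(torch.exp(loss).item())  # perplexity = exp(average negative log-likelihood per token)
```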
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L).
- **Hardware Type:** 32 TPUv3 chips
- **Hours used:** 168
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
| {} | RichardErkhov/openai-community_-_gpt2-xl-gguf | null | [
"gguf",
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-17T09:16:12+00:00 | [
"1910.09700"
] | [] | TAGS
#gguf #arxiv-1910.09700 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt2-xl - GGUF
* Model creator: URL
* Original model: URL
Name: gpt2-xl.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.84GB
Name: gpt2-xl.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.84GB
Name: gpt2-xl.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.84GB
Name: gpt2-xl.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.84GB
Name: gpt2-xl.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.91GB
Name: gpt2-xl.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.96GB
Name: gpt2-xl.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.96GB
Name: gpt2-xl.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 1.02GB
Name: gpt2-xl.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.9GB
Name: gpt2-xl.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.9GB
Name: gpt2-xl.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.91GB
Name: gpt2-xl.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 1.03GB
Name: gpt2-xl.Q4\_K.gguf, Quant method: Q4\_K, Size: 1.11GB
Name: gpt2-xl.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 1.11GB
Name: gpt2-xl.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.99GB
Name: gpt2-xl.Q5\_0.gguf, Quant method: Q5\_0, Size: 1.08GB
Name: gpt2-xl.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 1.15GB
Name: gpt2-xl.Q5\_K.gguf, Quant method: Q5\_K, Size: 1.28GB
Name: gpt2-xl.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 1.28GB
Name: gpt2-xl.Q5\_1.gguf, Quant method: Q5\_1, Size: 1.17GB
Name: gpt2-xl.Q6\_K.gguf, Quant method: Q6\_K, Size: 1.52GB
Original model description:
---------------------------
language: en
license: mit
-------------------------
GPT-2 XL
========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 XL is the 1.5B parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-Large
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ OpenAI GPT-2 1.5B Release Blog Post
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a blog post:
>
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
>
>
>
The blog post further discusses the risks, limitations, and biases of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token 'i' only use the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.
* Hardware Type: 32 TPUv3 chips
* Hours used: 168
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#gguf #arxiv-1910.09700 #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0564
- Accuracy: 0.7887
## Model description
More information needed
## Intended uses & limitations
More information needed
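The card does not include a usage snippet; a minimal multiple-choice inference sketch might look like the following. The repo id is taken from this card, while the example context and candidate endings are made up for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "JuliaLFK/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "A man is sitting at a piano."  # placeholder SWAG-style context
endings = [
    "He plays a song.",
    "He eats a sandwich.",
    "He jumps into a pool.",
    "He repairs a car.",
]

# Pair the same context with each candidate ending; the multiple-choice head
# scores all pairs jointly.
encoding = tokenizer(
    [context] * len(endings),
    endings,
    return_tensors="pt",
    padding=True,
    truncation=True,
)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (batch=1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print("Predicted ending:", endings[logits.argmax(dim=-1).item()])
```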
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
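For reference, these settings map roughly onto the following `TrainingArguments`; this is a reconstruction from the list above, not the actual training script, and the output directory is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-swag",  # assumed, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```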
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7647 | 1.0 | 4597 | 0.6200 | 0.7620 |
| 0.3838 | 2.0 | 9194 | 0.6261 | 0.7821 |
| 0.1369 | 3.0 | 13791 | 1.0564 | 0.7887 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-swag", "results": []}]} | JuliaLFK/bert-base-uncased-finetuned-swag | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T09:16:31+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #multiple-choice #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetuned-swag
================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0564
* Accuracy: 0.7887
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #multiple-choice #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_SOAPL
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
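No usage example is provided; a generic text2text-generation sketch for this checkpoint could look like the code below. The repo id comes from this card, and the input sentence is only a placeholder rather than a task-specific prompt:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT/CS505_COQE_viT5_train_Instruction0_SOAPL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "A placeholder input sentence for the model."  # replace with a real task input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```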
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
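A rough `Seq2SeqTrainingArguments` reconstruction of these settings is sketched below; the output directory is assumed, and Native AMP is expressed as `fp16=True`:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5_train_Instruction0_SOAPL",  # assumed, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # corresponds to "mixed_precision_training: Native AMP"
)
```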
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_SOAPL", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_SOAPL | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T09:16:50+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_SOAPL
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_SOAPL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_SOAPL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |