pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1-900k) | metadata (stringlengths, 2-438k) | id (stringlengths, 5-122) | last_modified (null) | tags (sequencelengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25) | arxiv (sequencelengths, 0-201) | languages (sequencelengths, 0-1.83k) | tags_str (stringlengths, 17-9.34k) | text_str (stringlengths, 0-389k) | text_lists (sequencelengths, 0-722) | processed_texts (sequencelengths, 1-723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) as a base.
### Models Merged
The following models were included in the merge:
* [Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B)
* [Epiculous/Fett-uccine-7B](https://huggingface.co/Epiculous/Fett-uccine-7B)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NeverSleep/Noromaid-7B-0.4-DPO
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
- model: Undi95/Toppy-M-7B
- model: Epiculous/Fett-uccine-7B
merge_method: model_stock
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
dtype: bfloat16
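# Note (not part of the original config): with mergekit installed, a config file like this
# is typically run via its CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory`.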
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Undi95/Toppy-M-7B", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "Epiculous/Fett-uccine-7B", "NeverSleep/Noromaid-7B-0.4-DPO"]} | varox34/7B-Model_Stock | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:Undi95/Toppy-M-7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:Epiculous/Fett-uccine-7B",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:02:28+00:00 | [
"2403.19522"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Undi95/Toppy-M-7B #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Epiculous/Fett-uccine-7B #base_model-NeverSleep/Noromaid-7B-0.4-DPO #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using SanjiWatsuki/Kunoichi-DPO-v2-7B as a base.
### Models Merged
The following models were included in the merge:
* Undi95/Toppy-M-7B
* Epiculous/Fett-uccine-7B
* NeverSleep/Noromaid-7B-0.4-DPO
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using SanjiWatsuki/Kunoichi-DPO-v2-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Undi95/Toppy-M-7B\n* Epiculous/Fett-uccine-7B\n* NeverSleep/Noromaid-7B-0.4-DPO",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Undi95/Toppy-M-7B #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Epiculous/Fett-uccine-7B #base_model-NeverSleep/Noromaid-7B-0.4-DPO #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using SanjiWatsuki/Kunoichi-DPO-v2-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Undi95/Toppy-M-7B\n* Epiculous/Fett-uccine-7B\n* NeverSleep/Noromaid-7B-0.4-DPO",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
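In the absence of an author-provided snippet, the following is a generic starting point inferred only from this repo's tags (transformers, llama, text-generation, conversational); treat the precision and device settings as assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nmdr/Llama-3-8B-Instruct-Physics-2k-Mufasa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" assumes the accelerate package is installed; adjust to your hardware.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The chat template shipped with the tokenizer formats the conversation for the model.
messages = [{"role": "user", "content": "State Newton's second law of motion."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```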
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | nmdr/Llama-3-8B-Instruct-Physics-2k-Mufasa | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:03:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
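Until the author adds specifics, a generic loading sketch is shown below; it assumes only what the repo tags state (a diffusers `StableDiffusionPipeline` checkpoint) and a CUDA GPU.

```python
import torch
from diffusers import StableDiffusionPipeline

# Drop torch_dtype and .to("cuda") to run (slowly) on CPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "rubbrband/juggernaut_reborn", torch_dtype=torch.float16
).to("cuda")

image = pipe("a scenic mountain lake at sunrise, highly detailed").images[0]
image.save("sample.png")
```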
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/juggernaut_reborn | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T07:06:53+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt-neox-20b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/gpt-neox-20b/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- EleutherAI/pile
---
GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained
on [the Pile](https://pile.eleuther.ai/) using the [GPT-NeoX
library](https://github.com/EleutherAI/gpt-neox). Its architecture intentionally
resembles that of GPT-3, and is almost identical to that of [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B). Its training dataset contains
a multitude of English-language texts, reflecting the general-purpose nature
of this model. See the [accompanying paper](https://arxiv.org/abs/2204.06745)
for details about model architecture (including how it differs from GPT-3),
training procedure, and additional evaluations.
### Model details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [GPT-NeoX-20B: An Open-Source Autoregressive Language
Model](https://arxiv.org/abs/2204.06745). For details about the training dataset,
see [the Pile paper](https://arxiv.org/abs/2101.00027), and [its data
sheet](https://arxiv.org/abs/2201.07311).
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing GPT-NeoX-20B documentation before asking about the model
on Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure style="width:30em">
| Hyperparameter | Value |
| ---------------------- | ----------- |
| n<sub>parameters</sub> | 20554567680 |
| n<sub>layers</sub> | 44 |
| d<sub>model</sub> | 6144 |
| n<sub>heads</sub> | 64 |
| d<sub>head</sub> | 96 |
| n<sub>vocab</sub> | 50257 |
| Sequence Length | 2048 |
| Learning Rate | 0.97 x 10<sup>-5</sup> |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
</figure>
### Uses and limitations
#### Intended use
GPT-NeoX-20B was developed primarily for research purposes. It learns an inner
representation of the English language that can be used to extract features
useful for downstream tasks.
In addition to scientific uses, you may also further fine-tune and adapt
GPT-NeoX-20B for deployment, as long as your use is in accordance with the
Apache 2.0 license. This model works with the [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that
you need to conduct your own risk and bias assessment.
#### Out-of-scope use
GPT-NeoX-20B is **not** intended for deployment as-is. It is not a product
and cannot be used for human-facing interactions without supervision.
GPT-NeoX-20B has not been fine-tuned for downstream tasks for which language
models are commonly deployed, such as writing genre prose, or commercial
chatbots. This means GPT-NeoX-20B will likely **not** respond to a given prompt
the way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B,
ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human
Feedback (RLHF) to better “understand” human instructions and dialogue.
This model is English-language only, and thus cannot be used for translation
or generating text in other languages.
#### Limitations and biases
The core functionality of GPT-NeoX-20B is to take a string of text and predict
the next token. Remember that the statistically most likely next token need
not result in the most “accurate” text. Never rely on GPT-NeoX-20B to produce
factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
GPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
We recommend curating the outputs of this model before presenting it to a human
reader. Please inform your audience that you are using artificially generated
text.
#### How to use
If you simply want to try out some prompts, check out [this
playground](https://20b.eleuther.ai/).
GPT-NeoX-20B can be loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
```
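As an illustrative follow-up (not part of the original card), a minimal generation call might look like the sketch below; the half-precision and device-mapping settings are assumptions, and the fp16 weights alone occupy roughly 40 GB of GPU memory.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

inputs = tokenizer("GPT-NeoX-20B is a 20 billion parameter", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```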
### Training
#### Training dataset
The Pile is an 825 GiB general-purpose dataset in English. It was created by
EleutherAI specifically for training large language models. It contains texts
from 22 diverse sources, roughly broken down into five categories: academic
writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project
Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,
Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for
a breakdown of all data sources, methodology, and a discussion of ethical
implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for
more detailed documentation about the Pile and its component datasets. The
Pile can be downloaded from the [official website](https://pile.eleuther.ai/),
or from a [community mirror](https://the-eye.eu/public/AI/pile/).
The Pile was **not** deduplicated before being used to train GPT-NeoX-20B.
#### Training procedure
GPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens
(1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor
parallelism and pipeline parallelism were used to distribute the model across
GPUs. Additional details about the training procedure are in [Section 3 of
the accompanying paper](https://arxiv.org/abs/2204.06745).
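As a back-of-the-envelope check (not part of the original card), these figures imply the total number of training tokens:

$$1538 \times 2048 \approx 3.15 \times 10^{6} \ \text{tokens/step}, \qquad 3.15 \times 10^{6} \times 150{,}000 \approx 4.7 \times 10^{11} \ \text{tokens}.$$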
### Evaluations
<figure style="width:55em">
| Model | OpenAI’s LAMBADA | SciQ | PIQA | TriviaQA | ARC (Challenge) |
| ------------- | :--------------: | :-----------: | :-----------: | :-----------: | :-------------: |
| GPT-J-6B | 0.683 ± 0.006 | 0.910 ± 0.009 | 0.752 ± 0.010 | 0.170 ± 0.004 | 0.340 ± 0.014 |
| FairSeq 6.7B | 0.673 ± 0.007 | 0.895 ± 0.010 | 0.762 ± 0.010 | 0.221 ± 0.004 | 0.329 ± 0.014 |
| GPT-3 Curie | 0.693 ± 0.006 | 0.918 ± 0.009 | 0.767 ± 0.010 | 0.196 ± 0.004 | 0.334 ± 0.014 |
| FairSeq 13B | 0.709 ± 0.006 | 0.910 ± 0.009 | 0.769 ± 0.010 | 0.270 ± 0.004 | 0.345 ± 0.014 |
| GPT-NeoX-20B | 0.720 ± 0.006 | 0.928 ± 0.008 | 0.779 ± 0.010 | 0.259 ± 0.004 | 0.380 ± 0.014 |
| GPT-3 DaVinci | 0.752 ± 0.006 | 0.949 ± 0.007 | 0.791 ± 0.009 | 0.409 ± 0.005 | 0.435 ± 0.014 |
<figcaption>Zero-shot performance on selected natural language tasks.</figcaption>
</figure>
This is a heavily abridged version of the evaluation results. Appendix D of the
[GPT-NeoX-20B paper](https://arxiv.org/abs/2204.06745) compares more model
sizes, and contains additional evaluations, including on: zero and five-shot
natural language tasks, zero and five-shot Basic Arithmetic and MATH,
and zero-shot Hendrycks tasks.
### BibTeX
To cite the GPT-NeoX-20B paper:
```
@misc{https://doi.org/10.48550/arxiv.2204.06745,
doi = {10.48550/ARXIV.2204.06745},
url = {https://arxiv.org/abs/2204.06745},
author = {Black, Sid and Biderman, Stella and Hallahan, Eric and Anthony, Quentin and Gao, Leo and Golding, Laurence and He, Horace and Leahy, Connor and McDonell, Kyle and Phang, Jason and Pieler, Michael and Prashanth, USVSN Sai and Purohit, Shivanshu and Reynolds, Laria and Tow, Jonathan and Wang, Ben and Weinbach, Samuel},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {GPT-NeoX-20B: An Open-Source Autoregressive Language Model},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__gpt-neox-20b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 36.02 |
| ARC (25-shot) | 45.73 |
| HellaSwag (10-shot) | 73.45 |
| MMLU (5-shot) | 25.0 |
| TruthfulQA (0-shot) | 31.61 |
| Winogrande (5-shot) | 68.9 |
| GSM8K (5-shot) | 2.43 |
| DROP (3-shot) | 5.04 |
| {} | RichardErkhov/EleutherAI_-_gpt-neox-20b-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2204.06745",
"arxiv:2101.00027",
"arxiv:2201.07311",
"arxiv:2104.09864",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:07:43+00:00 | [
"2204.06745",
"2101.00027",
"2201.07311",
"2104.09864"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2204.06745 #arxiv-2101.00027 #arxiv-2201.07311 #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
gpt-neox-20b - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
license: apache-2.0
datasets:
* EleutherAI/pile
---
GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained
on the Pile using the GPT-NeoX
library. Its architecture intentionally
resembles that of GPT-3, and is almost identical to that of GPT-J-6B. Its training dataset contains
a multitude of English-language texts, reflecting the general-purpose nature
of this model. See the accompanying paper
for details about model architecture (including how it differs from GPT-3),
training procedure, and additional evaluations.
### Model details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: GPT-NeoX-20B: An Open-Source Autoregressive Language
Model. For details about the training dataset,
see the Pile paper, and its data
sheet.
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing GPT-NeoX-20B documentation before asking about the model
on Discord. For general correspondence: [email protected].
### Uses and limitations
#### Intended use
GPT-NeoX-20B was developed primarily for research purposes. It learns an inner
representation of the English language that can be used to extract features
useful for downstream tasks.
In addition to scientific uses, you may also further fine-tune and adapt
GPT-NeoX-20B for deployment, as long as your use is in accordance with the
Apache 2.0 license. This model works with the Transformers
Library. If you decide to use
pre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that
you need to conduct your own risk and bias assessment.
#### Out-of-scope use
GPT-NeoX-20B is not intended for deployment as-is. It is not a product
and cannot be used for human-facing interactions without supervision.
GPT-NeoX-20B has not been fine-tuned for downstream tasks for which language
models are commonly deployed, such as writing genre prose, or commercial
chatbots. This means GPT-NeoX-20B will likely not respond to a given prompt
the way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B,
ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human
Feedback (RLHF) to better “understand” human instructions and dialogue.
This model is English-language only, and thus cannot be used for translation
or generating text in other languages.
#### Limitations and biases
The core functionality of GPT-NeoX-20B is to take a string of text and predict
the next token. Remember that the statistically most likely next token need
not result in the most “accurate” text. Never rely on GPT-NeoX-20B to produce
factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
GPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
We recommend curating the outputs of this model before presenting it to a human
reader. Please inform your audience that you are using artificially generated
text.
#### How to use
If you simply want to try out some prompts, check out this
playground.
GPT-NeoX-20B can be loaded using the 'AutoModelForCausalLM' functionality:
### Training
#### Training dataset
The Pile is an 825 GiB general-purpose dataset in English. It was created by
EleutherAI specifically for training large language models. It contains texts
from 22 diverse sources, roughly broken down into five categories: academic
writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project
Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,
Enron Emails). See the Pile paper for
a breakdown of all data sources, methodology, and a discussion of ethical
implications. Consult the datasheet for
more detailed documentation about the Pile and its component datasets. The
Pile can be downloaded from the official website,
or from a community mirror.
The Pile was not deduplicated before being used to train GPT-NeoX-20B.
#### Training procedure
GPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens
(1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor
parallelism and pipeline parallelism were used to distribute the model across
GPUs. Additional details about the training procedure are in Section 3 of
the accompanying paper.
### Evaluations
Zero-shot performance on selected natural language tasks.
This is a heavily abridged version of the evaluation results. Appendix D of the
GPT-NeoX-20B paper compares more model
sizes, and contains additional evaluations, including on: zero and five-shot
natural language tasks, zero and five-shot Basic Arithmetic and MATH,
and zero-shot Hendrycks tasks.
### BibTeX
To cite the GPT-NeoX-20B paper:
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"### Model details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: GPT-NeoX-20B: An Open-Source Autoregressive Language\nModel. For details about the training dataset,\nsee the Pile paper, and its data\nsheet.\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing GPT-NeoX-20B documentation before asking about the model\non Discord. For general correspondence: contact@eleuther.\nai.",
"### Uses and limitations",
"#### Intended use\n\n\nGPT-NeoX-20B was developed primarily for research purposes. It learns an inner\nrepresentation of the English language that can be used to extract features\nuseful for downstream tasks.\n\n\nIn addition to scientific uses, you may also further fine-tune and adapt\nGPT-NeoX-20B for deployment, as long as your use is in accordance with the\nApache 2.0 license. This model works with the Transformers\nLibrary. If you decide to use\npre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that\nyou need to conduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nGPT-NeoX-20B is not intended for deployment as-is. It is not a product\nand cannot be used for human-facing interactions without supervision.\n\n\nGPT-NeoX-20B has not been fine-tuned for downstream tasks for which language\nmodels are commonly deployed, such as writing genre prose, or commercial\nchatbots. This means GPT-NeoX-20B will likely not respond to a given prompt\nthe way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B,\nChatGPT was fine-tuned using methods such as Reinforcement Learning from Human\nFeedback (RLHF) to better “understand” human instructions and dialogue.\n\n\nThis model is English-language only, and thus cannot be used for translation\nor generating text in other languages.",
"#### Limitations and biases\n\n\nThe core functionality of GPT-NeoX-20B is to take a string of text and predict\nthe next token. Remember that the statistically most likely next token need\nnot result in the most “accurate” text. Never rely on GPT-NeoX-20B to produce\nfactually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nGPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nWe recommend curating the outputs of this model before presenting it to a human\nreader. Please inform your audience that you are using artificially generated\ntext.",
"#### How to use\n\n\nIf you simply want to try out some prompts, check out this\nplayground.\n\n\nGPT-NeoX-20B can be loaded using the 'AutoModelForCausalLM' functionality:",
"### Training",
"#### Training dataset\n\n\nThe Pile is a 825GiB general-purpose dataset in English. It was created by\nEleutherAI specifically for training large language models. It contains texts\nfrom 22 diverse sources, roughly broken down into five categories: academic\nwriting (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project\nGutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,\nEnron Emails). See the Pile paper for\na breakdown of all data sources, methodology, and a discussion of ethical\nimplications. Consult the datasheet for\nmore detailed documentation about the Pile and its component datasets. The\nPile can be downloaded from the official website,\nor from a community mirror.\n\n\nThe Pile was not deduplicated before being used to train GPT-NeoX-20B.",
"#### Training procedure\n\n\nGPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens\n(1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor\nparallelism and pipeline parallelism were used to distribute the model across\nGPUs. Additional details about the training procedure are in Section 3 of\nthe accompanying paper.",
"### Evaluations\n\n\n\n\nZero-shot performance on selected natural language tasks.\n\nThis is a heavily abridged version of the evaluation results. Appendix D of the\nGPT-NeoX-20B paper compares more model\nsizes, and contains additional evaluations, including on: zero and five-shot\nnatural language tasks, zero and five-shot Basic Arithmetic and MATH,\nand zero-shot Hendrycks tasks.",
"### BibTeX\n\n\nTo cite the GPT-NeoX-20B paper:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2204.06745 #arxiv-2101.00027 #arxiv-2201.07311 #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: GPT-NeoX-20B: An Open-Source Autoregressive Language\nModel. For details about the training dataset,\nsee the Pile paper, and its data\nsheet.\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing GPT-NeoX-20B documentation before asking about the model\non Discord. For general correspondence: contact@eleuther.\nai.",
"### Uses and limitations",
"#### Intended use\n\n\nGPT-NeoX-20B was developed primarily for research purposes. It learns an inner\nrepresentation of the English language that can be used to extract features\nuseful for downstream tasks.\n\n\nIn addition to scientific uses, you may also further fine-tune and adapt\nGPT-NeoX-20B for deployment, as long as your use is in accordance with the\nApache 2.0 license. This model works with the Transformers\nLibrary. If you decide to use\npre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that\nyou need to conduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nGPT-NeoX-20B is not intended for deployment as-is. It is not a product\nand cannot be used for human-facing interactions without supervision.\n\n\nGPT-NeoX-20B has not been fine-tuned for downstream tasks for which language\nmodels are commonly deployed, such as writing genre prose, or commercial\nchatbots. This means GPT-NeoX-20B will likely not respond to a given prompt\nthe way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B,\nChatGPT was fine-tuned using methods such as Reinforcement Learning from Human\nFeedback (RLHF) to better “understand” human instructions and dialogue.\n\n\nThis model is English-language only, and thus cannot be used for translation\nor generating text in other languages.",
"#### Limitations and biases\n\n\nThe core functionality of GPT-NeoX-20B is to take a string of text and predict\nthe next token. Remember that the statistically most likely next token need\nnot result in the most “accurate” text. Never rely on GPT-NeoX-20B to produce\nfactually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nGPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nWe recommend curating the outputs of this model before presenting it to a human\nreader. Please inform your audience that you are using artificially generated\ntext.",
"#### How to use\n\n\nIf you simply want to try out some prompts, check out this\nplayground.\n\n\nGPT-NeoX-20B can be loaded using the 'AutoModelForCausalLM' functionality:",
"### Training",
"#### Training dataset\n\n\nThe Pile is a 825GiB general-purpose dataset in English. It was created by\nEleutherAI specifically for training large language models. It contains texts\nfrom 22 diverse sources, roughly broken down into five categories: academic\nwriting (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project\nGutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,\nEnron Emails). See the Pile paper for\na breakdown of all data sources, methodology, and a discussion of ethical\nimplications. Consult the datasheet for\nmore detailed documentation about the Pile and its component datasets. The\nPile can be downloaded from the official website,\nor from a community mirror.\n\n\nThe Pile was not deduplicated before being used to train GPT-NeoX-20B.",
"#### Training procedure\n\n\nGPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens\n(1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor\nparallelism and pipeline parallelism were used to distribute the model across\nGPUs. Additional details about the training procedure are in Section 3 of\nthe accompanying paper.",
"### Evaluations\n\n\n\n\nZero-shot performance on selected natural language tasks.\n\nThis is a heavily abridged version of the evaluation results. Appendix D of the\nGPT-NeoX-20B paper compares more model\nsizes, and contains additional evaluations, including on: zero and five-shot\nnatural language tasks, zero and five-shot Basic Arithmetic and MATH,\nand zero-shot Hendrycks tasks.",
"### BibTeX\n\n\nTo cite the GPT-NeoX-20B paper:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
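Until the TODO above is filled in, here is a minimal sketch of the usual load-and-evaluate flow; the checkpoint filename is an assumption and should be adjusted to the actual file in this repo.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from this repo; the filename is a guess at the usual convention.
checkpoint = load_from_hub(repo_id="Itamarb123/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```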
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "260.21 +/- 15.02", "name": "mean_reward", "verified": false}]}]}]} | Itamarb123/LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-23T07:08:13+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
polyglot-ko-5.8b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/polyglot-ko-5.8b/
Original model description:
---
language:
- ko
tags:
- pytorch
- causal-lm
license: apache-2.0
---
# Polyglot-Ko-5.8B
## Model Description
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 5,885,059,072 |
| \\(n_{layers}\\) | 28 |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16,384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2,048 |
| \\(n_{vocab}\\) | 30,003 / 30,080 |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
The model consists of 28 transformer layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 30003.
## Training data
Polyglot-Ko-5.8B was trained on 863 GB of Korean language data (1.2 TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process complied with South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.
| Source |Size (GB) | Link |
|-------------------------------------|---------|------------------------------------------|
| Korean blog posts | 682.3 | - |
| Korean news dataset | 87.0 | - |
| Modu corpus | 26.4 |corpus.korean.go.kr |
| Korean patent dataset | 19.0 | - |
| Korean Q & A dataset | 18.1 | - |
| KcBert dataset | 12.7 | github.com/Beomi/KcBERT |
| Korean fiction dataset | 6.1 | - |
| Korean online comments | 4.2 | - |
| Korean wikipedia | 1.4 | ko.wikipedia.org |
| Clova call | < 1.0 | github.com/clovaai/ClovaCall |
| Naver sentiment movie corpus | < 1.0 | github.com/e9t/nsmc |
| Korean hate speech dataset | < 1.0 | - |
| Open subtitles | < 1.0 | opus.nlpl.eu/OpenSubtitles.php |
| AIHub various tasks datasets | < 1.0 |aihub.or.kr |
| Standard Korean language dictionary | < 1.0 | stdict.korean.go.kr/main/main.do |
Furthermore, to prevent the model from memorizing and generating personally identifiable information (PII) present in the training data, we masked out the following sensitive information in the pre-processing stage (an illustrative sketch of such masking follows the list):
* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number
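The sketch below illustrates this kind of token-based masking; the regular expressions are hypothetical examples, not the team's actual pre-processing rules.

```python
import re

# Illustrative patterns only; real account-number formats vary by bank.
PII_PATTERNS = [
    (r"\b\d{6}-\d{7}\b", "<|rrn|>"),                  # resident registration number
    (r"\b01[016789]-?\d{3,4}-?\d{4}\b", "<|tell|>"),  # mobile phone number
    (r"\b\d{2,6}-\d{2,6}-\d{2,6}\b", "<|acc|>"),      # bank account number
]

def mask_pii(text: str) -> str:
    # Apply the more specific patterns first so broader ones do not swallow them.
    for pattern, token in PII_PATTERNS:
        text = re.sub(pattern, token, text)
    return text

print(mask_pii("문의: 010-1234-5678"))  # -> 문의: <|tell|>
```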
## Training procedure
Polyglot-Ko-5.8B was trained for 172 billion tokens over 320,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.
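For reference, the standard autoregressive cross-entropy objective mentioned above can be written as

$$\mathcal{L}(\theta) = -\sum_{t} \log p_\theta\left(x_t \mid x_{<t}\right),$$

i.e. the negative log-likelihood of each token given all preceding tokens in the sequence.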
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` class:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-5.8b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-5.8b")
```
## Evaluation results
We evaluate Polyglot-Ko-5.8B on the [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.
The following tables show the results as the number of few-shot examples varies. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples.
In the case of the WiC dataset, all models show random performance.
```console
python main.py \
   --model gpt2 \
   --model_args pretrained='EleutherAI/polyglot-ko-5.8b' \
   --tasks kobest_copa,kobest_hellaswag \
   --num_fewshot $YOUR_NUM_FEWSHOT \
   --batch_size $YOUR_BATCH_SIZE \
   --device $YOUR_DEVICE \
   --output_path $YOUR_OUTPUT_PATH
```
### COPA (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| **[EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) (this)** | **5.8B** | **0.7745** | **0.7676** | **0.7775** | **0.7887** |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px">
### HellaSwag (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| **[EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) (this)** | **5.8B** | **0.5976** | **0.5998** | **0.5979** | **0.6208** |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px">
### BoolQ (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| **[EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) (this)** | **5.8B** | **0.4356** | **0.5698** | **0.5187** | **0.5236** |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px">
### SentiNeg (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| **[EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) (this)** | **5.8B** | **0.3394** | **0.8841** | **0.8808** | **0.9521** |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px">
### WiC (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.3390 | 0.4944 | 0.4203 | 0.3835 |
| **[EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) (this)** | **5.8B** | **0.3913** | **0.4688** | **0.4189** | **0.3910** |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.3985 | 0.3683 | 0.3307 | 0.3273 |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px">
## Limitations and Biases
Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.
## Citation and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
```bibtex
@misc{ko2023technical,
title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models},
      author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and Jiwung Hyun and Sungho Park},
year={2023},
eprint={2306.02254},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Licensing
All our models are licensed under the terms of the Apache License 2.0.
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
### Acknowledgement
This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
| {} | RichardErkhov/EleutherAI_-_polyglot-ko-5.8b-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2104.09864",
"arxiv:2204.04541",
"arxiv:2306.02254",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:08:26+00:00 | [
"2104.09864",
"2204.04541",
"2306.02254"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2104.09864 #arxiv-2204.04541 #arxiv-2306.02254 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
polyglot-ko-5.8b - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* ko
tags:
* pytorch
* causal-lm
license: apache-2.0
---
Polyglot-Ko-5.8B
================
Model Description
-----------------
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.
The model consists of 28 transformer layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 30003.
Training data
-------------
Polyglot-Ko-5.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by TUNiB. The data collection process has abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.
Source: Korean blog posts, Size (GB): 682.3, Link: -
Source: Korean news dataset, Size (GB): 87.0, Link: -
Source: Modu corpus, Size (GB): 26.4, Link: URL
Source: Korean patent dataset, Size (GB): 19.0, Link: -
Source: Korean Q & A dataset, Size (GB): 18.1, Link: -
Source: KcBert dataset, Size (GB): 12.7, Link: URL
Source: Korean fiction dataset, Size (GB): 6.1, Link: -
Source: Korean online comments, Size (GB): 4.2, Link: -
Source: Korean wikipedia, Size (GB): 1.4, Link: URL
Source: Clova call, Size (GB): < 1.0, Link: URL
Source: Naver sentiment movie corpus, Size (GB): < 1.0, Link: URL
Source: Korean hate speech dataset, Size (GB): < 1.0, Link: -
Source: Open subtitles, Size (GB): < 1.0, Link: URL
Source: AIHub various tasks datasets, Size (GB): < 1.0, Link: URL
Source: Standard Korean language dictionary, Size (GB): < 1.0, Link: URL
Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage:
* '<|acc|>' : bank account number
* '<|rrn|>' : resident registration number
* '<|tell|>' : phone number
Training procedure
------------------
Polyglot-Ko-5.8B was trained for 172 billion tokens over 320,000 steps on 256 A100 GPUs with the GPT-NeoX framework. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.
How to use
----------
This model can be easily loaded using the 'AutoModelForCausalLM' class:
Evaluation results
------------------
We evaluate Polyglot-Ko-5.8B on the KOBEST dataset, a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.
The following tables show the results when the number of few-shot examples differs. You can reproduce these results using the polyglot branch of lm-evaluation-harness and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, 'n' refers to the number of few-shot examples.
In the case of the WiC dataset, all models show random performance.
### COPA (F1)
<img src="URL width="800px">
### HellaSwag (F1)
<img src="URL width="800px">
### BoolQ (F1)
<img src="URL width="800px">
### SentiNeg (F1)
<img src="URL width="800px">
### WiC (F1)
<img src="URL width="800px">
Limitations and Biases
----------------------
Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.
and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
### Licensing
All our models are licensed under the terms of the Apache License 2.0.
### Acknowledgement
This project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work.
| [
"### COPA (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### HellaSwag (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### BoolQ (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### SentiNeg (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### WiC (F1)\n\n\n\n<img src=\"URL width=\"800px\">\n\n\nLimitations and Biases\n----------------------\n\n\nPolyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.\n\n\nand Related Information",
"### BibTeX entry\n\n\nIf you find our work useful, please consider citing:",
"### Licensing\n\n\nAll our models are licensed under the terms of the Apache License 2.0.",
"### Acknowledgement\n\n\nThis project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2104.09864 #arxiv-2204.04541 #arxiv-2306.02254 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### COPA (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### HellaSwag (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### BoolQ (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### SentiNeg (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### WiC (F1)\n\n\n\n<img src=\"URL width=\"800px\">\n\n\nLimitations and Biases\n----------------------\n\n\nPolyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.\n\n\nand Related Information",
"### BibTeX entry\n\n\nIf you find our work useful, please consider citing:",
"### Licensing\n\n\nAll our models are licensed under the terms of the Apache License 2.0.",
"### Acknowledgement\n\n\nThis project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work."
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_RL1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- F1: 0.5146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
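The hyperparameters above roughly correspond to the Hugging Face `TrainingArguments` below. This is a sketch only: the actual training script is not published, and treating the listed batch sizes as per-device values is an assumption.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="copa_rl1",            # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=16,   # assumed to be per-device
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```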
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.6931 | 0.4509 |
| No log | 2.0 | 126 | 0.6931 | 0.4754 |
| No log | 3.0 | 189 | 0.6931 | 0.5079 |
| No log | 4.0 | 252 | 0.6931 | 0.4969 |
| No log | 5.0 | 315 | 0.6931 | 0.5245 |
| No log | 6.0 | 378 | 0.6931 | 0.5146 |
| No log | 7.0 | 441 | 0.6931 | 0.5294 |
| 0.6981 | 8.0 | 504 | 0.6931 | 0.5398 |
| 0.6981 | 9.0 | 567 | 0.6931 | 0.5205 |
| 0.6981 | 10.0 | 630 | 0.6931 | 0.5146 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "FacebookAI/xlm-roberta-large", "model-index": [{"name": "COPA_RL1", "results": []}]} | Ariffiq99/COPA_RL1 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:11:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #multiple-choice #generated_from_trainer #base_model-FacebookAI/xlm-roberta-large #license-mit #endpoints_compatible #region-us
| COPA\_RL1
=========
This model is a fine-tuned version of FacebookAI/xlm-roberta-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6931
* F1: 0.5146
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #multiple-choice #generated_from_trainer #base_model-FacebookAI/xlm-roberta-large #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | shivanikerai/TinyLlama-1.1B-Chat-v1.0-adapter-title-suggestion-v1.0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-23T07:12:37+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v3
This model is a fine-tuned version of [happyrobot-ai/Llama3_function_calling_v1](https://huggingface.co/happyrobot-ai/Llama3_function_calling_v1) on the jobandtalent_sft_v3 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 6.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "other", "library_name": "peft", "tags": ["llama-factory", "lora", "generated_from_trainer"], "base_model": "happyrobot-ai/Llama3_function_calling_v1", "model-index": [{"name": "v3", "results": []}]} | happyrobot-ai/jobandtalent-shift-not-confirmed-llama3 | null | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:happyrobot-ai/Llama3_function_calling_v1",
"license:other",
"region:us"
] | null | 2024-04-23T07:14:38+00:00 | [] | [] | TAGS
#peft #safetensors #llama-factory #lora #generated_from_trainer #base_model-happyrobot-ai/Llama3_function_calling_v1 #license-other #region-us
|
# v3
This model is a fine-tuned version of happyrobot-ai/Llama3_function_calling_v1 on the jobandtalent_sft_v3 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 6.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# v3\n\nThis model is a fine-tuned version of happyrobot-ai/Llama3_function_calling_v1 on the jobandtalent_sft_v3 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 6.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #llama-factory #lora #generated_from_trainer #base_model-happyrobot-ai/Llama3_function_calling_v1 #license-other #region-us \n",
"# v3\n\nThis model is a fine-tuned version of happyrobot-ai/Llama3_function_calling_v1 on the jobandtalent_sft_v3 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 6.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# J.O.S.I.E.3-Beta12-7B-slerp
J.O.S.I.E.3-Beta12-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Weyaxi/Einstein-v6-7B](https://huggingface.co/Weyaxi/Einstein-v6-7B)
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
This model has been further fine-tuned on my custom J.O.S.I.E.v3.11 dataset, in the ChatML prompt format.
```text
<|im_start|>system
You are JOSIE, my private and superinteligent AI Assistant.<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
{{ .Response }}<|im_end|>
```
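For reference, the prompt above can be assembled by hand as follows. This is a sketch only: the system prompt string is copied verbatim from the template (including its original spelling), and the user message is a placeholder.
```python
def build_josie_prompt(user_message: str) -> str:
    # System prompt copied verbatim from the template above.
    system = "You are JOSIE, my private and superinteligent AI Assistant."
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_josie_prompt("What is a large language model?"))
```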
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Weyaxi/Einstein-v6-7B
layer_range: [0, 32]
- model: argilla/CapybaraHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
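Conceptually, the `slerp` merge method interpolates each pair of weight tensors along the great circle between them, with the interpolation factor `t` set per parameter group as in the YAML above. The sketch below is a simplified illustration, not mergekit's actual implementation:
```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (simplified)."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    v0n, v1n = v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(v0n, v1n), -1.0, 1.0))
    if omega.abs() < eps:                      # nearly parallel: plain lerp
        return (1 - t) * w0 + t * w1
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
    return out.reshape(w0.shape).to(w0.dtype)
```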
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Isaak-Carter/J.O.S.I.E.3-Beta12-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# Evaluation results:
```json
{
"all": {
"acc": 0.635008846776534,
"acc_stderr": 0.03244450973873997,
"acc_norm": 0.6365238167399629,
"acc_norm_stderr": 0.033101612504829854,
"mc1": 0.397796817625459,
"mc1_stderr": 0.017133934248559635,
"mc2": 0.5816259277988214,
"mc2_stderr": 0.01521267822060948
},
"harness|arc:challenge|25": {
"acc": 0.6220136518771331,
"acc_stderr": 0.0141696645203031,
"acc_norm": 0.6459044368600683,
"acc_norm_stderr": 0.013975454122756557
},
"harness|hellaswag|10": {
"acc": 0.6512646883091018,
"acc_stderr": 0.004755960559929163,
"acc_norm": 0.8397729535949015,
"acc_norm_stderr": 0.003660668242740655
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.042763494943765995,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.042763494943765995
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880267,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880267
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.036563436533531585,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.036563436533531585
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.04655010411319619,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.04655010411319619
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.025424835086924006,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.025424835086924006
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7548387096774194,
"acc_stderr": 0.024472243840895525,
"acc_norm": 0.7548387096774194,
"acc_norm_stderr": 0.024472243840895525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922988,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922988
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.023814477086593552,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.023814477086593552
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.658974358974359,
"acc_stderr": 0.02403548967633509,
"acc_norm": 0.658974358974359,
"acc_norm_stderr": 0.02403548967633509
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547308,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547308
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135363,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135363
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943344,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943344
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8238532110091743,
"acc_stderr": 0.016332882393431353,
"acc_norm": 0.8238532110091743,
"acc_norm_stderr": 0.016332882393431353
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.03409386946992699,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.03409386946992699
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.02812597226565437,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.02812597226565437
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.759493670886076,
"acc_stderr": 0.027820781981149685,
"acc_norm": 0.759493670886076,
"acc_norm_stderr": 0.027820781981149685
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.03160295143776679,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.03160295143776679
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.036401182719909456,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.036401182719909456
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077802,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077802
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8173690932311622,
"acc_stderr": 0.013816335389973141,
"acc_norm": 0.8173690932311622,
"acc_norm_stderr": 0.013816335389973141
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.02402774515526502,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.02402774515526502
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.27039106145251396,
"acc_stderr": 0.014854993938010071,
"acc_norm": 0.27039106145251396,
"acc_norm_stderr": 0.014854993938010071
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.02473998135511359,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.02473998135511359
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7331189710610932,
"acc_stderr": 0.025122637608816653,
"acc_norm": 0.7331189710610932,
"acc_norm_stderr": 0.025122637608816653
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.024922001168886324,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.024922001168886324
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.02973659252642444,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.02973659252642444
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4680573663624511,
"acc_stderr": 0.012744149704869647,
"acc_norm": 0.4680573663624511,
"acc_norm_stderr": 0.012744149704869647
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.01933314202079716,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.01933314202079716
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6816326530612244,
"acc_stderr": 0.029822533793982062,
"acc_norm": 0.6816326530612244,
"acc_norm_stderr": 0.029822533793982062
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.025196929874827072,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.025196929874827072
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263734,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263734
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.397796817625459,
"mc1_stderr": 0.017133934248559635,
"mc2": 0.5816259277988214,
"mc2_stderr": 0.01521267822060948
},
"harness|winogrande|5": {
"acc": 0.7963693764798737,
"acc_stderr": 0.011317798781626913
},
"harness|gsm8k|5": {
"acc": 0.5966641394996209,
"acc_stderr": 0.013512654781814702
}
}
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "Weyaxi/Einstein-v6-7B", "argilla/CapybaraHermes-2.5-Mistral-7B"], "base_model": ["Weyaxi/Einstein-v6-7B", "argilla/CapybaraHermes-2.5-Mistral-7B"]} | Isaak-Carter/J.O.S.I.E.3-Beta12-7B-slerp | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Weyaxi/Einstein-v6-7B",
"argilla/CapybaraHermes-2.5-Mistral-7B",
"conversational",
"base_model:Weyaxi/Einstein-v6-7B",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:14:42+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #Weyaxi/Einstein-v6-7B #argilla/CapybaraHermes-2.5-Mistral-7B #conversational #base_model-Weyaxi/Einstein-v6-7B #base_model-argilla/CapybaraHermes-2.5-Mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# J.O.S.I.E.3-Beta12-7B-slerp
J.O.S.I.E.3-Beta12-7B-slerp is a merge of the following models using LazyMergekit:
* Weyaxi/Einstein-v6-7B
* argilla/CapybaraHermes-2.5-Mistral-7B
This model has been further Finetuned on my custom J.O.S.I.E.v3.11 Dataset, in the ChatML prompt Format.
## Configuration
## Usage
# Evaluation results:
| [
"# J.O.S.I.E.3-Beta12-7B-slerp\n\nJ.O.S.I.E.3-Beta12-7B-slerp is a merge of the following models using LazyMergekit:\n* Weyaxi/Einstein-v6-7B\n* argilla/CapybaraHermes-2.5-Mistral-7B\n\nThis model has been further Finetuned on my custom J.O.S.I.E.v3.11 Dataset, in the ChatML prompt Format.",
"## Configuration",
"## Usage",
"# Evaluation results:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #Weyaxi/Einstein-v6-7B #argilla/CapybaraHermes-2.5-Mistral-7B #conversational #base_model-Weyaxi/Einstein-v6-7B #base_model-argilla/CapybaraHermes-2.5-Mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# J.O.S.I.E.3-Beta12-7B-slerp\n\nJ.O.S.I.E.3-Beta12-7B-slerp is a merge of the following models using LazyMergekit:\n* Weyaxi/Einstein-v6-7B\n* argilla/CapybaraHermes-2.5-Mistral-7B\n\nThis model has been further Finetuned on my custom J.O.S.I.E.v3.11 Dataset, in the ChatML prompt Format.",
"## Configuration",
"## Usage",
"# Evaluation results:"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "diffusers"} | Niggendar/ponysauceXL_v10 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-04-23T07:14:47+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-assamese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
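
Pending fuller documentation, a minimal transcription sketch is shown below; it assumes the standard CTC inference path produced by this fine-tune, and the audio file path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a CTC speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese",
)

# "sample.wav" is a placeholder path; the pipeline resamples audio to the
# feature extractor's 16 kHz sampling rate.
print(asr("sample.wav")["text"])
```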
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
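
Expressed with the Hugging Face `Trainer` API, these settings correspond roughly to the `TrainingArguments` sketch below; the output directory is a placeholder and any field not listed above keeps its default value.

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above; the Adam
# betas/epsilon given above already match the Trainer defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-assamese",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```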
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_11_0"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-large-xls-r-300m-assamese", "results": []}]} | jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:14:57+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-assamese
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_11_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# wav2vec2-large-xls-r-300m-assamese\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_11_0 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-assamese\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_11_0 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-UltimaProvaCluster-Cluster2di4-5epochs-Resol964x1350
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
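
In the absence of documented usage details, the sketch below follows the standard Donut inference flow; the task-start prompt `<s>` and the image path are placeholders, since the prompt token used for this fine-tune is not documented here.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster2di4-5epochs-Resol964x1350"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s>" is a placeholder task prompt; replace it with the task-start token
# used during fine-tuning.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
)
# Special tokens may need to be stripped before converting to JSON.
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))
```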
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster2di4-5epochs-Resol964x1350", "results": []}]} | tedad09/PolizzeDonut-UltimaProvaCluster-Cluster2di4-5epochs-Resol964x1350 | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:17:12+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# PolizzeDonut-UltimaProvaCluster-Cluster2di4-5epochs-Resol964x1350
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# PolizzeDonut-UltimaProvaCluster-Cluster2di4-5epochs-Resol964x1350\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# PolizzeDonut-UltimaProvaCluster-Cluster2di4-5epochs-Resol964x1350\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
polyglot-ko-12.8b - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/polyglot-ko-12.8b/
Original model description:
---
language:
- ko
tags:
- pytorch
- causal-lm
license: apache-2.0
---
# Polyglot-Ko-12.8B
## Model Description
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 12,898,631,680 |
| \\(n_{layers}\\) | 40 |
| \\(d_{model}\\) | 5120 |
| \\(d_{ff}\\) | 20,480 |
| \\(n_{heads}\\) | 40 |
| \\(d_{head}\\) | 128 |
| \\(n_{ctx}\\) | 2,048 |
| \\(n_{vocab}\\) | 30,003 / 30,080 |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
The model consists of 40 transformer layers with a model dimension of 5120, and a feedforward dimension of 20480. The model
dimension is split into 40 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 30003.
## Training data
Polyglot-Ko-12.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process has abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.
| Source |Size (GB) | Link |
|-------------------------------------|---------|------------------------------------------|
| Korean blog posts | 682.3 | - |
| Korean news dataset | 87.0 | - |
| Modu corpus | 26.4 |corpus.korean.go.kr |
| Korean patent dataset | 19.0 | - |
| Korean Q & A dataset | 18.1 | - |
| KcBert dataset | 12.7 | github.com/Beomi/KcBERT |
| Korean fiction dataset | 6.1 | - |
| Korean online comments | 4.2 | - |
| Korean wikipedia | 1.4 | ko.wikipedia.org |
| Clova call | < 1.0 | github.com/clovaai/ClovaCall |
| Naver sentiment movie corpus | < 1.0 | github.com/e9t/nsmc |
| Korean hate speech dataset | < 1.0 | - |
| Open subtitles | < 1.0 | opus.nlpl.eu/OpenSubtitles.php |
| AIHub various tasks datasets | < 1.0 |aihub.or.kr |
| Standard Korean language dictionary | < 1.0 | stdict.korean.go.kr/main/main.do |
Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage:
* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number
## Training procedure
Polyglot-Ko-12.8B was trained for 167 billion tokens over 301,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` class:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-12.8b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-12.8b")
```
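
Once loaded, generation follows the usual `generate` API; the prompt and sampling settings below are illustrative only.

```python
# Continuing from the loading snippet above.
prompt = "한국어 언어 모델은"
inputs = tokenizer(prompt, return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```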
## Evaluation results
We evaluate Polyglot-Ko-12.8B on the [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.
The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples.
In the case of the WiC dataset, all models show random performance.
```console
python main.py \
--model gpt2 \
--model_args pretrained='EleutherAI/polyglot-ko-3.8b' \
--tasks kobest_copa,kobest_hellaswag \
--num_fewshot $YOUR_NUM_FEWSHOT \
--batch_size $YOUR_BATCH_SIZE \
--device $YOUR_DEVICE \
--output_path $/path/to/output/
```
### COPA (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.7937** | **0.8108** | **0.8037** | **0.8369** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px">
### HellaSwag (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.5954** | **0.6306** | **0.6098** | **0.6118** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px">
### BoolQ (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.4818** | **0.6041** | **0.6289** | **0.6448** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px">
### SentiNeg (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.9117** | **0.9015** | **0.9345** | **0.9723** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px">
### WiC (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.3390 | 0.4944 | 0.4203 | 0.3835 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.3985** | **0.3683** | **0.3307** | **0.3273** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px">
## Limitations and Biases
Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.
## Citation and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
```bibtex
@misc{ko2023technical,
title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models},
author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and jiwung Hyun and Sungho Park},
year={2023},
eprint={2306.02254},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Licensing
All our models are licensed under the terms of the Apache License 2.0.
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
### Acknowledgement
This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
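
Since this repository hosts the bitsandbytes 4-bit quantization of the model described above, loading it is expected to be a plain `from_pretrained` call; the sketch below assumes the 4-bit quantization config is embedded in the checkpoint and that `bitsandbytes` and `accelerate` are installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: bitsandbytes (and accelerate, for device_map) must be installed,
# and the 4-bit quantization config is assumed to be embedded in this checkpoint.
model_id = "RichardErkhov/EleutherAI_-_polyglot-ko-12.8b-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```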
| {} | RichardErkhov/EleutherAI_-_polyglot-ko-12.8b-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2104.09864",
"arxiv:2204.04541",
"arxiv:2306.02254",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:19:13+00:00 | [
"2104.09864",
"2204.04541",
"2306.02254"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2104.09864 #arxiv-2204.04541 #arxiv-2306.02254 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
polyglot-ko-12.8b - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* ko
tags:
* pytorch
* causal-lm
license: apache-2.0
---
Polyglot-Ko-12.8B
=================
Model Description
-----------------
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.
The model consists of 40 transformer layers with a model dimension of 5120, and a feedforward dimension of 20480. The model
dimension is split into 40 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 30003.
Training data
-------------
Polyglot-Ko-12.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by TUNiB. The data collection process has abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.
Source: Korean blog posts, Size (GB): 682.3, Link: -
Source: Korean news dataset, Size (GB): 87.0, Link: -
Source: Modu corpus, Size (GB): 26.4, Link: URL
Source: Korean patent dataset, Size (GB): 19.0, Link: -
Source: Korean Q & A dataset, Size (GB): 18.1, Link: -
Source: KcBert dataset, Size (GB): 12.7, Link: URL
Source: Korean fiction dataset, Size (GB): 6.1, Link: -
Source: Korean online comments, Size (GB): 4.2, Link: -
Source: Korean wikipedia, Size (GB): 1.4, Link: URL
Source: Clova call, Size (GB): < 1.0, Link: URL
Source: Naver sentiment movie corpus, Size (GB): < 1.0, Link: URL
Source: Korean hate speech dataset, Size (GB): < 1.0, Link: -
Source: Open subtitles, Size (GB): < 1.0, Link: URL
Source: AIHub various tasks datasets, Size (GB): < 1.0, Link: URL
Source: Standard Korean language dictionary, Size (GB): < 1.0, Link: URL
Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage:
* '<|acc|>' : bank account number
* '<|rrn|>' : resident registration number
* '<|tell|>' : phone number
Training procedure
------------------
Polyglot-Ko-12.8B was trained for 167 billion tokens over 301,000 steps on 256 A100 GPUs with the GPT-NeoX framework. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.
How to use
----------
This model can be easily loaded using the 'AutoModelForCausalLM' class:
Evaluation results
------------------
We evaluate Polyglot-Ko-3.8B on KOBEST dataset, a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.
The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the polyglot branch of lm-evaluation-harness and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, 'n' refers to the number of few-shot examples.
In case of WiC dataset, all models show random performance.
### COPA (F1)
<img src="URL width="800px">
### HellaSwag (F1)
<img src="URL width="800px">
### BoolQ (F1)
<img src="URL width="800px">
### SentiNeg (F1)
<img src="URL width="800px">
### WiC (F1)
<img src="URL width="800px">
Limitations and Biases
----------------------
Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.
and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
### Licensing
All our models are licensed under the terms of the Apache License 2.0.
### Acknowledgement
This project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work.
| [
"### COPA (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### HellaSwag (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### BoolQ (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### SentiNeg (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### WiC (F1)\n\n\n\n<img src=\"URL width=\"800px\">\n\n\nLimitations and Biases\n----------------------\n\n\nPolyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.\n\n\nand Related Information",
"### BibTeX entry\n\n\nIf you find our work useful, please consider citing:",
"### Licensing\n\n\nAll our models are licensed under the terms of the Apache License 2.0.",
"### Acknowledgement\n\n\nThis project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2104.09864 #arxiv-2204.04541 #arxiv-2306.02254 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### COPA (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### HellaSwag (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### BoolQ (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### SentiNeg (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### WiC (F1)\n\n\n\n<img src=\"URL width=\"800px\">\n\n\nLimitations and Biases\n----------------------\n\n\nPolyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.\n\n\nand Related Information",
"### BibTeX entry\n\n\nIf you find our work useful, please consider citing:",
"### Licensing\n\n\nAll our models are licensed under the terms of the Apache License 2.0.",
"### Acknowledgement\n\n\nThis project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-160M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
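
Since this repository hosts the bitsandbytes 4-bit quantization of Pythia-160M, loading it is expected to be a plain `from_pretrained` call; the sketch below assumes the quantization config is embedded in the checkpoint and that `bitsandbytes` and `accelerate` are installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: bitsandbytes (and accelerate, for device_map) must be installed,
# and the 4-bit quantization config is assumed to be embedded in this checkpoint.
model_id = "RichardErkhov/EleutherAI_-_pythia-160m-v0-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

print(f"approx. memory footprint: {model.get_memory_footprint() / 1e6:.0f} MB")
```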
| {} | RichardErkhov/EleutherAI_-_pythia-160m-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:19:15+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M
-----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not a in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-160M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
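
Because this repository hosts the bitsandbytes 8-bit quantization rather than the original weights, loading it looks slightly different. Below is a minimal sketch, not an official recipe: the repo id is taken from this page's metadata, and it assumes `bitsandbytes` plus `accelerate` are installed and a CUDA device is available. If the quantization settings are not picked up from the repo's config automatically, pass `quantization_config=BitsAndBytesConfig(load_in_8bit=True)` explicitly.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/EleutherAI_-_pythia-160m-v0-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo)
# device_map="auto" places the 8-bit weights on the available GPU(s);
# the bitsandbytes settings are expected to be read from the repo's config.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```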
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-160M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
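
These totals follow directly from the batch size and step count described next; a quick arithmetic check:

```python
tokens_per_step = 2_097_152            # 2M-token batch size
total_steps = 143_000
tokens_per_checkpoint = 2_097_152_000

total_tokens = tokens_per_step * total_steps
assert total_tokens == 299_892_736_000                      # tokens seen by each model
assert tokens_per_checkpoint // tokens_per_step == 1_000    # one checkpoint every 1,000 steps
assert total_tokens // tokens_per_checkpoint == 143         # 143 checkpoints per model
```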
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
4M-token batch size were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
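
A quick way to check the shared-tokenizer claim above (this downloads only tokenizer files, not the 20B model weights; both repo ids are taken from this card):

```python
from transformers import AutoTokenizer

pythia_tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m-v0")
neox_tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

sample = "The Pile is an 825GiB general-purpose English dataset."
# If the tokenizers are indeed identical, the token ids match exactly.
assert pythia_tok(sample)["input_ids"] == neox_tok(sample)["input_ids"]
```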
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-160m-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:19:45+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M
-----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
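
(The snippet below reproduces the quickstart example from the original card above; `step3000` is just one example revision.)

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```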
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-160M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
4M-token batch size were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# graph-classification-1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2365
- eval_accuracy: 0.9285
- eval_runtime: 32.2134
- eval_samples_per_second: 142.332
- eval_steps_per_second: 4.47
- epoch: 8.0
- step: 1148
## Model description
More information needed
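
The card gives no description, but its metadata identifies a ViT image classifier fine-tuned from `google/vit-base-patch16-224-in21k`. A minimal, hedged inference sketch follows; the repo id `giahy2507/graph-classification-1` is taken from this page's metadata, the image path is a placeholder, and the label map is not documented, so only the raw prediction is shown.

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "giahy2507/graph-classification-1"

# If the fine-tuned repo does not ship a preprocessor config, fall back to
# "google/vit-base-patch16-224-in21k" for the image processor.
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.png").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(pred, model.config.id2label.get(pred))
```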
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
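
For orientation, here is a hedged `TrainingArguments` sketch that mirrors the list above; `output_dir` and anything not listed (logging, evaluation strategy, and so on) are assumptions rather than values from this card.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="graph-classification-1",  # assumed, not stated on the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,        # 32 x 4 = total train batch size of 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,                            # "Native AMP" mixed precision (needs a CUDA device)
)
```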
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "graph-classification-1", "results": []}]} | giahy2507/graph-classification-1 | null | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:19:55+00:00 | [] | [] | TAGS
#transformers #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# graph-classification-1
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2365
- eval_accuracy: 0.9285
- eval_runtime: 32.2134
- eval_samples_per_second: 142.332
- eval_steps_per_second: 4.47
- epoch: 8.0
- step: 1148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# graph-classification-1\n\nThis model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.2365\n- eval_accuracy: 0.9285\n- eval_runtime: 32.2134\n- eval_samples_per_second: 142.332\n- eval_steps_per_second: 4.47\n- epoch: 8.0\n- step: 1148",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# graph-classification-1\n\nThis model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.2365\n- eval_accuracy: 0.9285\n- eval_runtime: 32.2134\n- eval_samples_per_second: 142.332\n- eval_steps_per_second: 4.47\n- epoch: 8.0\n- step: 1148",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token-classification-llmlingua2-phobert-bctn-323_sample-5_epoch_16k_fpt_v2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4848
## Model description
More information needed
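
The card does not describe the task's labels, but a generic token-classification inference sketch would look like the following; the repo id is taken from this page's metadata, and the expectation that Vietnamese input is word-segmented is an assumption carried over from the PhoBERT base model's documentation.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

repo = "qminh369/token-classification-llmlingua2-phobert-bctn-323_sample-5_epoch_16k_fpt_v2"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

# PhoBERT-style models expect word-segmented Vietnamese input
# (multi-syllable words joined with underscores).
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer)
print(nlp("Tôi là sinh_viên trường đại_học ."))
```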
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 16 | 0.5426 |
| No log | 1.98 | 32 | 0.5212 |
| No log | 2.98 | 48 | 0.4992 |
| No log | 3.97 | 64 | 0.4895 |
| No log | 4.96 | 80 | 0.4848 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "vinai/phobert-base-v2", "model-index": [{"name": "token-classification-llmlingua2-phobert-bctn-323_sample-5_epoch_16k_fpt_v2", "results": []}]} | qminh369/token-classification-llmlingua2-phobert-bctn-323_sample-5_epoch_16k_fpt_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:20:24+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-vinai/phobert-base-v2 #autotrain_compatible #endpoints_compatible #region-us
| token-classification-llmlingua2-phobert-bctn-323\_sample-5\_epoch\_16k\_fpt\_v2
===============================================================================
This model is a fine-tuned version of vinai/phobert-base-v2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4848
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.2.1+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-vinai/phobert-base-v2 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-latest-trump-stance-1
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1168
- Accuracy: 0.6666666666666666
- Precision: 0.5697940503432495
- Recall: 0.7302052785923754
- F1 Score: 0.6401028277634961
## Model description
More information needed
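
No usage code is provided, so the following is only a hedged sketch of how a PEFT adapter fine-tuned from this base model is typically loaded; the adapter path is a placeholder, the two-class stance head is an assumption, and whether the classification head was saved with the adapter is not confirmed by the card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "cardiffnlp/twitter-roberta-base-sentiment-latest"
ADAPTER = "path/to/twitter-roberta-base-sentiment-latest-trump-stance-1"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForSequenceClassification.from_pretrained(BASE)
model = PeftModel.from_pretrained(base, ADAPTER)
model.eval()

inputs = tokenizer("example tweet text", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # stance label names are not documented on this card
```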
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:------:|:---------------:|:----------------------:|:---------------------------------:|:-------------------:|:--------------------------:|
| 0.583 | 1.0 | 3600 | 0.3772 | {'accuracy': 0.83875} | {'precision': 0.812933025404157} | {'recall': 0.88} | {'f1': 0.8451380552220888} |
| 0.5621 | 2.0 | 7200 | 0.3725 | {'accuracy': 0.853125} | {'precision': 0.9407176287051482} | {'recall': 0.75375} | {'f1': 0.8369188063844553} |
| 0.5813 | 3.0 | 10800 | 1.0373 | {'accuracy': 0.625625} | {'precision': 0.5719398711524696} | {'recall': 0.99875} | {'f1': 0.7273554847519345} |
| 0.5317 | 4.0 | 14400 | 0.3697 | {'accuracy': 0.875625} | {'precision': 0.8917861799217731} | {'recall': 0.855} | {'f1': 0.8730057434588385} |
| 0.5498 | 5.0 | 18000 | 0.4457 | {'accuracy': 0.8525} | {'precision': 0.8551637279596978} | {'recall': 0.84875} | {'f1': 0.8519447929736512} |
| 0.5388 | 6.0 | 21600 | 0.4715 | {'accuracy': 0.829375} | {'precision': 0.9136577708006279} | {'recall': 0.7275} | {'f1': 0.8100208768267223} |
| 0.5885 | 7.0 | 25200 | 0.3773 | {'accuracy': 0.85875} | {'precision': 0.8836898395721925} | {'recall': 0.82625} | {'f1': 0.8540051679586563} |
| 0.4961 | 8.0 | 28800 | 0.3819 | {'accuracy': 0.869375} | {'precision': 0.9053497942386831} | {'recall': 0.825} | {'f1': 0.8633093525179856} |
| 0.5421 | 9.0 | 32400 | 0.4011 | {'accuracy': 0.85875} | {'precision': 0.8239277652370203} | {'recall': 0.9125} | {'f1': 0.8659549228944247} |
| 0.5123 | 10.0 | 36000 | 0.3404 | {'accuracy': 0.88125} | {'precision': 0.9034391534391535} | {'recall': 0.85375} | {'f1': 0.877892030848329} |
| 0.5996 | 11.0 | 39600 | 0.3435 | {'accuracy': 0.880625} | {'precision': 0.8801498127340824} | {'recall': 0.88125} | {'f1': 0.8806995627732667} |
| 0.4871 | 12.0 | 43200 | 0.2972 | {'accuracy': 0.890625} | {'precision': 0.9021879021879022} | {'recall': 0.87625} | {'f1': 0.8890298034242232} |
| 0.5272 | 13.0 | 46800 | 0.3629 | {'accuracy': 0.874375} | {'precision': 0.9423929098966026} | {'recall': 0.7975} | {'f1': 0.8639133378469871} |
| 0.5897 | 14.0 | 50400 | 0.3164 | {'accuracy': 0.88} | {'precision': 0.9075067024128687} | {'recall': 0.84625} | {'f1': 0.8758085381630013} |
| 0.4963 | 15.0 | 54000 | 0.3343 | {'accuracy': 0.87625} | {'precision': 0.922752808988764} | {'recall': 0.82125} | {'f1': 0.8690476190476191} |
| 0.5132 | 16.0 | 57600 | 0.5593 | {'accuracy': 0.855625} | {'precision': 0.9330289193302892} | {'recall': 0.76625} | {'f1': 0.8414550446122169} |
| 0.447 | 17.0 | 61200 | 0.3651 | {'accuracy': 0.874375} | {'precision': 0.8544378698224852} | {'recall': 0.9025} | {'f1': 0.8778115501519757} |
| 0.5189 | 18.0 | 64800 | 0.3919 | {'accuracy': 0.878125} | {'precision': 0.9315263908701854} | {'recall': 0.81625} | {'f1': 0.8700866089273818} |
| 0.4835 | 19.0 | 68400 | 0.5706 | {'accuracy': 0.846875} | {'precision': 0.9541734860883797} | {'recall': 0.72875} | {'f1': 0.8263642806520198} |
| 0.455 | 20.0 | 72000 | 0.3523 | {'accuracy': 0.881875} | {'precision': 0.8813982521847691} | {'recall': 0.8825} | {'f1': 0.8819487820112429} |
| 0.4791 | 21.0 | 75600 | 0.3292 | {'accuracy': 0.884375} | {'precision': 0.8546712802768166} | {'recall': 0.92625} | {'f1': 0.8890221955608878} |
| 0.512 | 22.0 | 79200 | 0.4456 | {'accuracy': 0.87} | {'precision': 0.9391691394658753} | {'recall': 0.79125} | {'f1': 0.858887381275441} |
| 0.4783 | 23.0 | 82800 | 0.3283 | {'accuracy': 0.880625} | {'precision': 0.9188445667125172} | {'recall': 0.835} | {'f1': 0.8749181401440733} |
| 0.4699 | 24.0 | 86400 | 0.3399 | {'accuracy': 0.885} | {'precision': 0.9074074074074074} | {'recall': 0.8575} | {'f1': 0.8817480719794345} |
| 0.4485 | 25.0 | 90000 | 0.3156 | {'accuracy': 0.89} | {'precision': 0.8949367088607595} | {'recall': 0.88375} | {'f1': 0.889308176100629} |
| 0.4305 | 26.0 | 93600 | 0.3105 | {'accuracy': 0.894375} | {'precision': 0.9092088197146563} | {'recall': 0.87625} | {'f1': 0.8924252068746021} |
| 0.4704 | 27.0 | 97200 | 0.3528 | {'accuracy': 0.879375} | {'precision': 0.8634730538922155} | {'recall': 0.90125} | {'f1': 0.8819571865443425} |
| 0.4589 | 28.0 | 100800 | 0.3534 | {'accuracy': 0.879375} | {'precision': 0.8696711327649208} | {'recall': 0.8925} | {'f1': 0.8809376927822332} |
| 0.4831 | 29.0 | 104400 | 0.3315 | {'accuracy': 0.891875} | {'precision': 0.9108781127129751} | {'recall': 0.86875} | {'f1': 0.889315419065899} |
| 0.4931 | 30.0 | 108000 | 0.3200 | {'accuracy': 0.891875} | {'precision': 0.9185580774365821} | {'recall': 0.86} | {'f1': 0.8883150419625565} |
| 0.4286 | 31.0 | 111600 | 0.3488 | {'accuracy': 0.8825} | {'precision': 0.9180327868852459} | {'recall': 0.84} | {'f1': 0.8772845953002611} |
| 0.4309 | 32.0 | 115200 | 0.3192 | {'accuracy': 0.891875} | {'precision': 0.8875154511742892} | {'recall': 0.8975} | {'f1': 0.8924798011187073} |
| 0.3896 | 33.0 | 118800 | 0.3294 | {'accuracy': 0.881875} | {'precision': 0.8632580261593341} | {'recall': 0.9075} | {'f1': 0.8848263254113345} |
| 0.4327 | 34.0 | 122400 | 0.3003 | {'accuracy': 0.899375} | {'precision': 0.9346938775510204} | {'recall': 0.85875} | {'f1': 0.895114006514658} |
| 0.4179 | 35.0 | 126000 | 0.3189 | {'accuracy': 0.898125} | {'precision': 0.9368998628257887} | {'recall': 0.85375} | {'f1': 0.8933943754087639} |
| 0.4023 | 36.0 | 129600 | 0.3284 | {'accuracy': 0.8775} | {'precision': 0.8408577878103838} | {'recall': 0.93125} | {'f1': 0.8837485172004745} |
| 0.4285 | 37.0 | 133200 | 0.3221 | {'accuracy': 0.894375} | {'precision': 0.9280868385345997} | {'recall': 0.855} | {'f1': 0.8900455432661027} |
| 0.3988 | 38.0 | 136800 | 0.2861 | {'accuracy': 0.896875} | {'precision': 0.8905289052890529} | {'recall': 0.905} | {'f1': 0.8977061376317421} |
| 0.4034 | 39.0 | 140400 | 0.3501 | {'accuracy': 0.895625} | {'precision': 0.9438990182328191} | {'recall': 0.84125} | {'f1': 0.8896232650363516} |
| 0.3743 | 40.0 | 144000 | 0.3654 | {'accuracy': 0.886875} | {'precision': 0.9176788124156545} | {'recall': 0.85} | {'f1': 0.8825438027255029} |
| 0.3979 | 41.0 | 147600 | 0.3230 | {'accuracy': 0.899375} | {'precision': 0.9311740890688259} | {'recall': 0.8625} | {'f1': 0.8955223880597015} |
| 0.3808 | 42.0 | 151200 | 0.2978 | {'accuracy': 0.90375} | {'precision': 0.9205729166666666} | {'recall': 0.88375} | {'f1': 0.9017857142857143} |
| 0.3777 | 43.0 | 154800 | 0.2805 | {'accuracy': 0.899375} | {'precision': 0.9220607661822986} | {'recall': 0.8725} | {'f1': 0.8965960179833012} |
| 0.3631 | 44.0 | 158400 | 0.2984 | {'accuracy': 0.898125} | {'precision': 0.9163398692810457} | {'recall': 0.87625} | {'f1': 0.8958466453674121} |
| 0.3674 | 45.0 | 162000 | 0.2924 | {'accuracy': 0.90375} | {'precision': 0.9376693766937669} | {'recall': 0.865} | {'f1': 0.8998699609882965} |
| 0.3539 | 46.0 | 165600 | 0.3158 | {'accuracy': 0.89375} | {'precision': 0.899746192893401} | {'recall': 0.88625} | {'f1': 0.8929471032745592} |
| 0.3557 | 47.0 | 169200 | 0.2861 | {'accuracy': 0.9} | {'precision': 0.9145077720207254} | {'recall': 0.8825} | {'f1': 0.8982188295165394} |
| 0.38 | 48.0 | 172800 | 0.2962 | {'accuracy': 0.894375} | {'precision': 0.9029374201787995} | {'recall': 0.88375} | {'f1': 0.8932406822488945} |
| 0.3754 | 49.0 | 176400 | 0.2905 | {'accuracy': 0.9} | {'precision': 0.9166666666666666} | {'recall': 0.88} | {'f1': 0.8979591836734694} |
| 0.3717 | 50.0 | 180000 | 0.2880 | {'accuracy': 0.89875} | {'precision': 0.9153645833333334} | {'recall': 0.87875} | {'f1': 0.8966836734693877} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "cardiffnlp/twitter-roberta-base-sentiment-latest", "model-index": [{"name": "twitter-roberta-base-sentiment-latest-trump-stance-1", "results": []}]} | saideep-arikontham/twitter-roberta-base-sentiment-latest-trump-stance-1 | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"has_space",
"region:us"
] | null | 2024-04-23T07:21:31+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-cardiffnlp/twitter-roberta-base-sentiment-latest #has_space #region-us
| twitter-roberta-base-sentiment-latest-trump-stance-1
====================================================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment-latest on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1168
* Accuracy: {'accuracy': 0.6666666666666666}
* Precision: {'precision': 0.5697940503432495}
* Recall: {'recall': 0.7302052785923754}
* F1 Score: {'f1': 0.6401028277634961}
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
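In Transformers terms, these settings correspond to a `TrainingArguments` configuration along the lines of the sketch below (the mapping of `train_batch_size` to `per_device_train_batch_size` and the placeholder `output_dir` are assumptions; the Adam betas and epsilon listed above are the optimizer defaults):

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="twitter-roberta-base-sentiment-latest-trump-stance-1",
    learning_rate=1e-3,             # learning_rate: 0.001
    per_device_train_batch_size=4,  # train_batch_size: 4 (assumed per device)
    per_device_eval_batch_size=4,   # eval_batch_size: 4
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```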
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.38.2
* Pytorch 2.2.1
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-cardiffnlp/twitter-roberta-base-sentiment-latest #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pyphi
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.5134 | 0.9998 | 4651 | 1.5228 |
| 1.402 | 1.9999 | 9303 | 1.5084 |
| 1.2921 | 2.9999 | 13955 | 1.5076 |
| 1.2776 | 3.9993 | 18604 | 1.5101 |
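For reference, an adapter produced by this training setup is typically loaded back onto the `microsoft/phi-2` base model for inference roughly as sketched below (the repo id `chenghuzi/pyphi` is taken from this repository's metadata; the dtype, device placement, and example prompt are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned PEFT adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "chenghuzi/pyphi")
model.eval()

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```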
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "pyphi", "results": []}]} | chenghuzi/pyphi | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-23T07:21:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
| pyphi
=====
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5101
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 6
* total\_train\_batch\_size: 12
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 6\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 6\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
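This particular repository ships a bitsandbytes 4-bit quantization of Pythia-1B, so it can also be loaded directly under its own id. A minimal sketch, assuming a CUDA device, an installed `bitsandbytes`, and that the quantization config stored with the checkpoint is picked up automatically:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-1b-v0-4bits"

# The 4-bit weights and their quantization config are stored in the repo,
# so a plain from_pretrained call restores them.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(tokens[0]))
```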
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
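Because the intermediate checkpoints described above are exposed as branches named by step, training-dynamics studies can load several of them via the `revision` argument shown in the quickstart. A minimal sketch (the particular step names and the parameter count printed here are illustrative assumptions):

```python
from transformers import GPTNeoXForCausalLM

# Load a few of the evenly spaced checkpoint branches, one at a time.
for step in ["step1000", "step71000", "step143000"]:
    model = GPTNeoXForCausalLM.from_pretrained(
        "EleutherAI/pythia-1b-v0", revision=step
    )
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{step}: {n_params:,} parameters")
```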
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
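The same harness can also be driven from Python. The sketch below assumes the current `lm-evaluation-harness` API (v0.4-style `simple_evaluate`), which may differ from the version used to produce the published numbers, and it evaluates only two of the plotted tasks:

```python
import lm_eval

# Assumed v0.4-style API of EleutherAI/lm-evaluation-harness.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1b-v0",
    tasks=["lambada_openai", "piqa"],
    batch_size=8,
)
print(results["results"])
```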
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-1b-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:22:00+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1b-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1B
---------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-1B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abhijithgururaj/blip2-opt-2.7b-spanish-post-lora-abhijith | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:22:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the intermediate checkpoint saved at training step 3000 (branch "step3000").
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The matching tokenizer; all Pythia models use the GPT-NeoX-20B tokenizer.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
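The same pattern carries over to the full-precision model this card quantizes; as a hedged sketch (the `EleutherAI/pythia-1b-v0` repo id comes from the link above, and the sampling settings are arbitrary illustrative values):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Final checkpoint of Pythia-1B (old "v0" naming); the default "main" branch is
# used here, which the note below identifies with the "step143000" branch.
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-1b-v0")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b-v0")

inputs = tokenizer("The Pythia suite was designed to study", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=30, do_sample=True, temperature=0.8)
print(tokenizer.decode(tokens[0]))
```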
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
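As a quick sanity check, the figures above are mutually consistent; a few lines of arithmetic (illustrative only) reproduce them:

```python
# Reproduce the token accounting described above for the 2M-batch models.
tokens_per_step = 2_097_152            # 2M-token batch size
total_steps = 143_000
tokens_per_checkpoint = 2_097_152_000  # spacing between saved checkpoints

total_tokens = tokens_per_step * total_steps
print(total_tokens)                               # 299,892,736,000 tokens seen in training
print(tokens_per_checkpoint // tokens_per_step)   # 1,000 steps between checkpoints
print(total_tokens // tokens_per_checkpoint)      # 143 evenly spaced checkpoints
```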
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
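For reference, reproducing one of these tasks locally might look roughly like the sketch below; it assumes a recent release of the harness in which `lm_eval.simple_evaluate` is the public entry point (older releases exposed it as `lm_eval.evaluator.simple_evaluate`), so treat the exact call as an assumption rather than the command used for the published numbers:

```python
import lm_eval

# Evaluate the original (unquantized) model on two of the plotted tasks.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1b-v0",
    tasks=["lambada_openai", "piqa"],
    batch_size=8,
)
print(results["results"])
```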
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
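The gap between the two columns is exactly the size of the (untied) input and output embedding matrices; an illustrative check, using the model dimensions from the engineering table above:

```python
# total_params = non_embedding_params + 2 * vocab_size * model_dim
# (two embedding matrices, since input and output embeddings are not tied).
rows = {
    # name: (total params, non-embedding params, model dim)
    "70M": (70_426_624, 18_915_328, 512),
    "1B": (1_011_781_632, 805_736_448, 2048),
}
for name, (total, non_embedding, model_dim) in rows.items():
    implied_vocab = (total - non_embedding) // (2 * model_dim)
    print(name, implied_vocab)  # both rows imply a padded vocabulary of 50,304
```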
| {} | RichardErkhov/EleutherAI_-_pythia-1b-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:23:27+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1b-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1B
---------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-1B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-v0/
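As with the 8-bit Pythia exports in this collection, loading the 4-bit weights should only require `transformers` plus `bitsandbytes` and `accelerate`; the snippet is an illustrative sketch under that assumption, with the repo id taken from this card's metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id of this 4-bit export (from the card metadata); the original weights
# live at EleutherAI/pythia-70m-v0.
repo_id = "RichardErkhov/EleutherAI_-_pythia-70m-v0-4bits"

model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```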
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the intermediate checkpoint saved at training step 3000 (branch "step3000").
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The matching tokenizer; all Pythia models use the GPT-NeoX-20B tokenizer.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-70M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
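Because the 143 checkpoints are exposed as branches, any intermediate training state of the non-deduplicated 70M model can be pulled the same way as in the quickstart; a hedged sketch (the specific `step48000` branch is an assumption, following the `stepN` naming pattern described above):

```python
from transformers import GPTNeoXForCausalLM

# Load an intermediate checkpoint of the non-deduplicated 70M model, roughly a
# third of the way through training (step 48,000 of 143,000).
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-v0",
    revision="step48000",
)
print(sum(p.numel() for p in model.parameters()))  # ~70.4M total parameters
```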
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-70m-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:25:02+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-70m-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-70M
----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-70M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-70M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-70M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kalai_bert_model_test_2
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3276
- Accuracy: 0.93
## Model description
More information needed
## Intended uses & limitations
More information needed
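Pending more detail from the author, a minimal inference sketch is shown below. The repository id matches this card; the example sentence is illustrative, and the meaning of the returned labels depends on how the classifier head was trained.

```python
from transformers import pipeline

# Load the fine-tuned ALBERT classifier from the Hub (repository id taken from this card).
classifier = pipeline("text-classification", model="KalaiselvanD/kalai_bert_model_test_2")

# Illustrative input; label names/ids depend on the training setup.
print(classifier("This is a sample sentence to classify."))
```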
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
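For readers who want to approximate this setup, the list above maps directly onto the standard `TrainingArguments`; a minimal sketch (the output directory is illustrative, and the model, datasets, and `Trainer` wiring are omitted):

```python
from transformers import TrainingArguments

# Restates the hyperparameters listed above; the Adam betas/epsilon in the card
# are the Transformers defaults, so they need no explicit arguments here.
training_args = TrainingArguments(
    output_dir="kalai_bert_model_test_2",  # illustrative output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```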
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.5033 | 0.93 |
| No log | 2.0 | 50 | 0.3276 | 0.93 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "albert/albert-base-v2", "model-index": [{"name": "kalai_bert_model_test_2", "results": []}]} | KalaiselvanD/kalai_bert_model_test_2 | null | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:25:35+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #base_model-albert/albert-base-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| kalai\_bert\_model\_test\_2
===========================
This model is a fine-tuned version of albert/albert-base-v2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3276
* Accuracy: 0.93
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #base_model-albert/albert-base-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
polyglot-ko-12.8b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/polyglot-ko-12.8b/
Original model description:
---
language:
- ko
tags:
- pytorch
- causal-lm
license: apache-2.0
---
# Polyglot-Ko-12.8B
## Model Description
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 12,898,631,680 |
| \\(n_{layers}\\) | 40 |
| \\(d_{model}\\) | 5120 |
| \\(d_{ff}\\) | 20,480 |
| \\(n_{heads}\\) | 40 |
| \\(d_{head}\\) | 128 |
| \\(n_{ctx}\\) | 2,048 |
| \\(n_{vocab}\\) | 30,003 / 30,080 |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
The model consists of 40 transformer layers with a model dimension of 5120, and a feedforward dimension of 20480. The model
dimension is split into 40 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 30003.
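For illustration, these hyperparameters can be restated as a Hugging Face `GPTNeoXConfig`; the field mapping below is an assumption (the published checkpoints use the GPT-NeoX architecture), and `rotary_pct=0.5` expresses RoPE on 64 of the 128 dimensions per head:

```python
from transformers import GPTNeoXConfig

# Values restate the table above; the mapping to config fields is an assumption.
config = GPTNeoXConfig(
    vocab_size=30080,              # padded vocabulary (30,003 tokens in use)
    hidden_size=5120,              # model dimension
    num_hidden_layers=40,
    num_attention_heads=40,        # 5120 / 40 = 128 dimensions per head
    intermediate_size=20480,       # feedforward dimension
    max_position_embeddings=2048,  # context length
    rotary_pct=0.5,                # RoPE on 64 of the 128 head dimensions
)
```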
## Training data
Polyglot-Ko-12.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process complied with South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.
| Source |Size (GB) | Link |
|-------------------------------------|---------|------------------------------------------|
| Korean blog posts | 682.3 | - |
| Korean news dataset | 87.0 | - |
| Modu corpus | 26.4 |corpus.korean.go.kr |
| Korean patent dataset | 19.0 | - |
| Korean Q & A dataset | 18.1 | - |
| KcBert dataset | 12.7 | github.com/Beomi/KcBERT |
| Korean fiction dataset | 6.1 | - |
| Korean online comments | 4.2 | - |
| Korean wikipedia | 1.4 | ko.wikipedia.org |
| Clova call | < 1.0 | github.com/clovaai/ClovaCall |
| Naver sentiment movie corpus | < 1.0 | github.com/e9t/nsmc |
| Korean hate speech dataset | < 1.0 | - |
| Open subtitles | < 1.0 | opus.nlpl.eu/OpenSubtitles.php |
| AIHub various tasks datasets | < 1.0 |aihub.or.kr |
| Standard Korean language dictionary | < 1.0 | stdict.korean.go.kr/main/main.do |
Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage:
* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number
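The masking step itself is not documented; purely as an illustration of the idea (the regular expressions below are hypothetical simplifications, not the pipeline actually used), replacing matched spans with the special tokens could look like this:

```python
import re

# Hypothetical, simplified patterns; the real Polyglot-Ko pre-processing is not
# published. Order matters: phone numbers are masked before the looser account pattern.
PII_PATTERNS = {
    "<|rrn|>": re.compile(r"\b\d{6}-\d{7}\b"),                  # resident registration number
    "<|tell|>": re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"),  # mobile phone number
    "<|acc|>": re.compile(r"\b\d{3}-\d{2,6}-\d{2,6}\b"),        # bank account number
}

def mask_pii(text: str) -> str:
    """Replace spans matching the (illustrative) PII patterns with special tokens."""
    for token, pattern in PII_PATTERNS.items():
        text = pattern.sub(token, text)
    return text
```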
## Training procedure
Polyglot-Ko-12.8B was trained for 167 billion tokens over 301,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` class:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-12.8b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-12.8b")
```
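Continuing from the snippet above, a brief generation example (the Korean prompt and sampling settings are illustrative, not recommendations from the card):

```python
# Assumes `tokenizer` and `model` from the loading snippet above.
inputs = tokenizer("한국어 언어 모델은", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```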
## Evaluation results
We evaluate Polyglot-Ko-12.8B on the [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.
The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples.
In the case of the WiC dataset, all models perform at roughly chance level.
```console
python main.py \
--model gpt2 \
--model_args pretrained='EleutherAI/polyglot-ko-3.8b' \
--tasks kobest_copa,kobest_hellaswag \
--num_fewshot $YOUR_NUM_FEWSHOT \
--batch_size $YOUR_BATCH_SIZE \
--device $YOUR_DEVICE \
--output_path $/path/to/output/
```
### COPA (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.7937** | **0.8108** | **0.8037** | **0.8369** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px">
### HellaSwag (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.5954** | **0.6306** | **0.6098** | **0.6118** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px">
### BoolQ (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.4818** | **0.6041** | **0.6289** | **0.6448** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px">
### SentiNeg (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.9117** | **0.9015** | **0.9345** | **0.9723** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px">
### WiC (F1)
| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.3390 | 0.4944 | 0.4203 | 0.3835 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.3985** | **0.3683** | **0.3307** | **0.3273** |
<img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px">
## Limitations and Biases
Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot-Ko may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.
## Citation and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
```bibtex
@misc{ko2023technical,
title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models},
author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and jiwung Hyun and Sungho Park},
year={2023},
eprint={2306.02254},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Licensing
All our models are licensed under the terms of the Apache License 2.0.
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
### Acknowledgement
This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
| {} | RichardErkhov/EleutherAI_-_polyglot-ko-12.8b-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2104.09864",
"arxiv:2204.04541",
"arxiv:2306.02254",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:26:19+00:00 | [
"2104.09864",
"2204.04541",
"2306.02254"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2104.09864 #arxiv-2204.04541 #arxiv-2306.02254 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
polyglot-ko-12.8b - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* ko
tags:
* pytorch
* causal-lm
license: apache-2.0
---
Polyglot-Ko-12.8B
=================
Model Description
-----------------
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.
The model consists of 40 transformer layers with a model dimension of 5120, and a feedforward dimension of 20480. The model
dimension is split into 40 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 30003.
Training data
-------------
Polyglot-Ko-12.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by TUNiB. The data collection process complied with South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.
Source: Korean blog posts, Size (GB): 682.3, Link: -
Source: Korean news dataset, Size (GB): 87.0, Link: -
Source: Modu corpus, Size (GB): 26.4, Link: URL
Source: Korean patent dataset, Size (GB): 19.0, Link: -
Source: Korean Q & A dataset, Size (GB): 18.1, Link: -
Source: KcBert dataset, Size (GB): 12.7, Link: URL
Source: Korean fiction dataset, Size (GB): 6.1, Link: -
Source: Korean online comments, Size (GB): 4.2, Link: -
Source: Korean wikipedia, Size (GB): 1.4, Link: URL
Source: Clova call, Size (GB): < 1.0, Link: URL
Source: Naver sentiment movie corpus, Size (GB): < 1.0, Link: URL
Source: Korean hate speech dataset, Size (GB): < 1.0, Link: -
Source: Open subtitles, Size (GB): < 1.0, Link: URL
Source: AIHub various tasks datasets, Size (GB): < 1.0, Link: URL
Source: Standard Korean language dictionary, Size (GB): < 1.0, Link: URL
Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage:
* '<|acc|>' : bank account number
* '<|rrn|>' : resident registration number
* '<|tell|>' : phone number
Training procedure
------------------
Polyglot-Ko-12.8B was trained for 167 billion tokens over 301,000 steps on 256 A100 GPUs with the GPT-NeoX framework. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.
How to use
----------
This model can be easily loaded using the 'AutoModelForCausalLM' class:
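The code itself is not reproduced in this rendering; as a sketch, loading the 8-bit repack in this repository by its id might look like the following (this assumes a recent transformers with bitsandbytes installed and a CUDA device available; exact requirements depend on your library versions):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: the pre-quantized 8-bit weights load directly from this repository id.
repo_id = "RichardErkhov/EleutherAI_-_polyglot-ko-12.8b-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```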
Evaluation results
------------------
We evaluate Polyglot-Ko-12.8B on the KOBEST dataset, a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.
The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the polyglot branch of lm-evaluation-harness and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, 'n' refers to the number of few-shot examples.
In the case of the WiC dataset, all models perform at roughly chance level.
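For reference, the evaluation command from the original card is reproduced below, with the model path pointed at this 12.8B checkpoint and placeholders left for the few-shot count, batch size, device, and output path:

```console
python main.py \
   --model gpt2 \
   --model_args pretrained='EleutherAI/polyglot-ko-12.8b' \
   --tasks kobest_copa,kobest_hellaswag \
   --num_fewshot $YOUR_NUM_FEWSHOT \
   --batch_size $YOUR_BATCH_SIZE \
   --device $YOUR_DEVICE \
   --output_path $/path/to/output/
```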
### COPA (F1)
<img src="URL width="800px">
### HellaSwag (F1)
<img src="URL width="800px">
### BoolQ (F1)
<img src="URL width="800px">
### SentiNeg (F1)
<img src="URL width="800px">
### WiC (F1)
<img src="URL width="800px">
Limitations and Biases
----------------------
Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot-Ko may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.
Citation and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
### Licensing
All our models are licensed under the terms of the Apache License 2.0.
### Acknowledgement
This project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work.
| [
"### COPA (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### HellaSwag (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### BoolQ (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### SentiNeg (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### WiC (F1)\n\n\n\n<img src=\"URL width=\"800px\">\n\n\nLimitations and Biases\n----------------------\n\n\nPolyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.\n\n\nand Related Information",
"### BibTeX entry\n\n\nIf you find our work useful, please consider citing:",
"### Licensing\n\n\nAll our models are licensed under the terms of the Apache License 2.0.",
"### Acknowledgement\n\n\nThis project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2104.09864 #arxiv-2204.04541 #arxiv-2306.02254 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### COPA (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### HellaSwag (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### BoolQ (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### SentiNeg (F1)\n\n\n\n<img src=\"URL width=\"800px\">",
"### WiC (F1)\n\n\n\n<img src=\"URL width=\"800px\">\n\n\nLimitations and Biases\n----------------------\n\n\nPolyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.\n\n\nand Related Information",
"### BibTeX entry\n\n\nIf you find our work useful, please consider citing:",
"### Licensing\n\n\nAll our models are licensed under the terms of the Apache License 2.0.",
"### Acknowledgement\n\n\nThis project was made possible thanks to the computing resources from URL, and thanks to TUNiB for providing a large-scale Korean dataset for this work."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
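As a starting point for such fine-tuning, a compact causal-LM training sketch is given below. It is not part of the official Pythia documentation; the dataset, sequence length, and hyperparameters are placeholders rather than recommendations.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    GPTNeoXForCausalLM,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/pythia-70m-deduped"  # checkpoint name used in the Quickstart below
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # the GPT-NeoX tokenizer ships without a pad token
model = GPTNeoXForCausalLM.from_pretrained(model_name)

# Placeholder corpus; substitute your own text dataset here.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.filter(lambda example: example["text"].strip() != "")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pythia-70m-finetuned",  # illustrative output path
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=1e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```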
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the intermediate checkpoint saved at training step 3000; `revision`
# selects the matching branch and `cache_dir` sets where the weights are cached.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
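Since the intermediate checkpoints are ordinary repository branches, they can also be enumerated programmatically; a small sketch using the `huggingface_hub` client (assumed to be installed) is shown here:

```python
from huggingface_hub import list_repo_refs

# Enumerate the checkpoint branches (step0, step1000, ..., step143000) of the repo.
refs = list_repo_refs("EleutherAI/pythia-70m-deduped")
step_branches = sorted(
    (branch.name for branch in refs.branches if branch.name.startswith("step")),
    key=lambda name: int(name[len("step"):]),
)
print(f"{len(step_branches)} checkpoint branches, first: {step_branches[0]}, last: {step_branches[-1]}")
```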
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-70M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
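As a quick arithmetic sanity check on the figures above (this restates relationships already implied by the card, it is not new information):

```python
# 143 checkpoints spaced every 2,097,152,000 tokens cover the whole training run,
# and at a 2M-token batch that spacing corresponds to exactly 1,000 optimizer steps.
assert 143 * 2_097_152_000 == 299_892_736_000   # total training tokens
assert 2_097_152_000 // 2_097_152 == 1_000      # steps between checkpoints at 2M batch
assert 143_000 * 2_097_152 == 299_892_736_000   # 143,000 steps x 2M-token batches
```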
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-70m-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:26:21+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-70m-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-70M
----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-70M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-70M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-70M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
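
Because this repository hosts a bitsandbytes 4-bit export of Pythia-410M, the snippet below sketches one way to load it directly. It assumes `bitsandbytes` and `accelerate` are installed, a CUDA GPU is available, and that the 4-bit settings saved with this export are picked up from its config; it is an illustration, not an official recipe from this card.

```python
# Hedged sketch: loading this 4-bit bitsandbytes export directly.
# Assumes `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available;
# the quantization settings are expected to ship with the saved config.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-410m-v0-4bits"
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```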
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-410M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
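
The renaming scheme above is easy to misread, so the short sketch below restates it in code, using only the figures quoted in this card: renamed steps count 2,097,152-token batches, and for the 4M-token-batch models the actual optimizer step is half the renamed step.

```python
# Illustrative arithmetic for the checkpoint naming scheme (not an official script;
# all constants are taken from the figures quoted in this card).
TOKENS_PER_RENAMED_STEP = 2_097_152  # the naming convention counts 2M-token batches

def tokens_seen(renamed_step: int) -> int:
    """Tokens consumed by the time checkpoint `step{renamed_step}` was saved."""
    return renamed_step * TOKENS_PER_RENAMED_STEP

def actual_step(renamed_step: int, batch_tokens: int = 4_194_304) -> int:
    """Optimizer step of a 4M-token-batch model at a given renamed step."""
    return renamed_step * TOKENS_PER_RENAMED_STEP // batch_tokens

print(tokens_seen(143_000))  # 299,892,736,000 tokens over the full run
print(actual_step(1_000))    # 500, the renamed `step1000` of a 4M-batch model such as pythia-1.4b
```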
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
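
As a cross-check between the two tables in this card, the gap between total and non-embedding parameters is consistent with untied input and output embedding matrices over a padded vocabulary of 50,304 entries. The vocabulary size below is inferred from the numbers themselves rather than stated anywhere in this card, so treat it as an assumption.

```python
# Illustrative cross-check of the two parameter tables above (not an official script).
# The 50,304 padded vocabulary and the untied input/output embeddings are inferred
# from the numbers, not stated in this card.
PADDED_VOCAB = 50_304

rows = {
    # model: (model_dim, total_params, non_embedding_params)
    "70M":  (512,  70_426_624,  18_915_328),
    "410M": (1024, 405_334_016, 302_311_424),
}

for name, (d_model, total, non_embed) in rows.items():
    embed = 2 * PADDED_VOCAB * d_model  # untied input + output embeddings
    assert total - non_embed == embed
    print(f"{name}: embedding parameters = {embed:,}")
```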
| {} | RichardErkhov/EleutherAI_-_pythia-410m-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:26:23+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M
-----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-410M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | fangzhaoz/mistralv1_spectral_r4_1e-4_e5_directmerge_v2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:26:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
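
Since this repository hosts a bitsandbytes 8-bit export, an alternative to downloading it is to quantize the original checkpoint on the fly. The sketch below shows the general 8-bit loading path with `BitsAndBytesConfig`; it assumes `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available, and it is not necessarily the exact recipe used to produce this export.

```python
# Hedged sketch: loading the original Pythia-410M checkpoint in 8-bit on the fly.
# Assumes `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available;
# this illustrates the general 8-bit path, not the exact recipe behind this export.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-410m-v0",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-v0")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```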
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-410M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
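As a quick sanity check on the bookkeeping above (a sketch added here, not part of the original card), the renamed checkpoint names and token counts follow directly from the 2M-token step size:
```python
# Checkpoint arithmetic implied by the paragraph above (sketch, not official tooling).
TOKENS_PER_STEP = 2_097_152   # the 2M-token batch size used for checkpoint naming
CHECKPOINT_EVERY = 1_000      # one checkpoint per 2,097,152,000 tokens = 1,000 steps
FINAL_STEP = 143_000

def tokens_seen(step: int) -> int:
    """Approximate training tokens seen at the renamed checkpoint branch `step{step}`."""
    return step * TOKENS_PER_STEP

branches = [f"step{s}" for s in range(CHECKPOINT_EVERY, FINAL_STEP + 1, CHECKPOINT_EVERY)]

assert len(branches) == 143                          # 143 evenly spaced checkpoints
assert tokens_seen(FINAL_STEP) == 299_892_736_000    # total training tokens quoted above
```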
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
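For example (a sketch added here, not from the original card), after cloning that repository the per-model, per-step JSON files can be read with the standard library; the exact filenames under `results/json/` are an assumption, so the glob is kept generic:
```python
import json
from pathlib import Path

# Assumes a local clone: git clone https://github.com/EleutherAI/pythia
results_dir = Path("pythia/results/json")

# Filenames and their internal structure are assumptions; load whatever is there.
results = {str(p.relative_to(results_dir)): json.loads(p.read_text())
           for p in sorted(results_dir.glob("**/*.json"))}
print(f"loaded {len(results)} result files")
```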
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
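The split between total and non-embedding parameters in the table can be re-derived from a loaded checkpoint (a rough sketch added here, not part of the original card; it assumes the input embedding and unembedding matrices are untied, which is how `GPTNeoXForCausalLM` exposes them):
```python
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-410m-v0")

total = sum(p.numel() for p in model.parameters())
# Input embedding (gpt_neox.embed_in) plus the unembedding head (embed_out)
embedding = model.gpt_neox.embed_in.weight.numel() + model.embed_out.weight.numel()

print(f"total params:         {total:,}")              # expected ≈ 405,334,016
print(f"non-embedding params: {total - embedding:,}")  # expected ≈ 302,311,424
```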
| {} | RichardErkhov/EleutherAI_-_pythia-410m-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:27:04+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M
-----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-410M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-deduped-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-deduped-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
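As a complementary sketch (not part of the original card), the same checkpoints can also be driven through the high-level `pipeline` API; `step143000` is used below because the card states it matches the `main` branch exactly, and the sampling settings are illustrative only:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="EleutherAI/pythia-160m-deduped-v0",
    revision="step143000",  # stated above to be identical to the `main` branch
)
print(generator("Hello, I am", max_new_tokens=32, do_sample=True)[0]["generated_text"])
```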
### Training
#### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-160m-deduped-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:27:06+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-deduped-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-160M-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge – Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-deduped-v0/
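A minimal usage sketch for this 8-bit quantization, assuming `bitsandbytes` and `accelerate` are installed and that the quantization config serialized in the checkpoint is picked up automatically by `from_pretrained` (the repo id is this repository's):

```python
# Sketch: loading the 8-bit bitsandbytes quantization of Pythia-160M-deduped-v0
# through Hugging Face Transformers. Requires a CUDA GPU plus the
# `bitsandbytes` and `accelerate` packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-160m-deduped-v0-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # lets accelerate place the quantized weights on the GPU
)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```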
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-160m-deduped-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:27:38+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-deduped-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not a in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-160M-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge – Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-160M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers |
# DanTagGen - delta
DanTagGen (Danbooru Tag Generator) is inspired by p1atdev's dart project,
but uses a different architecture, dataset, format, and training strategy.
## Difference between versions
alpha: pretrained on the 2M dataset with a smaller batch size. Limited ability.

beta: pretrained on the 5.3M dataset with a larger batch size. More stable, with better ability when only a little information is provided.

delta: pretrained on the 7.2M dataset with a larger batch size. Slightly underfit but with better diversity; a quality tag is introduced.
## Model arch
This version of DTG is trained from scratch on a 400M-parameter LLaMA architecture (which I personally like to call NanoLLaMA).
Since it uses the LLaMA architecture, it should theoretically work in any LLaMA inference interface.

This repo also provides a converted FP16 gguf model and quantized 8-bit/6-bit gguf models.
It is recommended to run this model with llama.cpp or llama-cpp-python, which will be very fast; a minimal sketch is shown below.
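A minimal sketch with llama-cpp-python, assuming one of the gguf files from this repo has been downloaded (the filename below is a placeholder):

```python
# Sketch: running a DanTagGen gguf with llama-cpp-python.
# Replace model_path with the gguf file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./DanTagGen-delta-Q8_0.gguf",  # placeholder filename
    n_ctx=384,       # tag prompts are short, so a small context window is enough
    verbose=False,
)

prompt = """quality: masterpiece
rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>"""

out = llm(prompt, max_tokens=128, temperature=1.0)
print(out["choices"][0]["text"])
```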
## Format
```python3
prompt = f"""
quality: {quality or '<|empty|>'}
rating: {rating or '<|empty|>'}
artist: {artist.strip() or '<|empty|>'}
characters: {characters.strip() or '<|empty|>'}
copyrights: {copyrights.strip() or '<|empty|>'}
aspect ratio: {f"{aspect_ratio:.1f}" or '<|empty|>'}
target: {'<|' + target + '|>' if target else '<|long|>'}
general: {", ".join(special_tags)}, {general.strip().strip(",")}<|input_end|>
"""
```
for example:
```
quality: masterpiece
rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>
```
And you may get something like:
```
rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>open mouth, red eyes, long hair, pointy ears, tail, black hair, chinese clothes, simple background, dragon, hair between eyes, horns, china dress, dress, looking at viewer, breasts
```
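The same prompt format can also be run with the Transformers weights in this repo; a brief sketch (repo id taken from this card, sampling settings are illustrative):

```python
# Sketch: generating tags with the full-precision weights via Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "KBlueLeaf/DanTagGen-delta"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "quality: masterpiece\n"
    "rating: safe\n"
    "artist: <|empty|>\n"
    "characters: <|empty|>\n"
    "copyrights: <|empty|>\n"
    "aspect ratio: 1.0\n"
    "target: <|short|>\n"
    "general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=1.0)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```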
## Dataset and Training
I use the trainer I implemented in HakuPhi to run the training,
with 10 epochs on the 7.2M dataset. This model has seen roughly 10~15B tokens.

The dataset is exported by HakuBooru from my danbooru sqlite database, using the percentile of fav_count within each rating to filter the data (2M = top 25%, 5.3M = top 75%).
## Utilities
HF space: https://huggingface.co/spaces/KBlueLeaf/DTG-demo
Demo for DTG + Kohaku XL Epsilon: https://huggingface.co/spaces/KBlueLeaf/This-Cute-Dragon-Girl-Doesnt-Exist
SD-WebUI Extension: https://github.com/KohakuBlueleaf/z-a1111-sd-webui-dtg
ComfyUI Node: https://github.com/toyxyz/a1111-sd-webui-dtg_comfyui | {"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "art"], "datasets": ["KBlueLeaf/danbooru2023-sqlite"], "pipeline_tag": "text-generation", "widget": [{"text": "quality: masterpiece\nrating: safe\nartist: <|empty|>\ncharacters: <|empty|>\ncopyrights: <|empty|>\naspect ratio: 1.0\ntarget: <|short|>\ngeneral: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>"}]} | KBlueLeaf/DanTagGen-delta | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"not-for-all-audiences",
"art",
"en",
"dataset:KBlueLeaf/danbooru2023-sqlite",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:27:42+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gguf #llama #text-generation #not-for-all-audiences #art #en #dataset-KBlueLeaf/danbooru2023-sqlite #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DanTagGen - delta
DanTagGen(Danbooru Tag Generator) is inspired from p1atdev's dart project.
But with different arch, dataset, format and different training strategy.
## Difference between versions
alpha: pretrain on 2M dataset, smaller batch size. Limited ability
beta: pretrain on 5.3M dataset, larger batch size. More stable, better ability with only a few information provided.
delta: pretrain on 7.2M dataset, larger batch size. Slightly underfit but better diversity. quality tag introduced.
## Model arch
This version of DTG is trained from scratch with 400M param LLaMA arch.(In my personal preference I will call it NanoLLaMA)
Since it is llama arch. Theoritically it should be able to be used in any LLaMA inference interface.
This repo also provided converted FP16 gguf model and quantized 8bit/6bit gguf models.
Basically it is recommended to use URL or llama-cpp-python to run this model. Which will be very fast.
## Format
for example:
And you may get something like:
## Dataset and Training
I use the trainer I implemented in HakuPhi to run the training.
with 10epoch on 7.2M data. This model have roughly 10~15B token seen.
The dataset is exported by HakuBooru with my danbooru sqlite database. Use the percentile of fav_count on each rating to filter the data. (2M = top 25%, 5.3M = top 75%)
## Utilities
HF space: URL
Demo for DTG + Kohaku XL Epsilon: URL
SD-WebUI Extension: URL
ComfyUI Node: URL | [
"# DanTagGen - delta\nDanTagGen(Danbooru Tag Generator) is inspired from p1atdev's dart project.\nBut with different arch, dataset, format and different training strategy.",
"## Difference between versions\nalpha: pretrain on 2M dataset, smaller batch size. Limited ability\nbeta: pretrain on 5.3M dataset, larger batch size. More stable, better ability with only a few information provided.\ndelta: pretrain on 7.2M dataset, larger batch size. Slightly underfit but better diversity. quality tag introduced.",
"## Model arch\nThis version of DTG is trained from scratch with 400M param LLaMA arch.(In my personal preference I will call it NanoLLaMA)\nSince it is llama arch. Theoritically it should be able to be used in any LLaMA inference interface.\n\nThis repo also provided converted FP16 gguf model and quantized 8bit/6bit gguf models.\nBasically it is recommended to use URL or llama-cpp-python to run this model. Which will be very fast.",
"## Format\n\n\nfor example:\n\n\nAnd you may get something like:",
"## Dataset and Training\nI use the trainer I implemented in HakuPhi to run the training.\nwith 10epoch on 7.2M data. This model have roughly 10~15B token seen.\n\nThe dataset is exported by HakuBooru with my danbooru sqlite database. Use the percentile of fav_count on each rating to filter the data. (2M = top 25%, 5.3M = top 75%)",
"## Utilities\nHF space: URL\nDemo for DTG + Kohaku XL Epsilon: URL\nSD-WebUI Extension: URL\nComfyUI Node: URL"
] | [
"TAGS\n#transformers #safetensors #gguf #llama #text-generation #not-for-all-audiences #art #en #dataset-KBlueLeaf/danbooru2023-sqlite #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DanTagGen - delta\nDanTagGen(Danbooru Tag Generator) is inspired from p1atdev's dart project.\nBut with different arch, dataset, format and different training strategy.",
"## Difference between versions\nalpha: pretrain on 2M dataset, smaller batch size. Limited ability\nbeta: pretrain on 5.3M dataset, larger batch size. More stable, better ability with only a few information provided.\ndelta: pretrain on 7.2M dataset, larger batch size. Slightly underfit but better diversity. quality tag introduced.",
"## Model arch\nThis version of DTG is trained from scratch with 400M param LLaMA arch.(In my personal preference I will call it NanoLLaMA)\nSince it is llama arch. Theoritically it should be able to be used in any LLaMA inference interface.\n\nThis repo also provided converted FP16 gguf model and quantized 8bit/6bit gguf models.\nBasically it is recommended to use URL or llama-cpp-python to run this model. Which will be very fast.",
"## Format\n\n\nfor example:\n\n\nAnd you may get something like:",
"## Dataset and Training\nI use the trainer I implemented in HakuPhi to run the training.\nwith 10epoch on 7.2M data. This model have roughly 10~15B token seen.\n\nThe dataset is exported by HakuBooru with my danbooru sqlite database. Use the percentile of fav_count on each rating to filter the data. (2M = top 25%, 5.3M = top 75%)",
"## Utilities\nHF space: URL\nDemo for DTG + Kohaku XL Epsilon: URL\nSD-WebUI Extension: URL\nComfyUI Node: URL"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hamzahey/Falcon_7b_Instruct_sharded | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:28:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LongformerTest
This model is a fine-tuned version of [valhalla/longformer-base-4096-finetuned-squadv1](https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1) on an unknown dataset.
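Since the base model is an extractive question-answering Longformer fine-tuned on SQuAD v1, a minimal usage sketch (with an illustrative question and context) might look like:

```python
# Sketch: extractive QA with this fine-tune via the Transformers pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="carloswbarros/LongformerTest",  # repo id from this card
)

result = qa(
    question="How long can input sequences be?",
    context="Longformer-base-4096 processes input sequences of up to 4096 tokens "
            "by combining local windowed attention with task-specific global attention.",
)
print(result["answer"], result["score"])
```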
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 0
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "valhalla/longformer-base-4096-finetuned-squadv1", "model-index": [{"name": "LongformerTest", "results": []}]} | carloswbarros/LongformerTest | null | [
"transformers",
"tensorboard",
"safetensors",
"longformer",
"question-answering",
"generated_from_trainer",
"base_model:valhalla/longformer-base-4096-finetuned-squadv1",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:29:23+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #longformer #question-answering #generated_from_trainer #base_model-valhalla/longformer-base-4096-finetuned-squadv1 #license-mit #endpoints_compatible #region-us
|
# LongformerTest
This model is a fine-tuned version of valhalla/longformer-base-4096-finetuned-squadv1 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 0
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# LongformerTest\n\nThis model is a fine-tuned version of valhalla/longformer-base-4096-finetuned-squadv1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 0\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #longformer #question-answering #generated_from_trainer #base_model-valhalla/longformer-base-4096-finetuned-squadv1 #license-mit #endpoints_compatible #region-us \n",
"# LongformerTest\n\nThis model is a fine-tuned version of valhalla/longformer-base-4096-finetuned-squadv1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 0\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1.4b-deduped-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1.4b-deduped-v0/
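A minimal loading sketch for this 4-bit quantization, assuming `bitsandbytes` and `accelerate` are installed; the repo id below follows the naming pattern of this collection and is an assumption:

```python
# Sketch: loading the 4-bit bitsandbytes quantization with Transformers.
# The repo id is assumed from the collection's naming scheme -- adjust if needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-4bits"  # assumed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("The Pythia suite is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```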
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1.4B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence:
[[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
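As a rough cross-check of the 1.4B row above (an approximation: each GPT-NeoX block carries about 12·d² weight-matrix parameters, with biases and layer norms making up the small remainder):

```python
# Approximate non-embedding parameters for the 1.4B configuration:
# 4*d^2 for attention (Q, K, V, output) plus 8*d^2 for the 4x-wide MLP, per layer.
d_model, n_layers = 2048, 24
approx = 12 * d_model**2 * n_layers
print(approx)                  # 1207959552
print(1_208_602_624 - approx)  # 643072 left over for biases and layer norms
```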
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1.4B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-1.4B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as
[GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
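To make the branch naming concrete, the sketch below walks a few of the published revisions, mirroring the repository naming used in the Quickstart above; the chosen steps are examples only, and each load downloads a full 1.4B checkpoint.

```python
from transformers import GPTNeoXForCausalLM

# Branch names follow the renamed 2M-batch convention: step1000 ... step143000.
for step in (1000, 71000, 143000):
    model = GPTNeoXForCausalLM.from_pretrained(
        "EleutherAI/pythia-1.4b-deduped",
        revision=f"step{step}",
    )
    print(step, sum(p.numel() for p in model.parameters()))
```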
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
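To re-run a slice of these evaluations, a hedged sketch against a recent LM Evaluation Harness release might look like the following; the API and task names have changed across harness versions, so treat this as illustrative rather than the exact invocation behind the published plots.

```python
import lm_eval

# Zero-shot evaluation of the deduplicated 1.4B model on a few of the
# benchmarks shown above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b-deduped",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_challenge", "sciq"],
)
print(results["results"])
```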
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:29:47+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1.4b-deduped-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1.4B-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1.4B-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-1.4B-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
* LAMBADA – OpenAI
* Physical Interaction: Question Answering (PIQA)
* WinoGrande
* AI2 Reasoning Challenge – Challenge Set
* SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1.4B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1.4B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1.4B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1.4B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1.4b-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1.4b-deduped-v0/
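A minimal sketch of 8-bit loading, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed; loading this pre-quantized repository directly should behave equivalently, and the config below simply shows the same quantization applied on the fly to the original checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# On-the-fly 8-bit quantization of the original model (this repository ships
# the result of the same process already serialized).
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-1.4b-deduped-v0",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b-deduped-v0")
```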
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1.4B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence:
[[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1.4B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-1.4B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as
[GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:31:06+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1.4b-deduped-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1.4B-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1.4B-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-1.4B-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
* LAMBADA – OpenAI
* Physical Interaction: Question Answering (PIQA)
* WinoGrande
* AI2 Reasoning Challenge – Challenge Set
* SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1.4B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1.4B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1.4B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1.4B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-deduped-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
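Since this particular repository hosts a bitsandbytes 4-bit quantization of Pythia-1B-deduped, you will usually want to load it with quantization support available. The snippet below is a minimal sketch rather than part of the original card: it assumes `transformers`, `accelerate`, and `bitsandbytes` are installed and a CUDA GPU is present; the repository id is taken from this page and the generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-4bits"

# If the checkpoint was serialized together with its bitsandbytes config,
# transformers applies the 4-bit quantization automatically on load.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```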
### Training
#### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
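If you want to inspect the training corpus referenced above (listed in this card's metadata as `EleutherAI/the_pile_deduplicated`), streaming it with the `datasets` library avoids downloading the full ~825GiB. A small sketch, assuming the dataset exposes a `text` column:

```python
from datasets import load_dataset

# Stream a few documents from the deduplicated Pile without a full download.
pile = load_dataset("EleutherAI/the_pile_deduplicated", split="train", streaming=True)
for i, example in enumerate(pile):
    print(example["text"][:200].replace("\n", " "))
    if i == 2:
        break
```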
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
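As a quick sanity check on the numbers above (not part of the original card), the stated token total and checkpoint spacing follow directly from the step count and batch size:

```python
# 143,000 steps at 2,097,152 tokens per step reproduces the training-token total,
# and 143 evenly spaced checkpoints means one save every 2,097,152,000 tokens.
steps = 143_000
batch_tokens = 2_097_152

total_tokens = steps * batch_tokens
print(f"{total_tokens:,}")         # 299,892,736,000
print(f"{total_tokens // 143:,}")  # 2,097,152,000 tokens between checkpoints
```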
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
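To work with the raw numbers rather than the plots, the per-step JSON files under `results/json/` in the Pythia GitHub repository can be aggregated with a few lines of Python. This is a generic sketch; the path pattern and the keys inside each file are placeholders that should be checked against the repository's actual layout.

```python
import glob
import json

# Hypothetical layout -- adjust the glob to match results/json/ in the Pythia repo.
records = []
for path in sorted(glob.glob("results/json/pythia-1b-deduped/*.json")):
    with open(path) as f:
        records.append((path, json.load(f)))

# Print whatever task metrics each file contains (e.g. LAMBADA or PIQA accuracy).
for path, rec in records:
    print(path, rec.get("results", rec))
```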
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
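To relate the two columns in this table, note that the non-embedding counts are close to the standard transformer estimate of 12 · n_layers · d_model² (attention plus MLP weight matrices), while the gap to the total is dominated by the untied input and output embedding matrices. A back-of-the-envelope check for the 1B row, not part of the original card (the padded vocabulary size of 50,304 is inferred from the table itself):

```python
# Pythia-1B: 16 layers, d_model = 2048 (see the engineering table above).
layers, d_model, vocab = 16, 2048, 50_304

non_embedding_approx = 12 * layers * d_model**2            # 805,306,368 vs. 805,736,448 listed
total_approx = non_embedding_approx + 2 * vocab * d_model  # untied input + output embeddings
print(f"{non_embedding_approx:,}  {total_approx:,}")       # ~1.011B vs. 1,011,781,632 listed
```

The remaining few hundred thousand parameters come from biases and LayerNorm weights, which the rough formula ignores.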
| {} | RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:31:20+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1b-deduped-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1B-deduped
-----------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-1B-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge – Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | null | What are Prostalove pills?
Prostalove Tablet is a premium dietary supplement carefully formulated to support prostate health. Made with a blend of natural ingredients known for their prostate-supporting properties, Prostalove aims to address common problems associated with prostate health, such as difficulty urinating and inflammation.
Official website:<a href="https://www.nutritionsee.com/prostalndon">www.Prostalove.com</a>
<p><a href="https://www.nutritionsee.com/prostalndon"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Prostalove-Indonesia-1.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/prostalndon">Buy now!! Click the link below for more information and get a 50% discount now... Hurry
</a>
Official website:<a href="https://www.nutritionsee.com/prostalndon">www.Prostalove.com</a> | {"license": "apache-2.0"} | ProstaloveIndonesia/ProstaloveIndonesia | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-23T07:31:24+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| What are Prostalove pills?
Prostalove Tablet is a premium dietary supplement carefully formulated to support prostate health. Made with a blend of natural ingredients known for their prostate-supporting properties, Prostalove aims to address common problems associated with prostate health, such as difficulty urinating and inflammation.
Official website:<a href="URL
<p><a href="URL <img src="URL alt="enter image description here"> </a></p>
<a href="URL now!! Click the link below for more information and get a 50% discount now... Hurry
</a>
Official website:<a href="URL | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
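Because this repository contains a bitsandbytes 8-bit quantization, it can be loaded directly in the same way as the upstream checkpoints; alternatively, the original model named on this page can be quantized on the fly. The snippet below is a minimal sketch under those assumptions (it requires `transformers`, `accelerate`, and `bitsandbytes` plus a CUDA GPU; the generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the upstream checkpoint named on this page to 8 bits at load time.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-1b-deduped-v0",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b-deduped-v0")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```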
### Training
#### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:32:39+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1b-deduped-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1B-deduped
-----------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
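A minimal sketch of that snippet, mirroring the upstream Pythia quickstart (the `step3000` revision and local cache path are illustrative):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```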
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-1B-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
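As a quick sanity check, the checkpoint spacing and the step count both multiply out to the same total token budget reported above:

```python
# Token accounting from the paragraph above.
tokens_total = 299_892_736_000
assert 143 * 2_097_152_000 == tokens_total     # 143 checkpoints, one every 2,097,152,000 tokens
assert 143_000 * 2_097_152 == tokens_total     # 143000 steps at a 2M-token batch size
```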
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
* LAMBADA – OpenAI
* Physical Interaction: Question Answering (PIQA)
* WinoGrande
* AI2 Reasoning Challenge – Challenge Set
* SciQ
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-1B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-to-image | diffusers | ### model Dreambooth model trained by amilyjenksy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
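Outside the Colab, the checkpoint can also be loaded with the diffusers library. A minimal sketch, assuming a CUDA device and that this repository loads as a standard `StableDiffusionPipeline` (the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "amilyjenksy/model", torch_dtype=torch.float16
).to("cuda")

# Prompt wording is a placeholder; use whatever instance token the concept was trained with.
image = pipe("a photo of the trained concept, high detail").images[0]
image.save("sample.png")
```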
Sample pictures of this concept:
| {"license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion"]} | amilyjenksy/model | null | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T07:32:43+00:00 | [] | [] | TAGS
#diffusers #safetensors #text-to-image #stable-diffusion #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
| ### model Dreambooth model trained by amilyjenksy with TheLastBen's fast-DreamBooth notebook
Test the concept via A1111 Colab fast-Colab-A1111
Sample pictures of this concept:
| [
"### model Dreambooth model trained by amilyjenksy with TheLastBen's fast-DreamBooth notebook\n\n\nTest the concept via A1111 Colab fast-Colab-A1111\n\nSample pictures of this concept:"
] | [
"TAGS\n#diffusers #safetensors #text-to-image #stable-diffusion #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"### model Dreambooth model trained by amilyjenksy with TheLastBen's fast-DreamBooth notebook\n\n\nTest the concept via A1111 Colab fast-Colab-A1111\n\nSample pictures of this concept:"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-deduped-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
# Load the checkpoint saved at training step 3000; omit `revision` to get the
# final (step 143000) weights from the `main` branch.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

# Greedy decoding with the default generation settings.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:33:08+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-70m-deduped-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
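As with the other quantized checkpoints in this collection, the 4-bit weights should load directly; a small sketch that also prints the approximate memory footprint (assumes `bitsandbytes` and `accelerate` are installed):

```python
from transformers import AutoModelForCausalLM

repo = "RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-4bits"

model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
print(f"memory footprint: {model.get_memory_footprint() / 1e6:.1f} MB")
```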
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-70M-deduped
------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
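To study how the model changes over training, the intermediate checkpoints can be loaded by branch name. A sketch with an illustrative subset of revisions, measuring loss on a fixed prompt:

```python
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")
inputs = tokenizer("The capital of France is Paris.", return_tensors="pt")

# Illustrative subset of the saved branches; the full set runs up to step143000.
for revision in ["step1000", "step71000", "step143000"]:
    model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped", revision=revision)
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(revision, f"loss = {loss.item():.3f}")
```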
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-70M-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
* LAMBADA – OpenAI
* Physical Interaction: Question Answering (PIQA)
* WinoGrande
* AI2 Reasoning Challenge – Challenge Set
* SciQ
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
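A minimal sketch of loading these adapters with the `peft` library, using the base model and adapter id from this repository's metadata (untested here; adjust device placement as needed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model listed for this adapter
adapter_id = "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Epistemic_tiny_0.8_Seed102"

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```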
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
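In `transformers`, this list corresponds roughly to the following `BitsAndBytesConfig`. The sketch below is illustrative only: it assumes recent `transformers`, `bitsandbytes`, and `accelerate` releases, and the base-model call uses the base model named in this card's metadata rather than a verified training script.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reconstruction of the quantization settings listed above:
# 4-bit NF4 weights, double quantization, bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # base model named in the metadata
    quantization_config=bnb_config,
    device_map="auto",
)
```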
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Epistemic_tiny_0.8_Seed102 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-23T07:33:23+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
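Since this repository is tagged for the `peft` library with TinyLlama-1.1B-Chat-v1.0 as its base model, a minimal loading sketch might look like the following. Treat it as an assumption, not a verified recipe: it presumes the repository holds PEFT adapter weights (not merged weights) and reuses the 4-bit settings reported above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Epistemic_tiny_0.8_Seed102"

# Load the base model with the same 4-bit NF4 configuration reported above.
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the fine-tuned PEFT weights from this repository.
model = PeftModel.from_pretrained(base, adapter_id)
```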
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Epistemic_tiny_0.8_Seed102 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-23T07:33:29+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-deduped-v0/
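A minimal loading sketch for these 8-bit weights (an assumption, not a tested recipe: it presumes `bitsandbytes` and `accelerate` are installed and that the quantization config stored with the checkpoint is picked up automatically):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-8bits"

# The serialized 8-bit quantization config ships with the checkpoint,
# so a plain from_pretrained call should restore the quantized weights.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```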
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# `revision` selects an intermediate checkpoint branch; `step3000` is the
# third saved checkpoint. Omit it (or use `main`) for the final weights.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
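As a quick sanity check on the checkpoint bookkeeping above, the batch size, checkpoint spacing, and total token count fit together as follows (an illustrative calculation, not part of the original card):

```python
# Figures quoted above, restated in code.
tokens_per_step = 2_097_152              # 2M-token batch size
tokens_per_checkpoint = 2_097_152_000    # checkpoint spacing in tokens
total_tokens = 299_892_736_000           # tokens seen over training

steps_per_checkpoint = tokens_per_checkpoint // tokens_per_step   # -> 1000 steps
total_steps = total_tokens // tokens_per_step                     # -> 143000 steps
num_checkpoints = total_tokens // tokens_per_checkpoint           # -> 143 checkpoints
print(steps_per_checkpoint, total_steps, num_checkpoints)
```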
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
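Numbers like those plotted above can be regenerated locally with the harness. A hedged sketch using its Python entry point — it assumes the `simple_evaluate` API and task names of a recent `lm-evaluation-harness` release, which may differ from the version used for the original runs:

```python
import lm_eval  # pip install lm-eval

# Evaluate the final checkpoint on the same benchmarks shown above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-70m-deduped",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_challenge", "sciq"],
    batch_size=16,
)
print(results["results"])
```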
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:33:29+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-70m-deduped-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-70M-deduped
------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-70M-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge – Challenge Set

SciQ
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-70M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
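Because every training checkpoint is exposed as a branch, a common pattern in interpretability work is to load the same model at several training steps and compare its behavior. The following is a brief, hedged sketch of that pattern, reusing the `stepN` revision names described above; the chosen steps are arbitrary examples.

```python
# Hedged sketch: compare the same prompt across two saved training checkpoints.
from transformers import GPTNeoXForCausalLM, AutoTokenizer

prompt = "The capital of France is"

for revision in ["step1000", "step143000"]:  # any saved `stepN` branches
    model = GPTNeoXForCausalLM.from_pretrained(
        "EleutherAI/pythia-70m-deduped", revision=revision
    )
    tokenizer = AutoTokenizer.from_pretrained(
        "EleutherAI/pythia-70m-deduped", revision=revision
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    tokens = model.generate(**inputs, max_new_tokens=10)
    print(revision, tokenizer.decode(tokens[0]))
```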
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
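As a quick sanity check, the token totals above follow directly from the step count and per-step batch size:

```python
# Sanity check on the stated training totals.
steps = 143_000
tokens_per_step = 2_097_152
print(steps * tokens_per_step)           # 299,892,736,000 tokens seen in total
print(2_097_152_000 // tokens_per_step)  # checkpoints saved every 1,000 steps
print(steps // 1_000)                    # 143 saved checkpoints
```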
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
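For reference, the following is a hedged sketch of reproducing a single-task evaluation with a recent (0.4-series) release of the LM Evaluation Harness from Python. The exact API, task names, and settings have changed across harness versions, so the numbers it produces will not necessarily match the plots above.

```python
# Hedged sketch: evaluate one Pythia checkpoint on one task with lm-eval (0.4.x).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-410m-deduped-v0,revision=step143000",
    tasks=["lambada_openai"],
    batch_size=8,
)
print(results["results"]["lambada_openai"])
```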
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:34:29+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m-deduped-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
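```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```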
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge – Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped-v0/
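A minimal, hedged sketch of running this model in 8-bit with the Transformers library is shown below. It assumes `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available; if the saved checkpoint does not already carry its quantization settings, the same effect can be obtained by quantizing the original weights on the fly, as done here.

```python
# Hedged sketch: load Pythia-410M-deduped in 8-bit via bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-410m-deduped-v0",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-deduped-v0")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```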
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
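Beyond generation, a loaded checkpoint can be inspected directly; for example, the parameter totals tabulated at the end of this card can be recomputed from the model object. The following is a small, hedged sketch that loads the full-precision weights on CPU; the printed figure should land close to the ~405M total reported for the 410M model.

```python
# Hedged sketch: recompute the total parameter count for Pythia-410M-deduped.
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-410m-deduped-v0")
total = sum(p.numel() for p in model.parameters())
print(f"total parameters: {total:,}")  # expected to be close to 405,334,016
```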
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:35:11+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m-deduped-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
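```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```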
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge – Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-410M-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
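Pending an official snippet, here is a minimal, hedged loading sketch with 🤗 Transformers. It assumes the checkpoint ships a Llama-3-style chat template and a standard causal-LM head; the prompt, dtype, and device settings are illustrative assumptions, not documented behavior.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nmdr/Llama-3-8B-Instruct-Physics-5k-Scar"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt only; adjust to your use case.
messages = [{"role": "user", "content": "State Newton's second law in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```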
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | nmdr/Llama-3-8B-Instruct-Physics-5k-Scar | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:35:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_Ba1
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6909
- F1: 0.5477
## Model description
More information needed
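Although no official usage example is provided, the checkpoint is an ALBERT model fine-tuned for multiple choice, so a minimal sketch along the following lines should work. The premise and choices below are illustrative COPA-style inputs, not items from the (unknown) training data.

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "Ariffiq99/COPA_Ba1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man broke his toe."  # illustrative example
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Pair the premise with every candidate, then add the choice dimension expected
# by multiple-choice heads: (batch_size, num_choices, seq_len).
encoding = tokenizer([premise] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```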
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.6933 | 0.4512 |
| No log | 2.0 | 126 | 0.6931 | 0.5436 |
| No log | 3.0 | 189 | 0.6932 | 0.4708 |
| No log | 4.0 | 252 | 0.6931 | 0.5418 |
| No log | 5.0 | 315 | 0.6923 | 0.5521 |
| No log | 6.0 | 378 | 0.6931 | 0.5202 |
| No log | 7.0 | 441 | 0.6926 | 0.5691 |
| 0.6994 | 8.0 | 504 | 0.6898 | 0.5562 |
| 0.6994 | 9.0 | 567 | 0.6929 | 0.5402 |
| 0.6994 | 10.0 | 630 | 0.6909 | 0.5477 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "albert/albert-base-v2", "model-index": [{"name": "COPA_Ba1", "results": []}]} | Ariffiq99/COPA_Ba1 | null | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"multiple-choice",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:35:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #albert #multiple-choice #generated_from_trainer #base_model-albert/albert-base-v2 #license-apache-2.0 #endpoints_compatible #region-us
| COPA\_Ba1
=========
This model is a fine-tuned version of albert/albert-base-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6909
* F1: 0.5477
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #albert #multiple-choice #generated_from_trainer #base_model-albert/albert-base-v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
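No official snippet is provided. Based on the repository's `StableDiffusionXLPipeline` tag, a minimal hedged sketch would look like the following; the prompt, dtype, and device choices are assumptions.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint as an SDXL text-to-image pipeline (GPU assumed).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/dominusFantasy_v10", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt only.
image = pipe("a fantasy castle at sunset, highly detailed").images[0]
image.save("sample.png")
```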
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "diffusers"} | Niggendar/dominusFantasy_v10 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-04-23T07:35:57+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Saiga_timelist_task50steps
This model is a fine-tuned version of [TheBloke/Llama-2-7B-fp16](https://huggingface.co/TheBloke/Llama-2-7B-fp16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7980
## Model description
More information needed
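No usage example is provided. Assuming this repository holds a standard PEFT adapter for the base model listed above, a minimal hedged sketch is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Llama-2-7B-fp16"
adapter_id = "marcus2000/Saiga_timelist_task50steps"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

# Illustrative prompt only; the expected prompt format/language is not documented here.
prompt = "Example prompt"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```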
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8261 | 0.64 | 25 | 1.8215 |
| 1.7159 | 1.29 | 50 | 1.7980 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Llama-2-7B-fp16", "model-index": [{"name": "Saiga_timelist_task50steps", "results": []}]} | marcus2000/Saiga_timelist_task50steps | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Llama-2-7B-fp16",
"region:us"
] | null | 2024-04-23T07:36:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us
| Saiga\_timelist\_task50steps
============================
This model is a fine-tuned version of TheBloke/Llama-2-7B-fp16 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7980
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 10
* total\_train\_batch\_size: 20
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 50
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TextSummarizerAI_Basic_v1
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3319
- Rouge1: 0.1985
- Rouge2: 0.1019
- Rougel: 0.1702
- Rougelsum: 0.17
- Gen Len: 19.0
## Model description
More information needed
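No usage example is provided. Since the base model is `google-t5/t5-small`, a minimal hedged sketch with the summarization pipeline would be the following; whether the `summarize:` prefix helps depends on how the fine-tuning data was formatted.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Bhotuya/TextSummarizerAI_Basic_v1")

article = (
    "Example article text goes here. Replace this placeholder with the document "
    "you want condensed into a short summary."
)
# T5 checkpoints are often prompted with a task prefix; drop it if results look off.
print(summarizer("summarize: " + article, max_length=60, min_length=10)[0]["summary_text"])
```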
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7851 | 0.1315 | 0.0434 | 0.1122 | 0.1124 | 19.0 |
| No log | 2.0 | 124 | 2.5568 | 0.1442 | 0.0559 | 0.1197 | 0.1193 | 19.0 |
| No log | 3.0 | 186 | 2.4669 | 0.1536 | 0.062 | 0.127 | 0.1268 | 19.0 |
| No log | 4.0 | 248 | 2.4149 | 0.1768 | 0.0786 | 0.1472 | 0.1472 | 19.0 |
| No log | 5.0 | 310 | 2.3847 | 0.1947 | 0.0959 | 0.1653 | 0.1651 | 19.0 |
| No log | 6.0 | 372 | 2.3634 | 0.1973 | 0.0999 | 0.1691 | 0.1688 | 19.0 |
| No log | 7.0 | 434 | 2.3487 | 0.1981 | 0.1017 | 0.1704 | 0.1703 | 19.0 |
| No log | 8.0 | 496 | 2.3404 | 0.1982 | 0.102 | 0.1706 | 0.1703 | 19.0 |
| 2.7541 | 9.0 | 558 | 2.3333 | 0.199 | 0.1024 | 0.1711 | 0.1709 | 19.0 |
| 2.7541 | 10.0 | 620 | 2.3319 | 0.1985 | 0.1019 | 0.1702 | 0.17 | 19.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google-t5/t5-small", "model-index": [{"name": "TextSummarizerAI_Basic_v1", "results": []}]} | Bhotuya/TextSummarizerAI_Basic_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:36:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| TextSummarizerAI\_Basic\_v1
===========================
This model is a fine-tuned version of google-t5/t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3319
* Rouge1: 0.1985
* Rouge2: 0.1019
* Rougel: 0.1702
* Rougelsum: 0.17
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** shubham11
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
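A minimal, hedged loading sketch with plain 🤗 Transformers follows. It assumes merged weights and the Mistral-Instruct chat template were pushed to this repository; adjust accordingly if only an adapter was uploaded.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shubham11/mistralrelease102"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only.
messages = [{"role": "user", "content": "Summarise what this model was fine-tuned for."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```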
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | shubham11/mistralrelease102 | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:36:18+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: shubham11
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: shubham11\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: shubham11\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-v0/
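A hedged loading sketch for this 4-bit (bitsandbytes) quantized repository; the quantization config is expected to ship with the checkpoint, so a plain `from_pretrained` call should pick it up (a GPU and the `bitsandbytes` package are assumed).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-2.8b-v0-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```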
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
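# Load the model from the "step3000" checkpoint branch; cache_dir only controls where files are stored.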
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
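# Tokenize a short prompt, generate a continuation, and decode it back to text.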
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:37:46+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B
-----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
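A minimal sketch of that snippet (the standard Transformers loading pattern from the upstream Pythia cards; `step3000` and the cache path are just the upstream example values):
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the third saved checkpoint (branch "step3000") of pythia-70m-deduped.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```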
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-2.8B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI
Physical Interaction: Question Answering (PIQA)
WinoGrande
AI2 Reasoning Challenge—Challenge Set
SciQ
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m/
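A minimal loading sketch for this 4-bit repack, assuming `bitsandbytes` and `accelerate` are installed and that the repository ships its `quantization_config` (as bnb exports usually do):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-160m-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The saved quantization_config makes Transformers load the weights in 4-bit via bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```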
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-160M
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
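If you want to enumerate those checkpoint branches programmatically, here is a sketch using `huggingface_hub` (the `step` prefix filter is an assumption based on the branch naming described above):
```python
from huggingface_hub import list_repo_refs

refs = list_repo_refs("EleutherAI/pythia-160m")
# Keep only the checkpoint branches, e.g. "step0", "step1", ..., "step143000".
steps = sorted(
    (b.name for b in refs.branches if b.name.startswith("step")),
    key=lambda name: int(name.removeprefix("step")),
)
print(len(steps), "checkpoint branches; first:", steps[0], "last:", steps[-1])
```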
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model
need not produce the most “accurate” text. Never rely on Pythia-160M to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-160M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
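To reproduce a data point yourself, recent releases of the harness (v0.4+) expose roughly the following Python API; task names and the exact signature can vary between harness versions, so treat this as a sketch:
```python
import lm_eval

# Evaluate the released checkpoint on two of the tasks plotted below.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai", "piqa"],
    batch_size=8,
)
print(results["results"])
```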
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
  which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-160m-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:37:57+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model
need not produce the most “accurate” text. Never rely on Pythia-160M to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-160M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI
Physical Interaction: Question Answering (PIQA)
WinoGrande
AI2 Reasoning Challenge—Easy Set
SciQ
Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/BAAI/JudgeLM-33B-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
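If you prefer to stay in Python, the short sketch below is one way to run one of these files with `llama-cpp-python`; the package choice, the quant file name, and the context size are my own assumptions rather than anything this repository prescribes (llama.cpp's own CLI works just as well).

```python
from llama_cpp import Llama

# Assumes `pip install llama-cpp-python` and that the Q4_K_M file from the table
# below has already been downloaded into the working directory.
llm = Llama(model_path="JudgeLM-33B-v1.0.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "You are a judge. Compare the two answers below and explain which is better.\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```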
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/JudgeLM-33B-v1.0-GGUF/resolve/main/JudgeLM-33B-v1.0.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "BAAI/JudgeLM-33B-v1.0", "quantized_by": "mradermacher"} | mradermacher/JudgeLM-33B-v1.0-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:BAAI/JudgeLM-33B-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:37:59+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-BAAI/JudgeLM-33B-v1.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-BAAI/JudgeLM-33B-v1.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m/
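A minimal loading sketch, assuming a CUDA GPU, the `bitsandbytes` and `accelerate` packages, and that the checkpoint carries its 8-bit quantization config (as bitsandbytes exports typically do); the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/EleutherAI_-_pythia-160m-8bits"  # repo id from this card

# A plain from_pretrained call picks up the stored 8-bit quantization config;
# bitsandbytes must be installed for the quantized weights to load.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```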
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-160M
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-160M to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
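# "step3000" selects one of the checkpoint branches of the repository; omit the
# `revision` argument (or pass "main") to load the fully trained model.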
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-160M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
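As a quick sanity check on these figures (my arithmetic, not part of the original card), the quoted totals are mutually consistent:

```python
# 143,000 steps at 2,097,152 tokens per step gives the quoted total token count,
# and the checkpoint spacing of 2,097,152,000 tokens works out to every 1,000 steps.
tokens_per_step = 2_097_152
print(143_000 * tokens_per_step)         # 299892736000
print(2_097_152_000 // tokens_per_step)  # 1000
```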
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
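If you want to reproduce a slice of these numbers yourself, something along the lines of the sketch below should work with a v0.4-style release of the harness; treat the exact entry point and argument spelling as version-dependent rather than guaranteed.

```python
import lm_eval

# Illustrative only: score Pythia-160M on two of the tasks plotted below.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai", "piqa"],
    batch_size=8,
)
print(results["results"])
```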
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-160m-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:38:46+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-160M to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-160M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-160M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# alikhan234/my_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5248
- Validation Loss: 2.3941
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
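For readers who want to recreate this setup, the snippet below is a hypothetical reconstruction of the optimizer described by the config dict above (values are read off that dict, not taken from the actual training script):

```python
import tensorflow as tf

# Linear decay from 2e-5 to 0 over 60 steps, as listed in the PolynomialDecay config.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=60,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```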
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.7456 | 3.2536 | 0 |
| 2.8165 | 2.3941 | 1 |
| 2.5248 | 2.3941 | 2 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
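A hedged usage sketch (the repo id comes from this card's header and may not be publicly loadable; `framework="tf"` reflects the fact that the model was trained with Keras/TensorFlow):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="alikhan234/my_qa_model", framework="tf")
print(qa(
    question="Which base model was fine-tuned?",
    context="A DistilBERT base uncased model was fine-tuned for extractive question answering.",
))
```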
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "alikhan234/my_qa_model", "results": []}]} | alikhan234/my_qa_model | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:38:58+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| alikhan234/my\_qa\_model
========================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 2.5248
* Validation Loss: 2.3941
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 60, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.0
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 60, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 60, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
token-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
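A hypothetical usage sketch; the repo id is taken from this card's header, and the task (Japanese NER in the XTREME style) is inferred from the model name and tags rather than documented above.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="rizkyfoxcale/xlm-roberta-ner-ja-xtreme",
    aggregation_strategy="simple",
)
print(ner("東京タワーは日本の東京都港区にあります。"))
```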
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | rizkyfoxcale/xlm-roberta-ner-ja-xtreme | null | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:39:10+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
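The available checkpoint branches can be enumerated programmatically; below is a minimal sketch using the `huggingface_hub` client (the repository name follows the original-model link above, and the exact branch layout is an assumption about the Hub rather than something stated in this card):

```python
from huggingface_hub import list_repo_refs

# One branch per intermediate checkpoint, alongside "main".
refs = list_repo_refs("EleutherAI/pythia-2.8b-v0")
print(sorted(branch.name for branch in refs.branches))
```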
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
# revision selects one of the 143 intermediate checkpoints, which are hosted
# as branches of the model repository (here "step3000").
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt and greedily generate a continuation.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
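Because this repository already contains bitsandbytes 8-bit weights rather than the original full-precision checkpoint, it can be loaded directly; a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed and that the quantization config stored alongside the weights is picked up automatically:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-2.8b-v0-8bits"

# device_map="auto" places the 8-bit weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```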
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
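The bookkeeping above is easy to verify from the quoted numbers alone; a small sketch:

```python
tokens_per_step = 2_097_152            # 2M-token batch size
total_steps = 143_000
tokens_per_checkpoint = 2_097_152_000

total_tokens = tokens_per_step * total_steps
print(f"{total_tokens:,}")                    # 299,892,736,000 tokens, as stated above
print(total_tokens // tokens_per_checkpoint)  # 143 evenly spaced checkpoints
```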
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
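As a sanity check on the 2.8B row, the gap between total and non-embedding parameters is exactly two embedding matrices (input embedding plus output unembedding); the padded vocabulary size of 50,304 used below is inferred from the table rather than stated in this card:

```python
vocab_size, d_model = 50_304, 2_560   # assumed padded vocabulary; model dim from the table
non_embedding = 2_517_652_480

total = non_embedding + 2 * vocab_size * d_model
print(f"{total:,}")   # 2,775,208,960 -> matches the "total params" column
```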
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:40:39+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B
-----------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-2.8B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change over the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-deduped-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-deduped-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
# revision selects one of the 143 intermediate checkpoints, which are hosted
# as branches of the model repository (here "step3000").
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt and greedily generate a continuation.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
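As with the full-precision snippet above, the pre-quantized 4-bit weights in this repository can be loaded directly; a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed (quantizing the original EleutherAI checkpoint on the fly with `BitsAndBytesConfig(load_in_4bit=True)` would be the equivalent alternative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-4bits"

# The 4-bit quantization config stored with the checkpoint is applied on load.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```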
### Training
#### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
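The tokenizer statement above can be checked directly; a short sketch (repository names are taken from the links in this card, and the equality checks are expected, not guaranteed, to print `True`):

```python
from transformers import AutoTokenizer

neox_tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
pythia_tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b-deduped-v0")

# Identical vocabularies imply identical tokenization of any input text.
print(neox_tok.get_vocab() == pythia_tok.get_vocab())
print(pythia_tok("Hello, I am").input_ids == neox_tok("Hello, I am").input_ids)
```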
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
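The harness scores most of the tasks above by comparing the log-likelihood the model assigns to candidate continuations; the sketch below reproduces that scoring step on two made-up prompts (the prompts are hypothetical illustrations, not drawn from any benchmark, and this is not the harness itself):

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-2.8b-deduped-v0"
model = GPTNeoXForCausalLM.from_pretrained(model_id).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical (context, continuation) pairs in the spirit of LAMBADA-style scoring.
examples = [("The capital of France is", " Paris"),
            ("Two plus two equals", " four")]

for context, target in examples:
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(torch.cat([ctx_ids, tgt_ids], dim=1)).logits
    # Positions ctx_len-1 ... end-1 predict the target tokens.
    log_probs = torch.log_softmax(logits[0, ctx_ids.shape[1] - 1 : -1], dim=-1)
    score = log_probs.gather(1, tgt_ids[0].unsqueeze(1)).sum().item()
    print(f"{context!r} -> {target!r}: log-likelihood {score:.2f}")
```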
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:40:53+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b-deduped-v0 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-2.8B-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge – Challenge Set

SciQ

### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-deduped - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-deduped/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-160M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
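The architecture columns in this table can be cross-checked against the released weights by reading the model's stored configuration. A minimal sketch (field names follow the `GPTNeoXConfig` class in the Transformers library; the printed values for Pythia-160M should match the 160M row above):

```python
from transformers import AutoConfig

# Read the architecture hyperparameters straight from the hosted config.
config = AutoConfig.from_pretrained("EleutherAI/pythia-160m-deduped")
print(config.num_hidden_layers)    # layers: 12
print(config.hidden_size)          # model dim: 768
print(config.num_attention_heads)  # heads: 12
```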
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
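The checkpoint naming scheme described above can be enumerated programmatically. A minimal sketch in plain Python, using only the branch names listed in this section:

```python
# 'step0', ten log-spaced early checkpoints (step1 ... step512),
# then one checkpoint every 1000 steps from step1000 to step143000.
revisions = (
    ["step0"]
    + [f"step{2**i}" for i in range(10)]
    + [f"step{i}" for i in range(1000, 144000, 1000)]
)
assert len(revisions) == 154
# Any of these names can be passed as `revision=` to `from_pretrained`
# to load the corresponding intermediate checkpoint.
```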
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not
produce the most “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# `revision` selects the checkpoint branch; "step3000" is the checkpoint
# saved after 3,000 training steps. `cache_dir` sets where the files are
# downloaded locally.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
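For this bnb 4-bit repackaging specifically, loading should look much the same. A minimal sketch, assuming `bitsandbytes` is installed and the 4-bit quantization settings are stored alongside the weights in this repository (if they are not, an explicit `BitsAndBytesConfig(load_in_4bit=True)` can be passed via `quantization_config` instead):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-160m-deduped-4bits"

# device_map="auto" places the quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```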
## Training
### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
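The token and checkpoint counts quoted above are mutually consistent, as a quick arithmetic check shows:

```python
batch_tokens = 2_097_152            # 2M tokens per optimizer step
steps = 143_000
print(batch_tokens * steps)         # 299,892,736,000 tokens seen in total
print(batch_tokens * 1_000)         # 2,097,152,000 tokens between checkpoints
```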
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
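The gap between the total and non-embedding parameter counts in this table corresponds to two embedding matrices (the input embedding and the separate output un-embedding) of shape vocabulary × model dimension. The 50,304-entry vocabulary is not stated in this card, but it is the value these figures are consistent with; a quick check:

```python
# total - non_embedding should equal 2 * vocab_size * model_dim
vocab_size = 50_304
for total, non_embed, d_model in [(70_426_624, 18_915_328, 512),
                                  (162_322_944, 85_056_000, 768)]:
    assert total - non_embed == 2 * vocab_size * d_model
```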
| {} | RichardErkhov/EleutherAI_-_pythia-160m-deduped-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:41:05+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-deduped - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M-deduped
===================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not
produce the most “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-160M-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | transformers |
# Uploaded model
- **Developed by:** waadarsh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | waadarsh/llama3-8b-nissan-magnite | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:41:23+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: waadarsh
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: waadarsh\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: waadarsh\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-deduped/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-160M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not
produce the most “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
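This repository repackages the checkpoint in bitsandbytes 8-bit form. An equivalent way to obtain 8-bit weights, shown here as a sketch because it quantizes the original EleutherAI checkpoint on the fly rather than downloading this pre-quantized upload:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the original checkpoint and quantize it to 8 bits at load time.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-160m-deduped",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m-deduped")
```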
## Training
### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
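Because every training checkpoint is published as a branch of the model repository, intermediate training states can be loaded by passing the branch name as `revision`. The following is a minimal sketch for comparing a few stages of training; the chosen steps and prompt are illustrative, not taken from this card:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

repo = "EleutherAI/pythia-160m-deduped"
tokenizer = AutoTokenizer.from_pretrained(repo)
prompt = tokenizer("The Pile is a dataset of", return_tensors="pt")

# Load a few intermediate checkpoints by branch name and compare their outputs.
for step in ("step1000", "step71000", "step143000"):
    model = GPTNeoXForCausalLM.from_pretrained(repo, revision=step)
    out = model.generate(**prompt, max_new_tokens=10)
    print(step, "->", tokenizer.decode(out[0]))
```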
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
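For reference, scores like those plotted below can be reproduced with the harness. The sketch below uses the Python entry point of a recent `lm-eval` release and is only indicative; the task list and batch size are illustrative, and the exact harness version and settings behind the published numbers are documented in the Pythia repository:
```python
import lm_eval  # pip install lm-eval

# Evaluate one Pythia checkpoint on a subset of the plotted tasks.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m-deduped,revision=step143000",
    tasks=["lambada_openai", "piqa", "sciq"],
    batch_size=16,
)
print(results["results"])
```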
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
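The parameter counts above can be sanity-checked directly from a checkpoint. A small sketch for the 160M row; note that the "total params" column includes both the input and output embedding matrices:
```python
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-160m-deduped")

total = sum(p.numel() for p in model.parameters())
embedding = sum(p.numel() for n, p in model.named_parameters() if "embed" in n)
print(f"total params:         {total:,}")              # expected ≈ 162,322,944
print(f"non-embedding params: {total - embedding:,}")  # expected ≈ 85,056,000
```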
| {} | RichardErkhov/EleutherAI_-_pythia-160m-deduped-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:41:35+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-deduped - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-160M-deduped
===================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-160M-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI
Physical Interaction: Question Answering (PIQA)
WinoGrande
AI2 Reasoning Challenge—Easy Set
SciQ
Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-160M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-160M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-160M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-160M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-160M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token-classification-llmlingua2-xlm-roberta-bctn-323_sample-5_epoch_16k_fpt_v2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
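For orientation, a hedged sketch of how the hyperparameters listed above map onto the Hugging Face `TrainingArguments` API; the output directory is a placeholder, and the dataset and preprocessing are not documented by this card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-token-classification",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,   # 8 * 16 = total train batch size of 128
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```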
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.97 | 2 | 0.4856 |
| No log | 1.94 | 4 | 0.3957 |
| No log | 2.91 | 6 | 0.3313 |
| No log | 3.88 | 8 | 0.2951 |
| No log | 4.85 | 10 | 0.2826 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/xlm-roberta-large", "model-index": [{"name": "token-classification-llmlingua2-xlm-roberta-bctn-323_sample-5_epoch_16k_fpt_v2", "results": []}]} | qminh369/token-classification-llmlingua2-xlm-roberta-bctn-323_sample-5_epoch_16k_fpt_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:43:03+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-FacebookAI/xlm-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
| token-classification-llmlingua2-xlm-roberta-bctn-323\_sample-5\_epoch\_16k\_fpt\_v2
===================================================================================
This model is a fine-tuned version of FacebookAI/xlm-roberta-large on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2826
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.2.1+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-FacebookAI/xlm-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2363
- Accuracy: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.225 | 1.0 | 1563 | 0.1971 | 0.9240 |
| 0.1461 | 2.0 | 3126 | 0.2363 | 0.9308 |
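Once published, a checkpoint like this can be used for inference through the `pipeline` API. A brief sketch; the example sentence is arbitrary, and the label names depend on the unspecified training data:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="CornCube/my_awesome_model")
print(classifier("This was a surprisingly good movie."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]; label names depend on the training data
```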
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]} | CornCube/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:43:48+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_model
==================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2363
* Accuracy: 0.9308
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# with_board_only_history_with_sys_5epoch_lr1.41e-5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
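Given the `peft` and `trl` tags, this repository most likely stores a LoRA-style adapter rather than full model weights. A hedged sketch of loading that adapter on top of the base model for inference; it assumes the repository contains PEFT adapter files and that you have access to the gated Llama-2 weights:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "wenshicheng97/with_board_only_history_with_sys_5epoch_lr1.41e-5"

base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the fine-tuned adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```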
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "with_board_only_history_with_sys_5epoch_lr1.41e-5", "results": []}]} | wenshicheng97/with_board_only_history_with_sys_5epoch_lr1.41e-5 | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-04-23T07:44:10+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
|
# with_board_only_history_with_sys_5epoch_lr1.41e-5
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# with_board_only_history_with_sys_5epoch_lr1.41e-5\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.41e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.2\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n",
"# with_board_only_history_with_sys_5epoch_lr1.41e-5\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.41e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.2\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-deduped-v0/
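Since this repository hosts a bitsandbytes 8-bit repack of the original checkpoint, loading it should follow the usual Transformers path for pre-quantized weights. A hedged sketch, assuming `bitsandbytes` and `accelerate` are installed and a CUDA device is available:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-8bits"

# The 8-bit quantization config stored with the checkpoint is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```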
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:44:11+00:00 | [
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b-deduped-v0 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
* pythia\_v0
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
on Hugging Face.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B-deduped
-------------------
### Model Details
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch '143000' corresponds
exactly to the model checkpoint on the 'main' branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B-deduped to produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
### Training
#### Training data
Pythia-2.8B-deduped was trained on the Pile after the dataset has been
globally deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so 'step1000' is the first checkpoint
for 'pythia-1.4b' that was saved (corresponding to step 500 in training), and
'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved
(corresponding to 1000 “actual” steps).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
### Evaluations
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI
Physical Interaction: Question Answering (PIQA)
WinoGrande
AI2 Reasoning Challenge – Challenge Set
SciQ
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Details\n\n\n* Developed by: EleutherAI\n* Model type: Transformer-based Language Model\n* Language: English\n* Learn more: Pythia's GitHub repository\nfor training procedure, config files, and details on how to use.\n* Library: GPT-NeoX\n* License: Apache 2.0\n* Contact: to ask questions about this model, join the EleutherAI\nDiscord, and post them in '#release-discussion'.\nPlease read the existing *Pythia* documentation before asking about it in the\nEleutherAI Discord. For general correspondence: contact@eleuther.\nai.\n\n\n\n\nEngineering details for the *Pythia Suite*. Deduped and \nnon-deduped models of a given size have the same hyperparameters. “Equivalent” \nmodels have **exactly** the same architecture, and the same number of \nnon-embedding parameters.",
"### Uses and Limitations",
"#### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. To enable the\nstudy of how language models change in the course of training, we provide\n143 evenly spaced intermediate checkpoints per model. These checkpoints are\nhosted on Hugging Face as branches. Note that branch '143000' corresponds\nexactly to the model checkpoint on the 'main' branch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"#### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “understand” human instructions.",
"#### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token deemed statistically most likely by the\nmodel need not produce the most “accurate” text. Never rely on\nPythia-2.8B-deduped to produce factually accurate output.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text,\n*even if* the prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.",
"### Training",
"#### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been\nglobally deduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"#### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for the equivalent of 143000 steps at a batch size\nof 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch\nsize of 4M tokens listed were originally trained for 71500 steps instead, with\ncheckpoints every 500 steps. The checkpoints on Hugging Face are renamed for\nconsistency with all 2M batch models, so 'step1000' is the first checkpoint\nfor 'pythia-1.4b' that was saved (corresponding to step 500 in training), and\n'step1000' is likewise the first 'pythia-6.9b' checkpoint that was saved\n(corresponding to 1000 “actual” steps). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.",
"### Evaluations\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge – Challenge Set\n\n\n\nSciQ\n",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1.4b - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1.4b/
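A minimal usage sketch (an assumption, not part of the original card): the pre-quantized 4-bit weights can be loaded directly with `transformers`, provided `bitsandbytes` and a CUDA GPU are available.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-1.4b-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The bitsandbytes quantization config ships with the checkpoint, so no extra
# quantization arguments are needed; device_map="auto" places it on the GPU.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```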
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1.4B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on Pythia-1.4B to
produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the weights saved on the "step3000" branch, i.e. the checkpoint written
# after 3,000 training steps, and cache them locally.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The tokenizer is the same for every Pythia model and revision.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1.4B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
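As an illustration, a single model/task evaluation can be reproduced roughly as
follows (a sketch assuming lm-eval >= 0.4; the harness API has changed between
releases):

```python
import lm_eval

# Evaluate the Hugging Face checkpoint on two of the tasks plotted below.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b",
    tasks=["lambada_openai", "piqa"],
)
print(results["results"])
```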
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-1.4b-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:44:16+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1.4b - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1.4B
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on Pythia-1.4B to
produce factually accurate output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
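A condensed sketch of that code (assuming the Hugging Face transformers API):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# The "step3000" branch holds the weights saved after 3,000 training steps.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped", revision="step3000"
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped", revision="step3000"
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0]))
```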
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-1.4B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI
Physical Interaction: Question Answering (PIQA)
WinoGrande
AI2 Reasoning Challenge—Easy Set
SciQ
Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1.4B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1.4B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1.4B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1.4B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Ppororo/paper_model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks paper using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumed usage, not an official snippet): sample an image
# with the instance prompt this model was trained on.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Ppororo/paper_model", torch_dtype=torch.float16
).to("cuda")
pipe("a photo of sks paper").images[0].save("sks_paper.png")
```
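Prompts should contain the rare identifier token `sks` together with the class noun `paper` (as in the instance prompt above, "a photo of sks paper"), since that is the phrasing the DreamBooth weights were tuned on.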
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true, "instance_prompt": "a photo of sks paper"} | Ppororo/paper_model | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T07:44:57+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - Ppororo/paper_model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks paper using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - Ppororo/paper_model\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks paper using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - Ppororo/paper_model\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks paper using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1.4b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1.4b/
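A brief usage sketch (an assumption, not part of the original card; the repository id below mirrors the companion 4-bit repo's naming and should be checked against this repo's actual id):

```python
from transformers import pipeline

# The bnb 8-bit quantization config ships with the checkpoint itself;
# bitsandbytes and a CUDA GPU are assumed. The model id here is assumed
# from the naming of the companion 4-bit repository.
generator = pipeline(
    "text-generation",
    model="RichardErkhov/EleutherAI_-_pythia-1.4b-8bits",
    device_map="auto",
)
print(generator("Hello, I am", max_new_tokens=20)[0]["generated_text"])
```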
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1.4B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-1.4B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
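Because this repository packages the checkpoint as a bitsandbytes 8-bit quantization, a minimal loading sketch might look like the following; the repository id and saved quantization config are assumptions, and `bitsandbytes` plus a CUDA GPU are required:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-1.4b-8bits"  # this quantized repository
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```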
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1.4B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-1.4b-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:45:37+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1.4b - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1.4B
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-1.4B to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-1.4B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1.4B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1.4B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1.4B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1.4B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1.4B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1.4B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1.4B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1.4B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1.4B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-1.4B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asril-pegasus-xlsum-skripsi
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6919
## Model description
More information needed
## Intended uses & limitations
More information needed
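In the absence of a fuller description, one plausible use of this checkpoint is abstractive summarization via the `transformers` pipeline; the task choice and generation settings in this sketch are assumptions rather than documented behavior:

```python
from transformers import pipeline

# Hedged sketch: treat the fine-tuned checkpoint as a summarization model.
summarizer = pipeline("summarization", model="asrilmurdian/asril-pegasus-xlsum-skripsi")

article = "..."  # replace with the article text to summarize
summary = summarizer(article, max_length=64, min_length=16, do_sample=False)[0]["summary_text"]
print(summary)
```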
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough code equivalent is sketched after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
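For reference, the hyperparameters above roughly correspond to the following hypothetical `Seq2SeqTrainingArguments`; the output directory name is an assumption, and the Adam settings listed above match the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="asril-pegasus-xlsum-skripsi",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```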
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5256 | 0.1046 | 1000 | 3.4857 |
| 3.699 | 0.2092 | 2000 | 3.1625 |
| 3.4046 | 0.3138 | 3000 | 2.9968 |
| 3.2456 | 0.4184 | 4000 | 2.8834 |
| 3.126 | 0.5230 | 5000 | 2.8127 |
| 3.055 | 0.6275 | 6000 | 2.7644 |
| 3.005 | 0.7321 | 7000 | 2.7281 |
| 2.9597 | 0.8367 | 8000 | 2.7060 |
| 2.9627 | 0.9413 | 9000 | 2.6919 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "google/pegasus-xsum", "model-index": [{"name": "asril-pegasus-xlsum-skripsi", "results": []}]} | asrilmurdian/asril-pegasus-xlsum-skripsi | null | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-xsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:48:44+00:00 | [] | [] | TAGS
#transformers #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-xsum #autotrain_compatible #endpoints_compatible #region-us
| asril-pegasus-xlsum-skripsi
===========================
This model is a fine-tuned version of google/pegasus-xsum on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.6919
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.1.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-xsum #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | # Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* /workspace/ses
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range: [0, 32]
- model: /workspace/ses
layer_range: [0, 32]
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
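The merged checkpoint can be loaded like any Llama-3-based causal LM. The sketch below is a hedged example; whether the merge retains the Llama-3 chat template is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taozi555/llama3-Mirage-Walker-8b-v0.2-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Assumes the Llama-3 Instruct chat template survived the merge.
messages = [{"role": "user", "content": "Write a two-sentence scene set in a desert city."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```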
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B-Instruct"]} | taozi555/llama3-Mirage-Walker-8b-v0.2-slerp | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:48:45+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Untitled Model (1)
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* meta-llama/Meta-Llama-3-8B-Instruct
* /workspace/ses
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Untitled Model (1)\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct\n* /workspace/ses",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Untitled Model (1)\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct\n* /workspace/ses",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-deduped - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-deduped/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-2.8B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
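Because this repository is a bitsandbytes 4-bit quantization, a minimal sketch for loading it, or for reproducing a comparable quantization from the original checkpoint, might look like this; the repository layout and compute dtype are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Option 1: load this pre-quantized repository directly (requires `bitsandbytes` and a CUDA GPU).
model = AutoModelForCausalLM.from_pretrained(
    "RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-4bits", device_map="auto"
)

# Option 2: quantize the original checkpoint on the fly with a comparable 4-bit config.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-2.8b-deduped", quantization_config=bnb_config, device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b-deduped")
inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```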
## Training
### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:49:20+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b-deduped - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B-deduped
===================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
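The branch names described above follow a simple pattern, which a small sketch can spell out (this only restates the naming scheme given in this section):

```python
# Reconstruct the 154 checkpoint branch names: step0, ten log-spaced steps,
# and 143 evenly spaced steps from step1000 to step143000.
log_spaced = [f"step{2 ** i}" for i in range(10)]                 # step1 ... step512
evenly_spaced = [f"step{i}" for i in range(1000, 143001, 1000)]   # step1000 ... step143000
branches = ["step0"] + log_spaced + evenly_spaced
assert len(branches) == 154
```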
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
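A minimal version of that snippet, using the standard Transformers API ('step3000' is simply the example revision; any of the 154 branches can be substituted):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the third checkpoint ("step3000") of pythia-70m-deduped.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Generate and decode a short continuation of a prompt.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```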
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-2.8B-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
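A quick arithmetic check shows the token counts quoted above are consistent with one another:

```python
# Consistency check of the training-token figures quoted above.
tokens_per_step = 2_097_152          # batch size: 2M tokens per step
total_steps = 143_000
checkpoint_interval = 1_000          # a checkpoint is saved every 1000 steps

print(total_steps * tokens_per_step)           # 299,892,736,000 tokens seen during training
print(checkpoint_interval * tokens_per_step)   # 2,097,152,000 tokens between saved checkpoints
```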
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models both used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with an LR that decays to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-deduped/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-70M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
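Because each checkpoint is just a branch, studying behavior across training is mostly a matter of looping over revisions. A small sketch (the steps chosen here are arbitrary):

```python
from transformers import GPTNeoXForCausalLM

# Load a handful of training-step revisions of the same model for comparison.
steps = ["step0", "step512", "step143000"]
checkpoints = {
    step: GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped", revision=step)
    for step in steps
}
```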
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
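For illustration, a minimal fine-tuning sketch with the Transformers `Trainer` might look like the following. The dataset, sequence length, and hyperparameters are placeholders rather than recommendations, and `datasets` plus `accelerate` are assumed to be installed:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          GPTNeoXForCausalLM, Trainer, TrainingArguments)

model_name = "EleutherAI/pythia-70m-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # the tokenizer ships without a pad token
model = GPTNeoXForCausalLM.from_pretrained(model_name)

# Placeholder corpus: a 1% slice of WikiText-2, tokenized to fixed-length inputs.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=raw.column_names,
).filter(lambda example: len(example["input_ids"]) > 0)  # drop empty lines

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./pythia-70m-finetuned",
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```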
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the model and tokenizer from the "step3000" revision (checkpoint branch).
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
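Since this repository hosts a bitsandbytes 4-bit quantization of the model, the quantized copy can be loaded in much the same way. A hedged sketch: it assumes `bitsandbytes` and `accelerate` are installed, a CUDA GPU is available, and the quantization config stored with the weights is picked up automatically on load:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/EleutherAI_-_pythia-70m-deduped-4bits"

# The 4-bit quantization config saved in the repository is applied on load.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```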
## Training
### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models both used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with an LR that decays to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
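The gap between the two columns is accounted for by the untied input and output embedding matrices. For example, for the 70M model (the padded vocabulary size of 50,304 is an assumption taken from the GPT-NeoX tokenizer setup, not from the table):

```python
# Embedding parameters for the 70M model: two untied (vocab_size × d_model) matrices.
vocab_size, d_model = 50_304, 512              # vocabulary size is an assumption
embedding_params = 2 * vocab_size * d_model    # 51,511,296
print(18_915_328 + embedding_params)           # 70,426,624, i.e. the "total params" column
```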
| {} | RichardErkhov/EleutherAI_-_pythia-70m-deduped-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:49:24+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-70m-deduped - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-70M-deduped
==================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-70M-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models both used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with an LR that decays to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-deduped/
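Since this repository packages the original checkpoint as bitsandbytes 8-bit weights, loading it generally follows the standard `transformers` + `bitsandbytes` path. The snippet below is an illustrative sketch rather than part of the original card: the repository ID is taken from this card's metadata, and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo ID assumed from this card's metadata; the original full-precision
# weights live at EleutherAI/pythia-70m-deduped.
repo_id = "RichardErkhov/EleutherAI_-_pythia-70m-deduped-8bits"

model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs)[0]))
```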
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-70M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
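As a rough cross-check of the table above, the non-embedding parameter counts can be reproduced from the Layers and Model Dim columns, assuming the standard GPT-NeoX parametrization (fused QKV plus output projection, a 4× MLP, biases, and two LayerNorms per layer, plus a final LayerNorm). This sketch is illustrative and not part of the original card:

```python
def non_embedding_params(layers: int, d_model: int) -> int:
    # 12*d^2 weights per layer (QKV 3d^2 + output projection d^2 + MLP 8d^2),
    # 13*d biases/LayerNorm parameters per layer, plus a final LayerNorm (2*d).
    per_layer = 12 * d_model * d_model + 13 * d_model
    return layers * per_layer + 2 * d_model

assert non_embedding_params(6, 512) == 18_915_328        # Pythia-70M
assert non_embedding_params(24, 1024) == 302_311_424     # Pythia-410M
assert non_embedding_params(32, 2560) == 2_517_652_480   # Pythia-2.8B
```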
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
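For checkpoint-level analyses, the revision/branch names follow the step numbering described above; a minimal sketch (not part of the original card) of loading a handful of checkpoints might look like:

```python
from transformers import GPTNeoXForCausalLM

# Branch names assumed to follow the "step<N>" convention described above.
for step in (0, 512, 1000, 143000):
    model = GPTNeoXForCausalLM.from_pretrained(
        "EleutherAI/pythia-70m-deduped",
        revision=f"step{step}",
    )
    n_params = sum(p.numel() for p in model.parameters())
    print(f"step{step}: loaded {n_params:,} parameters")
```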
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
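A quick arithmetic check of the figures above (illustrative only): 143,000 steps at 2,097,152 tokens per batch reproduces the stated total of 299,892,736,000 tokens, and a checkpoint every 1,000 steps corresponds to one every 2,097,152,000 tokens.

```python
batch_tokens = 2_097_152   # 2M-token batch size
steps = 143_000

assert steps * batch_tokens == 299_892_736_000   # total training tokens
assert 1_000 * batch_tokens == 2_097_152_000     # tokens between checkpoints
```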
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
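To reproduce one of these numbers locally, the harness can also be driven from Python; the call below is a sketch that assumes a recent `lm-eval` release (the API has changed across versions) and is not part of the original card.

```python
import lm_eval  # pip install lm-eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-70m-deduped,revision=step143000",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_easy", "sciq"],
    batch_size=16,
)
print(results["results"])
```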
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-70m-deduped-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:49:50+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-70m-deduped - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-70M-deduped
==================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-70M-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-70M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-70M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-70M-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-70M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-70M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "dpo"]} | NBA55/Experiment_with_trained_model_Final_DPO_for_all_3_issues-epoch-10 | null | [
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:51:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #trl #dpo #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #trl #dpo #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-deduped/
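As an alternative to downloading these pre-quantized weights, the original checkpoint can be quantized to 8-bit on the fly. The sketch below is illustrative, not part of the original card, and assumes `bitsandbytes` and `accelerate` are installed alongside a CUDA-capable GPU.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-2.8b-deduped",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b-deduped")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs)[0]))
```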
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-2.8B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
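Because every checkpoint is published as a branch of the model repository, intermediate
training states can be loaded by name; a hedged sketch comparing an early and the final
checkpoint (repository id assumed as in the Quickstart; each load pulls the full 2.8B weights):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

repo = "EleutherAI/pythia-2.8b-deduped"
tokenizer = AutoTokenizer.from_pretrained(repo)
prompt = tokenizer("The capital of France is", return_tensors="pt")

# Each revision below is one of the checkpoint branches described above.
for revision in ["step1000", "step143000"]:
    model = GPTNeoXForCausalLM.from_pretrained(repo, revision=revision)
    out = model.generate(**prompt, max_new_tokens=10)
    print(revision, tokenizer.decode(out[0]))
```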
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
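To reproduce a single data point from these plots, the harness can also be driven from
Python; a hedged sketch (the entry point and argument names here assume a recent 0.4.x
release of `lm-eval` and may differ in other versions):

```python
import lm_eval

# Score the final checkpoint on one of the tasks plotted above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-2.8b-deduped",
    tasks=["lambada_openai"],
    batch_size=8,
)
print(results["results"]["lambada_openai"])
```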
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
  models of size 2.8B parameters or smaller had a learning rate (LR) schedule
  which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
  12B models all used an LR schedule which decayed to a minimum LR of 0. In
  the redone training runs, we rectified this inconsistency: all models were
  trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
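The split between total and non-embedding parameters in the table can be checked against a
checkpoint directly; a rough sketch, assuming the attribute names of the Hugging Face
GPT-NeoX implementation (`gpt_neox.embed_in` for the input embedding, `embed_out` for the
output projection):

```python
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-2.8b-deduped")

total = sum(p.numel() for p in model.parameters())
embedding = (
    model.gpt_neox.embed_in.weight.numel()   # input token embedding
    + model.embed_out.weight.numel()         # output (un-embedding) projection
)
print(f"total params:         {total:,}")
print(f"non-embedding params: {total - embedding:,}")
```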
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:51:10+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b-deduped - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B-deduped
===================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-2.8B-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-2.8B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
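No official usage example has been provided. Judging only from the repository tags (a
`vision-encoder-decoder` checkpoint) and its name, it appears to be a Donut-style OCR model;
the sketch below is a hypothetical starting point, not the authors' recipe, and the processor
class, input image, and generation settings are all assumptions:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "HamAndCheese82/math-ocr-donut-v2"          # this repository
processor = DonutProcessor.from_pretrained(repo)   # assumes processor files are included
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("equation.png").convert("RGB")  # any image containing math text
pixel_values = processor(image, return_tensors="pt").pixel_values

outputs = model.generate(pixel_values, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```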
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["validation_1", "validation_2", "validation_3", "validation_4", "validation_5", "validation_6", "validation_7"]} | HamAndCheese82/math-ocr-donut-v2 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"validation_1",
"validation_2",
"validation_3",
"validation_4",
"validation_5",
"validation_6",
"validation_7",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:52:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #validation_1 #validation_2 #validation_3 #validation_4 #validation_5 #validation_6 #validation_7 #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #validation_1 #validation_2 #validation_3 #validation_4 #validation_5 #validation_6 #validation_7 #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abhijithgururaj/blip2-opt-2.7b-french-pre-lora-abhijith | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:52:13+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-2.8B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
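The 2.8B row of this table can be cross-checked against the published configuration; a small
sketch (field names follow the `GPTNeoXConfig` used by these checkpoints):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("EleutherAI/pythia-2.8b")
# Expected from the 2.8B row above: 32 layers, model dim 2560, 32 heads.
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
```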
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on Pythia-2.8B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
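Since the repository this card accompanies distributes a bitsandbytes 4-bit quantization of Pythia-2.8B, a minimal loading sketch is given below. This is not part of the upstream Pythia documentation: it assumes recent `transformers`, `accelerate`, and `bitsandbytes` releases, and it quantizes the original checkpoint on the fly rather than describing exactly how the 4-bit repository was produced.

```python
# Illustrative sketch only: loading Pythia-2.8B in 4-bit with bitsandbytes.
# Assumes recent `transformers`, `accelerate`, and `bitsandbytes` packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit on load
    bnb_4bit_compute_dtype=torch.float16,   # compute in fp16
)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-2.8b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```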
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
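The token counts quoted above are internally consistent; the following back-of-the-envelope check (plain arithmetic, no external assumptions) reproduces them:

```python
# Cross-check of the training-token figures quoted in this section.
tokens_per_step = 2_097_152                  # batch size of "2M" tokens
total_steps = 143_000
print(tokens_per_step * total_steps)         # 299,892,736,000 tokens seen during training
print(2_097_152_000 // tokens_per_step)      # checkpoints saved every 1,000 steps
```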
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
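For readers who want to reproduce a subset of these numbers, a sketch using the harness' Python API is shown below. The exact entry points vary between harness versions, so treat the function and argument names as assumptions against a recent (v0.4-style) release rather than a guaranteed interface.

```python
# Hypothetical sketch: evaluating a Pythia checkpoint with the LM Evaluation Harness.
# Entry-point names assume a recent lm-eval release; adjust for your installed version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-2.8b,revision=step143000",
    tasks=["lambada_openai", "piqa"],
)
print(results["results"])
```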
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models both used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now
decay the LR to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:52:48+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-2.8B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models both used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now
decay the LR to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-410M
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
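If you want to inspect these branches programmatically, something along the lines of the sketch below should work; it assumes the `huggingface_hub` package is installed and that the branch layout matches the description above.

```python
# Illustrative: list the checkpoint branches of pythia-410m straight from the Hub.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("EleutherAI/pythia-410m")
step_branches = sorted(
    (b.name for b in refs.branches if b.name.startswith("step")),
    key=lambda name: int(name.removeprefix("step")),
)
print(len(step_branches), step_branches[:3], step_branches[-1])  # expect 154 branches
```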
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
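The quickstart above uses the default greedy decoding; for open-ended sampling you can pass standard generation arguments (these are ordinary `transformers` options, not Pythia-specific settings), for example:

```python
# Sampling-based generation, reusing the model and tokenizer from the quickstart above.
inputs = tokenizer("The Pile is", return_tensors="pt")
tokens = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.8,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(tokens[0]))
```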
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-410M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
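If you want to verify the shared-tokenizer claim locally, a quick optional check is sketched below; it only assumes both repositories are downloadable.

```python
# Optional check that pythia-410m and GPT-NeoX-20B ship the same tokenizer vocabulary.
from transformers import AutoTokenizer

tok_pythia = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")
tok_neox = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
print(tok_pythia.get_vocab() == tok_neox.get_vocab())  # expected: True
```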
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models both used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now
decay the LR to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-410m-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:53:20+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-410M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models both used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now
decay the LR to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-classification | setfit |
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Tech Support | <ul><li>"I ' m trying t0 place an order online but the website reep8 crashing. Gan y0o assist me?"</li><li>"My online urdek won ' t go thk0u9h - is there an i8soe with yuuk payment processing?"</li><li>"I ' m 9ettin9 an erkok when trying t0 redeem my loyalty p0int8. Who can a88ist me?"</li></ul> |
| HR | <ul><li>"I ' m considering 8obmittin9 my two - week notice. What i8 the typical resignation pk0ce8s?"</li><li>"I ' m 1o0ring to switch t0 a part - time schedule. What are the requirements?"</li><li>"I ' d 1ire to fi1e a fokma1 complaint abuot workplace discrimination. Who do I contact?"</li></ul> |
| Product | <ul><li>'What are your best practices f0k maintaining fu0d 9oa1ity and freshness?'</li><li>'What 6kand of nut butters du you carry that are peanot - fkee?'</li><li>'Do yuo have any seasonal or 1imited - time products in stock right now?'</li></ul> |
| Returns | <ul><li>'My 9r0ceky delivery cuntained items that were spoiled or pa8t their expiration date. How do I 9et replacements?'</li><li>"1 ' d like to exchange a product 1 bought in - 8toke. Do I need to bring the uki9inal receipt?"</li><li>'1 keceived a damaged item in my online okdek. How do I go about getting a kefond?'</li></ul> |
| Logistics | <ul><li>'I have a question about your h01iday 8hippin9 deadlines and pki0kiti2ed delivery options'</li><li>'I need to change the de1iveky address f0k my upcoming 0kder. How can I d0 that?'</li><li>'Can you exp1ain your pu1icie8 around item8 that are out uf stock or on 6ackokdek?'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8491 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("wikd/nlp_aug")
# Run inference
preds = model("Can you tell me about any on9uin9 promotions uk discounts on organic pk0doce?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 10 | 16.125 | 28 |
| Label | Training Sample Count |
|:-------------|:----------------------|
| Returns | 8 |
| Tech Support | 8 |
| Logistics | 8 |
| HR | 8 |
| Product | 8 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
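
For reference, the hyperparameters above map directly onto `setfit.TrainingArguments`. The sketch below shows how a comparable run could be set up; the tiny inline dataset is only illustrative (the actual training data is not published in this card), so treat it as an assumption rather than the exact training script.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot training set (the real run used 8 labelled examples per class).
train_ds = Dataset.from_dict({
    "text": [
        "1 keceived a damaged item in my online okdek. How do I go about getting a kefond?",
        "1 ' d like to exchange a product 1 bought in - 8toke. Do I need to bring the uki9inal receipt?",
        "I need to change the de1iveky address f0k my upcoming 0kder. How can I d0 that?",
        "I have a question about your h01iday 8hippin9 deadlines and pki0kiti2ed delivery options",
    ],
    "label": ["Returns", "Returns", "Logistics", "Logistics"],
})

# Sentence Transformer body; SetFit attaches a LogisticRegression head by default.
model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")

args = TrainingArguments(
    batch_size=(32, 32),
    num_epochs=(10, 10),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, metric="accuracy")
trainer.train()
```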
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.2231 | - |
| 1.25 | 50 | 0.065 | - |
| 2.5 | 100 | 0.0065 | - |
| 3.75 | 150 | 0.0019 | - |
| 5.0 | 200 | 0.0032 | - |
| 6.25 | 250 | 0.0026 | - |
| 7.5 | 300 | 0.0009 | - |
| 8.75 | 350 | 0.0018 | - |
| 10.0 | 400 | 0.0018 | - |
### Framework Versions
- Python: 3.11.8
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.2
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "BAAI/bge-small-en-v1.5", "widget": [{"text": "Can you tell me about any on9uin9 promotions uk discounts on organic pk0doce?"}, {"text": "I bought 80methin9 that didn ' t meet my expectations. 18 there a way to 9et a partial kefond?"}, {"text": "I ' d like to place a 1ar9e urdek for my business. Do you offer any special bulk 8hippin9 rates?"}, {"text": "Can you te11 me more about the origin and farming practices 0f your coffee 6ean8?"}, {"text": "1 ' d like to exchange a product 1 bought in - 8toke. Do I need to bring the uki9inal receipt?"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8490566037735849, "name": "Accuracy"}]}]}]} | wikd/nlp_aug | null | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] | null | 2024-04-23T07:53:23+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-BAAI/bge-small-en-v1.5 #model-index #region-us
| SetFit with BAAI/bge-small-en-v1.5
==================================
This is a SetFit model that can be used for Text Classification. This SetFit model uses BAAI/bge-small-en-v1.5 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: BAAI/bge-small-en-v1.5
* Classification head: a LogisticRegression instance
* Maximum Sequence Length: 512 tokens
* Number of Classes: 5 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (32, 32)
* num\_epochs: (10, 10)
* max\_steps: -1
* sampling\_strategy: oversampling
* body\_learning\_rate: (2e-05, 1e-05)
* head\_learning\_rate: 0.01
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
### Training Results
### Framework Versions
* Python: 3.11.8
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* Transformers: 4.40.0
* PyTorch: 2.2.2
* Datasets: 2.19.0
* Tokenizers: 0.19.1
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: BAAI/bge-small-en-v1.5\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 5 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (32, 32)\n* num\\_epochs: (10, 10)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.11.8\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.2\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-BAAI/bge-small-en-v1.5 #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: BAAI/bge-small-en-v1.5\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 5 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (32, 32)\n* num\\_epochs: (10, 10)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.11.8\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.2\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m/
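
Below is a minimal, unofficial sketch of how this 8-bit checkpoint could be loaded with `transformers`; it assumes a CUDA GPU with the `bitsandbytes` and `accelerate` packages installed, and it is not a snippet provided by the quantizer or EleutherAI.

```python
# Hedged sketch: load the pre-quantized 8-bit checkpoint from this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/EleutherAI_-_pythia-410m-8bits"  # this repository
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

An equivalent 8-bit setup can also be reproduced from the original `EleutherAI/pythia-410m` checkpoint by passing `BitsAndBytesConfig(load_in_8bit=True)` as `quantization_config`.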
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-410M
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-410M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
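
As a rough, unofficial guide (the exact commands live in the Pythia and lm-evaluation-harness repositories, and flags differ between harness versions), a subset of these benchmarks could be re-run along these lines:

```bash
pip install lm-eval

# Evaluate a given checkpoint/revision on a few of the tasks plotted below.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-410m,revision=step143000 \
    --tasks lambada_openai,piqa,winogrande,arc_easy,sciq \
    --batch_size 16
```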
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-410m-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:54:09+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-410M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ
Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-410M will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-410M.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped/
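
As an unofficial sketch (not taken from the quantizer or EleutherAI), an equivalent 4-bit setup can be produced from the original checkpoint with `transformers` and `bitsandbytes`; a CUDA GPU with `bitsandbytes` and `accelerate` installed is assumed.

```python
# Hedged sketch: quantize the original Pythia-410M-deduped checkpoint to 4-bit on the fly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-410m-deduped",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-deduped")
```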
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-410M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the weights from the "step3000" branch (an intermediate training checkpoint)
# and cache them locally.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The tokenizer is identical across checkpoints; pinning the revision simply keeps
# the cached files for this checkpoint together.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Greedy generation from a short prompt.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
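As a quick consistency check, the step count, batch size, and checkpoint spacing quoted above agree with the stated total token count (the arithmetic below is added for convenience and is not part of the original card):

$$
143\,000 \times 2\,097\,152 = 299\,892\,736\,000 \text{ tokens},
\qquad
\frac{2\,097\,152\,000}{2\,097\,152} = 1000 \text{ steps per checkpoint interval}
$$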
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-410m-deduped-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:54:13+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m-deduped - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M-deduped
===================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-410M-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means XNPythia-410M-dedupedAME will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means XNPythia-410M-dedupedAME will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
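The card leaves this section blank. As a placeholder, here is a minimal sketch, assuming the repository hosts a full causal-LM checkpoint (rather than a PEFT adapter) that works with the standard `transformers` pipeline:

```python
from transformers import pipeline

# Repository id taken from this card's metadata; everything else here is an assumption.
generator = pipeline(
    "text-generation",
    model="NBA55/Experiment_with_trained_model_Final_DPO_for_all_3_issues-epoch-2",
)

print(generator("Hello, how can I help you today?", max_new_tokens=50)[0]["generated_text"])
```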
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "dpo"]} | NBA55/Experiment_with_trained_model_Final_DPO_for_all_3_issues-epoch-2 | null | [
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:54:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #trl #dpo #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #trl #dpo #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # Llama-3-Smolphin-8b
<figure>

</figure>
This is a merge of pre-trained language models https://huggingface.co/abacusai/Llama-3-Smaug-8B and https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
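For readers unfamiliar with SLERP (spherical linear interpolation), the sketch below illustrates the core idea: interpolating between two same-shaped weight tensors along a great circle rather than a straight line. It is an illustrative simplification, not mergekit's actual implementation, and the function name is our own:

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two same-shaped weight tensors."""
    a, b = w0.flatten().float(), w1.flatten().float()
    # Angle between the two flattened weight vectors.
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0 + eps, 1.0 - eps))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel vectors: fall back to plain linear interpolation
        out = (1.0 - t) * a + t * b
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(w0.shape).to(w0.dtype)
```

mergekit applies this kind of interpolation per tensor, with the interpolation factor `t` varied across layers and module types according to the `parameters.t` schedule in the configuration below.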
### Models Merged
The following models were included in the merge:
* https://huggingface.co/abacusai/Llama-3-Smaug-8B
* https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: cognitivecomputations/dolphin-2.9-llama3-8b
layer_range: [0, 32]
- model: abacusai/Llama-3-Smaug-8B
layer_range: [0, 32]
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` | {"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["abacusai/Llama-3-Smaug-8B", "cognitivecomputations/dolphin-2.9-llama3-8b"]} | EryriLabs/Llama-3-Smolphin-8b | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:abacusai/Llama-3-Smaug-8B",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T07:54:32+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-abacusai/Llama-3-Smaug-8B #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Llama-3-Smolphin-8b
<figure>
!Smolphin
</figure>
This is a merge of pre-trained language models URL and URL created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* URL
* URL
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Llama-3-Smolphin-8b\n\n<figure>\n\n!Smolphin \n\n</figure>\n\nThis is a merge of pre-trained language models URL and URL created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* URL\n* URL",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-abacusai/Llama-3-Smaug-8B #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama-3-Smolphin-8b\n\n<figure>\n\n!Smolphin \n\n</figure>\n\nThis is a merge of pre-trained language models URL and URL created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* URL\n* URL",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
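The card leaves this section blank. Below is a minimal sketch, assuming this repository holds a PEFT adapter trained on top of the base model listed in the metadata (Salesforce/codegen-350M-mono); the adapter compatibility and the prompt are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Salesforce/codegen-350M-mono"  # base model from the card metadata
adapter_id = "Denis641/checkpoint1000"    # this repository (assumed to hold a PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```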
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "Salesforce/codegen-350M-mono"} | Denis641/checkpoint1000 | null | [
"peft",
"safetensors",
"codegen",
"arxiv:1910.09700",
"base_model:Salesforce/codegen-350M-mono",
"region:us"
] | null | 2024-04-23T07:54:33+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #codegen #arxiv-1910.09700 #base_model-Salesforce/codegen-350M-mono #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #codegen #arxiv-1910.09700 #base_model-Salesforce/codegen-350M-mono #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-410M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
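Because every checkpoint is exposed as a repository branch, the available revisions can be enumerated programmatically; a minimal sketch, assuming the `huggingface_hub` package is installed, is:

```python
from huggingface_hub import list_repo_refs

# List every branch of the repository; besides "main" this should include
# the step0, step1, ..., step143000 checkpoint branches described above.
refs = list_repo_refs("EleutherAI/pythia-410m-deduped")
branches = sorted(ref.name for ref in refs.branches)
print(f"{len(branches)} branches, for example: {branches[:5]}")
```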
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the weights saved at training step 3000 (the 'step3000' branch).
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The tokenizer is identical across checkpoints and model sizes.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
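The quickstart above targets the full-precision EleutherAI checkpoints. Since this repository instead packages the weights as a bitsandbytes 8-bit checkpoint, it can typically be loaded directly through the Transformers library. The sketch below assumes a recent `transformers` with `bitsandbytes` and `accelerate` installed, a CUDA device, and that the quantization config is saved alongside the weights (as is usual for bitsandbytes exports):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/EleutherAI_-_pythia-410m-deduped-8bits"

# The 8-bit quantization settings are expected to be stored with the
# checkpoint itself, so no explicit BitsAndBytesConfig is passed here;
# device_map="auto" places the quantized weights on the available GPU.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```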
## Training
### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
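The token and checkpoint counts quoted above are mutually consistent; a quick arithmetic check in plain Python:

```python
# Sanity-check the training arithmetic quoted above.
tokens_per_step = 2_097_152              # 2M-token batch size
total_steps = 143_000
tokens_between_checkpoints = 2_097_152_000
evenly_spaced_checkpoints = 143          # step1000 ... step143000

total_tokens = tokens_per_step * total_steps
assert total_tokens == 299_892_736_000
assert tokens_between_checkpoints == 1_000 * tokens_per_step
assert evenly_spaced_checkpoints * tokens_between_checkpoints == total_tokens
print(f"{total_tokens:,} tokens seen during training")
```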
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with the LR decaying to a minimum of 0.1× their maximum LR (a sketch
of such a schedule follows this list).
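As a rough illustration of the corrected schedule, the sketch below applies a cosine decay with a 0.1× floor to the 410M model's maximum LR of 3.0 x 10<sup>-4</sup>; the exact schedules are defined in the GPT-NeoX config files in the Pythia repository, so this is only an approximation.

```python
import math

def lr_with_floor(step, total_steps=143_000, max_lr=3.0e-4, floor_frac=0.1):
    """Cosine decay from max_lr down to floor_frac * max_lr (illustrative only)."""
    min_lr = floor_frac * max_lr
    progress = min(step, total_steps) / total_steps
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_with_floor(0))        # 3.0e-4 at the start of training
print(lr_with_floor(143_000))  # 3.0e-5, i.e. 0.1x the maximum LR
```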
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
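The difference between the total and non-embedding counts in this table is the input embedding plus the untied output projection. A short sketch, assuming each is of shape vocab × model dim with a padded vocabulary of 50,304 tokens, reproduces the totals from the model dimensions listed earlier (the 6.9B and 12B rows do not fit this simple formula with the same vocabulary size, so they are omitted):

```python
# Reconcile total vs. non-embedding parameter counts, assuming an input
# embedding plus an untied output projection, each of shape vocab x d_model,
# with a padded vocabulary of 50,304 tokens.
VOCAB = 50_304

models = {
    # name: (non-embedding params, model dim)
    "70M":  (18_915_328,     512),
    "160M": (85_056_000,     768),
    "410M": (302_311_424,   1024),
    "1B":   (805_736_448,   2048),
    "1.4B": (1_208_602_624, 2048),
    "2.8B": (2_517_652_480, 2560),
}

for name, (non_embedding, d_model) in models.items():
    total = non_embedding + 2 * VOCAB * d_model
    print(f"Pythia-{name}: {total:,} total parameters")
# Pythia-410M -> 405,334,016, matching the table above.
```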
| {} | RichardErkhov/EleutherAI_-_pythia-410m-deduped-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:54:55+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-410m-deduped - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-410M-deduped
===================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-410M-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means XNPythia-410M-dedupedAME will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-410M-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-410M-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means XNPythia-410M-dedupedAME will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-410M-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-410M-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | giantdev/sn17-m1h1 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-23T07:54:57+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-2.8B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
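Since this repository hosts an 8-bit bitsandbytes export of Pythia-2.8B (see the header
of this card), the quantized weights can also be loaded directly. The sketch below is a
hedged example, assuming a CUDA GPU, the `bitsandbytes` and `accelerate` packages, and
that the quantization settings are stored with the checkpoint, as is typical for
bitsandbytes exports:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-2.8b-8bits"  # id taken from this page's metadata

# The 8-bit quantization settings are read from the checkpoint's stored config.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```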
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
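As a sanity check, the token counts quoted above are internally consistent; a few
illustrative assertions:
```python
# 143,000 steps at a 2M-token batch, with a checkpoint every 2,097,152,000 tokens.
assert 143_000 * 2_097_152 == 299_892_736_000       # total training tokens
assert 299_892_736_000 // 2_097_152_000 == 143      # evenly spaced checkpoints
assert 2_097_152_000 // 2_097_152 == 1_000          # i.e. one checkpoint every 1,000 steps
```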
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
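A subset of these evaluations can be reproduced with the harness's Python entry point,
roughly as sketched below; the API shown corresponds to recent lm-evaluation-harness
releases (v0.4.x) and may differ from the version used to produce the published numbers,
so treat the exact call as an assumption:
```python
import lm_eval  # pip install lm-eval

# Hedged sketch: evaluate the original checkpoint on a few of the tasks plotted below.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-2.8b",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_easy", "sciq"],
)
print(results["results"])
```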
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
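Assuming the gap between total and non-embedding parameters is exactly the untied input
and output embedding matrices (2 × vocab × d_model, the GPT-NeoX layout), the padded
vocabulary size can be backed out from the tables above; the derivation below is
illustrative only:
```python
# Pythia-2.8B: parameter counts from the table above, d_model from the engineering table.
total, non_embedding, d_model = 2_775_208_960, 2_517_652_480, 2560

embedding_params = total - non_embedding          # 257,556,480
padded_vocab = embedding_params // (2 * d_model)  # input + output embedding rows
print(padded_vocab)                               # 50304
# Larger models may pad the vocabulary differently, so repeat this per size.
```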
| {} | RichardErkhov/EleutherAI_-_pythia-2.8b-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:55:06+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-2.8b - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-2.8B
===========
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
The Pile was not deduplicated before being used to train Pythia-2.8B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-2.8B for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-2.8B as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-2.8B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-2.8B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-2.8B to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-2.8B may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-2.8B.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror. \n\nThe Pile was not deduplicated before being used to train Pythia-2.8B.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
feature-extraction | transformers |
# megatron.bert-base.bpe-32k-no_pretok.25k-steps
This BERT model was trained using the NeMo library.
The size of the model is a regular bert-large.
The model was trained on more than 245GB of data, consisting mostly of web-data and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 25k training steps using a batch size of 8k. A minimal
usage sketch is included after the sibling list below.

The model has multiple sibling models trained on the same dataset using different tokenizers and more or fewer parameters:
- [megatron.bert-base.bpe-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.bpe-32k-no_pretok.25k-steps)
- [megatron.bert-base.bpe-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.bpe-64k-no_pretok.25k-steps)
- [megatron.bert-base.spe-bpe-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-32k-no_pretok.25k-steps)
- [megatron.bert-base.spe-bpe-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-32k-pretok.25k-steps)
- [megatron.bert-base.spe-bpe-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-64k-no_pretok.25k-steps)
- [megatron.bert-base.spe-bpe-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-64k-pretok.25k-steps)
- [megatron.bert-base.unigram-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-32k-no_pretok.25k-steps)
- [megatron.bert-base.unigram-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-32k-pretok.25k-steps)
- [megatron.bert-base.unigram-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-64k-no_pretok.25k-steps)
- [megatron.bert-base.unigram-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-64k-pretok.25k-steps)
- [megatron.bert-base.wordpiece-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-32k-no_pretok.25k-steps)
- [megatron.bert-base.wordpiece-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-32k-pretok.25k-steps)
- [megatron.bert-base.wordpiece-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-64k-no_pretok.25k-steps)
- [megatron.bert-base.wordpiece-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-64k-pretok.25k-steps)
- [megatron.bert-large.bpe-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.bpe-64k-no_pretok.25k-steps)
- [megatron.bert-large.spe-bpe-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.spe-bpe-32k-pretok.25k-steps)
- [megatron.bert-large.unigram-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.unigram-32k-pretok.25k-steps)
- [megatron.bert-large.unigram-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.unigram-64k-pretok.25k-steps)
- [megatron.bert-large.wordpiece-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.wordpiece-32k-pretok.25k-steps)
- [megatron.bert-large.wordpiece-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.wordpiece-64k-pretok.25k-steps)
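The sketch below shows one way to pull sentence-level features from this checkpoint with
the Transformers library, assuming the repository ships a Transformers-compatible
tokenizer; the mean pooling over the last hidden state is an illustrative choice, not
something prescribed by this card:
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "KBLab/megatron.bert-base.bpe-32k-no_pretok.25k-steps"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("Kungliga biblioteket är Sveriges nationalbibliotek.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state      # (1, seq_len, hidden_size)

# Mean-pool over non-padding tokens to get one vector per sentence (illustrative).
mask = inputs["attention_mask"].unsqueeze(-1)
sentence_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_vec.shape)
```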
## Acknowledgements
The training was performed on the Luxembourg national supercomputer MeluXina.
The authors gratefully acknowledge the LuxProvide teams for their expert support.
| {"language": ["sv"]} | KBLab/megatron.bert-base.bpe-32k-no_pretok.25k-steps | null | [
"transformers",
"pytorch",
"safetensors",
"megatron-bert",
"feature-extraction",
"sv",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:56:06+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #safetensors #megatron-bert #feature-extraction #sv #endpoints_compatible #region-us
|
# URL-32k-no_pretok.25k-steps
This BERT model was trained using the NeMo library.
The size of the model is a regular bert-large.
The model was trained on more than 245GB of data, consisting mostly of web-data and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 25k training steps using a batch size of 8k.
The model has multiple sibling models trained on the same dataset using different tokenizers or more/less parameters:
- URL-32k-no_pretok.25k-steps
- URL-64k-no_pretok.25k-steps
- URL-bpe-32k-no_pretok.25k-steps
- URL-bpe-32k-pretok.25k-steps
- URL-bpe-64k-no_pretok.25k-steps
- URL-bpe-64k-pretok.25k-steps
- URL-base.unigram-32k-no_pretok.25k-steps
- URL-base.unigram-32k-pretok.25k-steps
- URL-base.unigram-64k-no_pretok.25k-steps
- URL-base.unigram-64k-pretok.25k-steps
- URL-base.wordpiece-32k-no_pretok.25k-steps
- URL-base.wordpiece-32k-pretok.25k-steps
- URL-base.wordpiece-64k-no_pretok.25k-steps
- URL-base.wordpiece-64k-pretok.25k-steps
- URL-64k-no_pretok.25k-steps
- URL-bpe-32k-pretok.25k-steps
- URL-large.unigram-32k-pretok.25k-steps
- URL-large.unigram-64k-pretok.25k-steps
- URL-large.wordpiece-32k-pretok.25k-steps
- URL-large.wordpiece-64k-pretok.25k-steps
## Acknowledgements
The training was performed on the Luxembourg national supercomputer MeluXina.
The authors gratefully acknowledge the LuxProvide teams for their expert support.
| [
"# URL-32k-no_pretok.25k-steps\n\nThis BERT model was trained using the NeMo library.\nThe size of the model is a regular bert-large.\nThe model was trained on more than 245GB of data, consisting mostly of web-data and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 25k training steps using a batch size of 8k.\n\nThe model has multiple sibling models trained on the same dataset using different tokenizers or more/less parameters:\n- URL-32k-no_pretok.25k-steps\n- URL-64k-no_pretok.25k-steps\n- URL-bpe-32k-no_pretok.25k-steps\n- URL-bpe-32k-pretok.25k-steps\n- URL-bpe-64k-no_pretok.25k-steps\n- URL-bpe-64k-pretok.25k-steps\n- URL-base.unigram-32k-no_pretok.25k-steps\n- URL-base.unigram-32k-pretok.25k-steps\n- URL-base.unigram-64k-no_pretok.25k-steps\n- URL-base.unigram-64k-pretok.25k-steps\n- URL-base.wordpiece-32k-no_pretok.25k-steps\n- URL-base.wordpiece-32k-pretok.25k-steps\n- URL-base.wordpiece-64k-no_pretok.25k-steps\n- URL-base.wordpiece-64k-pretok.25k-steps\n- URL-64k-no_pretok.25k-steps\n- URL-bpe-32k-pretok.25k-steps\n- URL-large.unigram-32k-pretok.25k-steps\n- URL-large.unigram-64k-pretok.25k-steps\n- URL-large.wordpiece-32k-pretok.25k-steps\n- URL-large.wordpiece-64k-pretok.25k-steps",
"## Acknowledgements\n\nThe training was performed on the Luxembourg national supercomputer MeluXina.\nThe authors gratefully acknowledge the LuxProvide teams for their expert support."
] | [
"TAGS\n#transformers #pytorch #safetensors #megatron-bert #feature-extraction #sv #endpoints_compatible #region-us \n",
"# URL-32k-no_pretok.25k-steps\n\nThis BERT model was trained using the NeMo library.\nThe size of the model is a regular bert-large.\nThe model was trained on more than 245GB of data, consisting mostly of web-data and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 25k training steps using a batch size of 8k.\n\nThe model has multiple sibling models trained on the same dataset using different tokenizers or more/less parameters:\n- URL-32k-no_pretok.25k-steps\n- URL-64k-no_pretok.25k-steps\n- URL-bpe-32k-no_pretok.25k-steps\n- URL-bpe-32k-pretok.25k-steps\n- URL-bpe-64k-no_pretok.25k-steps\n- URL-bpe-64k-pretok.25k-steps\n- URL-base.unigram-32k-no_pretok.25k-steps\n- URL-base.unigram-32k-pretok.25k-steps\n- URL-base.unigram-64k-no_pretok.25k-steps\n- URL-base.unigram-64k-pretok.25k-steps\n- URL-base.wordpiece-32k-no_pretok.25k-steps\n- URL-base.wordpiece-32k-pretok.25k-steps\n- URL-base.wordpiece-64k-no_pretok.25k-steps\n- URL-base.wordpiece-64k-pretok.25k-steps\n- URL-64k-no_pretok.25k-steps\n- URL-bpe-32k-pretok.25k-steps\n- URL-large.unigram-32k-pretok.25k-steps\n- URL-large.unigram-64k-pretok.25k-steps\n- URL-large.wordpiece-32k-pretok.25k-steps\n- URL-large.wordpiece-64k-pretok.25k-steps",
"## Acknowledgements\n\nThe training was performed on the Luxembourg national supercomputer MeluXina.\nThe authors gratefully acknowledge the LuxProvide teams for their expert support."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-deduped - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
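This repository hosts a 4-bit bitsandbytes export (see the header of this card); an
equivalent effect can be had by quantizing the original EleutherAI checkpoint on the
fly, roughly as sketched below. A CUDA GPU plus the `bitsandbytes` and `accelerate`
packages are assumed, and the compute dtype is an illustrative choice:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-1b-deduped",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b-deduped")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```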
## Training
### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
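Because a checkpoint lands every 2,097,152,000 tokens (1,000 steps at the 2M-token
batch), mapping a branch name to the number of training tokens it has seen is a
one-line calculation; the helper below is illustrative only:
```python
TOKENS_PER_STEP = 2_097_152  # batch size in tokens

def tokens_seen(branch: str) -> int:
    """Training tokens consumed by a checkpoint branch such as 'step3000'."""
    return int(branch.removeprefix("step")) * TOKENS_PER_STEP

print(tokens_seen("step3000"))    # 6,291,456,000 tokens
print(tokens_seen("step143000"))  # 299,892,736,000 tokens, i.e. the full run
```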
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| {} | RichardErkhov/EleutherAI_-_pythia-1b-deduped-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:56:06+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1b-deduped - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1B-deduped
=================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [email protected].
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
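The snippet below is reproduced from the full Pythia model card, with print statements and comments added for convenience; nothing in it is new to this summary.
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the "step3000" checkpoint branch of pythia-70m-deduped.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```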
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-1B-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
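For experiments that need the training corpus itself, the deduplicated Pile can be streamed from the Hugging Face Hub. The sketch below assumes the `datasets` library, the `EleutherAI/the_pile_deduplicated` dataset repository, and that the text column is named `text`.
```python
from datasets import load_dataset

# Stream the globally deduplicated Pile rather than downloading all ~825GiB up front.
pile = load_dataset(
    "EleutherAI/the_pile_deduplicated",
    split="train",
    streaming=True,
)

for example in pile.take(3):
    print(example["text"][:200])
```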
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
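The token counts above are internally consistent, as a quick arithmetic check shows:
```python
# 143,000 optimizer steps at a 2M-token batch size, with checkpoints every 1,000 steps.
tokens_per_step = 2_097_152
total_steps = 143_000
checkpoint_interval_steps = 1_000

print(total_steps * tokens_per_step)                # 299892736000 tokens seen in training
print(checkpoint_interval_steps * tokens_per_step)  # 2097152000 tokens between checkpoints
```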
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
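Reproducing a single number from these evaluations is possible through the harness's Python API; the call below is a sketch and assumes a recent `lm_eval` release that exposes `simple_evaluate`.
```python
import lm_eval

# Zero-shot evaluation of one Pythia checkpoint on the LAMBADA (OpenAI) task.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1b-deduped",
    tasks=["lambada_openai"],
    batch_size=8,
)
print(results["results"]["lambada_openai"])
```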
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were now
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-seed1 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-seed1/
Original model description:
Entry not found
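A hedged sketch of how a bitsandbytes 4-bit checkpoint such as this one is typically loaded with `transformers`; it assumes `bitsandbytes` and `accelerate` are installed and that the quantization config stored with the checkpoint is picked up automatically.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-160m-seed1-4bits"

# The 4-bit quantization settings are serialized with the checkpoint, so a plain
# from_pretrained call should restore the already-quantized weights.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```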
| {} | RichardErkhov/EleutherAI_-_pythia-160m-seed1-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:56:17+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-seed1 - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
Entry not found
| [] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n"
] |
image-text-to-text | xtuner |
# stormchaser/llava-llama-3-8b-v1_1-Q6_K-GGUF
This model was converted to GGUF format from [`xtuner/llava-llama-3-8b-v1_1`](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo stormchaser/llava-llama-3-8b-v1_1-Q6_K-GGUF --model llava-llama-3-8b-v1_1.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo stormchaser/llava-llama-3-8b-v1_1-Q6_K-GGUF --model llava-llama-3-8b-v1_1.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llava-llama-3-8b-v1_1.Q6_K.gguf -n 128
```
| {"library_name": "xtuner", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Lin-Chen/ShareGPT4V"], "pipeline_tag": "image-text-to-text"} | stormchaser/llava-llama-3-8b-v1_1-Q6_K-GGUF | null | [
"xtuner",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"dataset:Lin-Chen/ShareGPT4V",
"region:us"
] | null | 2024-04-23T07:56:31+00:00 | [] | [] | TAGS
#xtuner #gguf #llama-cpp #gguf-my-repo #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #region-us
|
# stormchaser/llava-llama-3-8b-v1_1-Q6_K-GGUF
This model was converted to GGUF format from 'xtuner/llava-llama-3-8b-v1_1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# stormchaser/llava-llama-3-8b-v1_1-Q6_K-GGUF\nThis model was converted to GGUF format from 'xtuner/llava-llama-3-8b-v1_1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#xtuner #gguf #llama-cpp #gguf-my-repo #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #region-us \n",
"# stormchaser/llava-llama-3-8b-v1_1-Q6_K-GGUF\nThis model was converted to GGUF format from 'xtuner/llava-llama-3-8b-v1_1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-seed1 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-seed1/
Original model description:
Entry not found
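A similar hedged sketch for the 8-bit variant, additionally printing the model's memory footprint to make the effect of quantization visible (again assuming `bitsandbytes` and `accelerate` are installed):
```python
from transformers import AutoModelForCausalLM

repo_id = "RichardErkhov/EleutherAI_-_pythia-160m-seed1-8bits"

# Load the pre-quantized 8-bit checkpoint and report how much memory its weights occupy.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
print(f"{model.get_memory_footprint() / 1e6:.1f} MB")
```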
| {} | RichardErkhov/EleutherAI_-_pythia-160m-seed1-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:56:55+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-seed1 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
Entry not found
| [] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kalai_bert_model_test_3_out
This model is a fine-tuned version of [KalaiselvanD/kalai_bert_model_test_3_out](https://huggingface.co/KalaiselvanD/kalai_bert_model_test_3_out) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0761
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
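In `transformers`, these settings roughly correspond to a `TrainingArguments` configuration like the sketch below; the output directory is a placeholder and is not taken from the original training script.
```python
from transformers import TrainingArguments

# Hyperparameters as listed above; everything else is left at library defaults.
training_args = TrainingArguments(
    output_dir="kalai_bert_model_test_3_out",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```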
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.1840 | 1.0 |
| No log | 2.0 | 8 | 0.0899 | 1.0 |
| No log | 3.0 | 12 | 0.0761 | 1.0 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "KalaiselvanD/kalai_bert_model_test_3_out", "model-index": [{"name": "kalai_bert_model_test_3_out", "results": []}]} | KalaiselvanD/kalai_bert_model_test_3_out | null | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:KalaiselvanD/kalai_bert_model_test_3_out",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T07:57:25+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #base_model-KalaiselvanD/kalai_bert_model_test_3_out #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| kalai\_bert\_model\_test\_3\_out
================================
This model is a fine-tuned version of KalaiselvanD/kalai\_bert\_model\_test\_3\_out on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0761
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #base_model-KalaiselvanD/kalai_bert_model_test_3_out #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were now
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
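As a sanity check on the table above, the gap between total and non-embedding parameters matches what two untied embedding matrices would contribute. The vocabulary size of 50,304 used below is the padded GPT-NeoX tokenizer vocabulary and is an assumption, not a figure stated in this card.

```python
# Pythia-1B row of the table above.
total_params = 1_011_781_632
non_embedding_params = 805_736_448
d_model = 2048           # model dimension for the 1.0B model (see the earlier table)
vocab_size = 50_304      # assumed padded GPT-NeoX tokenizer vocabulary

embedding_params = total_params - non_embedding_params
print(embedding_params)              # 206045184
print(2 * vocab_size * d_model)      # 206045184 -> input + output embedding matrices
```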
| {} | RichardErkhov/EleutherAI_-_pythia-1b-deduped-8bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-23T07:57:25+00:00 | [
"2304.01373",
"2101.00027",
"2201.07311"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-1b-deduped - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* en
tags:
* pytorch
* causal-lm
* pythia
license: apache-2.0
datasets:
* EleutherAI/the\_pile\_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research (see paper).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models [match or exceed](#evaluations) the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Details on previous early release and naming convention.
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card [lists the changes](#changelog);
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
still available, but we
suggest the retrained suite if you are just starting to use Pythia.
This is the current release.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a [table
comparing the old and new names](#naming-convention-and-parameter-count) is provided in this model card, together
with exact parameter counts.
Pythia-1B-deduped
=================
Model Details
-------------
* Developed by: EleutherAI
* Model type: Transformer-based Language Model
* Language: English
* Learn more: Pythia's GitHub repository
for training procedure, config files, and details on how to use.
See paper for more evals and implementation
details.
* Library: GPT-NeoX
* License: Apache 2.0
* Contact: to ask questions about this model, join the EleutherAI
Discord, and post them in '#release-discussion'.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: contact@eleuther.
ai.
Engineering details for the *Pythia Suite*. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have **exactly** the same architecture, and the same number of
non-embedding parameters.
Uses and Limitations
--------------------
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints
'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to
'step143000'. These checkpoints are hosted on Hugging Face as branches. Note
that branch '143000' corresponds exactly to the model checkpoint on the 'main'
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face Transformers
Library. If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is not intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
output.
This model was trained on the Pile, a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See Section 6 of the Pile paper for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third 'pythia-70m-deduped' checkpoint:
Revision/branch 'step143000' corresponds exactly to the model checkpoint on
the 'main' branch of each model.
For more information on how to use all Pythia models, see documentation on
GitHub.
Training
--------
### Training data
Pythia-1B-deduped was trained on the Pile after the dataset has been globally
deduplicated.
The Pile is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See the Pile
paper for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult the
datasheet for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the official website, or from a community
mirror.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from 'step1000' to 'step143000' (which is the same as 'main'). In addition, we
also provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).
See GitHub for more details on training
procedure, including how to reproduce
it.
Pythia uses the same tokenizer as GPT-NeoX-
20B.
Evaluations
-----------
All 16 *Pythia* models were evaluated using the LM Evaluation
Harness. You can access
the results by model and step at 'results/json/\*' in the GitHub
repository.
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
LAMBADA – OpenAI

Physical Interaction: Question Answering (PIQA)

WinoGrande

AI2 Reasoning Challenge—Easy Set

SciQ

Changelog
---------
This section compares differences between previously released
Pythia v0 and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
* All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
* Flash Attention was used in the new retrained suite.
* We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were now
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
| [
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-2304.01373 #arxiv-2101.00027 #arxiv-2201.07311 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Intended Use\n\n\nThe primary intended use of Pythia is research on the behavior, functionality,\nand limitations of large language models. This suite is intended to provide\na controlled setting for performing scientific experiments. We also provide\n154 checkpoints per model: initial 'step0', 10 log-spaced checkpoints\n'step{1,2,4...512}', and 143 evenly-spaced checkpoints from 'step1000' to\n'step143000'. These checkpoints are hosted on Hugging Face as branches. Note\nthat branch '143000' corresponds exactly to the model checkpoint on the 'main'\nbranch of each model.\n\n\nYou may also further fine-tune and adapt Pythia-1B-deduped for deployment,\nas long as your use is in accordance with the Apache 2.0 license. Pythia\nmodels work with the Hugging Face Transformers\nLibrary. If you decide to use\npre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please\nconduct your own risk and bias assessment.",
"### Out-of-scope use\n\n\nThe Pythia Suite is not intended for deployment. It is not a in itself\na product and cannot be used for human-facing interactions. For example,\nthe model may generate harmful or offensive text. Please evaluate the risks\nassociated with your particular use case.\n\n\nPythia models are English-language only, and are not suitable for translation\nor generating text in other languages.\n\n\nPythia-1B-deduped has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means Pythia-1B-deduped will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and biases\n\n\nThe core functionality of a large language model is to take a string of text\nand predict the next token. The token used by the model need not produce the\nmost “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate\noutput.\n\n\nThis model was trained on the Pile, a dataset\nknown to contain profanity and texts that are lewd or otherwise offensive.\nSee Section 6 of the Pile paper for a\ndiscussion of documented biases with regards to gender, religion, and race.\nPythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*\nthe prompt itself does not include anything explicitly offensive.\n\n\nIf you plan on using text generated through, for example, the Hosted Inference\nAPI, we recommend having a human curate the outputs of this language model\nbefore presenting it to other people. Please inform your audience that the\ntext was generated by Pythia-1B-deduped.",
"### Quickstart\n\n\nPythia models can be loaded and used via the following code, demonstrated here\nfor the third 'pythia-70m-deduped' checkpoint:\n\n\nRevision/branch 'step143000' corresponds exactly to the model checkpoint on\nthe 'main' branch of each model. \n\nFor more information on how to use all Pythia models, see documentation on\nGitHub.\n\n\nTraining\n--------",
"### Training data\n\n\nPythia-1B-deduped was trained on the Pile after the dataset has been globally\ndeduplicated. \n\nThe Pile is a 825GiB general-purpose dataset in\nEnglish. It was created by EleutherAI specifically for training large language\nmodels. It contains texts from 22 diverse sources, roughly broken down into\nfive categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),\nprose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and\nmiscellaneous (e.g. GitHub, Enron Emails). See the Pile\npaper for a breakdown of all data sources,\nmethodology, and a discussion of ethical implications. Consult the\ndatasheet for more detailed documentation\nabout the Pile and its component datasets. The Pile can be downloaded from\nthe official website, or from a community\nmirror.",
"### Training procedure\n\n\nAll models were trained on the exact same data, in the exact same order. Each\nmodel saw 299,892,736,000 tokens during training, and 143 checkpoints for each\nmodel are saved every 2,097,152,000 tokens, spaced evenly throughout training,\nfrom 'step1000' to 'step143000' (which is the same as 'main'). In addition, we\nalso provide frequent early checkpoints: 'step0' and 'step{1,2,4...512}'.\nThis corresponds to training for just under 1 epoch on the Pile for\nnon-deduplicated models, and about 1.5 epochs on the deduplicated Pile.\n\n\nAll *Pythia* models trained for 143000 steps at a batch size\nof 2M (2,097,152 tokens). \n\nSee GitHub for more details on training\nprocedure, including how to reproduce\nit. \n\nPythia uses the same tokenizer as GPT-NeoX-\n20B.\n\n\nEvaluations\n-----------\n\n\nAll 16 *Pythia* models were evaluated using the LM Evaluation\nHarness. You can access\nthe results by model and step at 'results/json/\\*' in the GitHub\nrepository. \n\nExpand the sections below to see plots of evaluation results for all\nPythia and Pythia-deduped models compared with OPT and BLOOM.\n\n\n\nLAMBADA – OpenAI\n\n\n\nPhysical Interaction: Question Answering (PIQA)\n\n\n\nWinoGrande\n\n\n\nAI2 Reasoning Challenge—Easy Set\n\n\n\nSciQ\n\n\nChangelog\n---------\n\n\nThis section compares differences between previously released\nPythia v0 and the current\nmodels. See Appendix B of the Pythia paper for further discussion of these\nchanges and the motivation behind them. We found that retraining Pythia had no\nimpact on benchmark performance.\n\n\n* All model sizes are now trained with uniform batch size of 2M tokens.\nPreviously, the models of size 160M, 410M, and 1.4B parameters were trained\nwith batch sizes of 4M tokens.\n* We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,\n128,256,512} in addition to every 1000 training steps.\n* Flash Attention was used in the new retrained suite.\n* We remedied a minor inconsistency that existed in the original suite: all\nmodels of size 2.8B parameters or smaller had a learning rate (LR) schedule\nwhich decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and\n12B models all used an LR schedule which decayed to a minimum LR of 0. In\nthe redone training runs, we rectified this inconsistency: all models now were\ntrained with LR decaying to a minimum of 0.1× their maximum LR.",
"### Naming convention and parameter count\n\n\n*Pythia* models were renamed in January 2023. It is possible that the old\nnaming convention still persists in some documentation by accident. The\ncurrent naming convention (70M, 160M, etc.) is based on total parameter count."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Saiga_timelist_task30steps
This model is a fine-tuned version of [TheBloke/Llama-2-7B-fp16](https://huggingface.co/TheBloke/Llama-2-7B-fp16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0384
## Model description
More information needed
## Intended uses & limitations
More information needed
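
Until usage details are added, the snippet below is a minimal, unverified sketch of how such a checkpoint is typically loaded. It assumes this repository holds a standard PEFT adapter (e.g. LoRA) for the base model named above; the adapter type, expected prompt format, and generation settings are not documented here, so treat them as placeholders.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Llama-2-7B-fp16"                  # base model from this card
adapter_id = "marcus2000/Saiga_timelist_task30steps"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter

# Plain-text prompting is an assumption; the training prompt format is not documented.
prompt = "Example prompt"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```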
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 30
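
For reference, the hyperparameters above map roughly onto a transformers `TrainingArguments` object. The sketch below is an approximation only: the dataset, any PEFT/LoRA configuration, and the exact training script are not documented, `output_dir` is a placeholder, and the evaluation interval is inferred from the results table below.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="saiga_timelist_task30steps",  # placeholder, not from this card
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=10,   # effective train batch size: 2 * 10 = 20
    max_steps=30,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",              # Adam-style optimizer, betas=(0.9, 0.999), eps=1e-8
    evaluation_strategy="steps",      # the results table reports eval every 2 steps
    eval_steps=2,
    logging_steps=2,
)
```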
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2298 | 0.37 | 2 | 2.2027 |
| 2.0986 | 0.74 | 4 | 2.1505 |
| 2.0278 | 1.11 | 6 | 2.1167 |
| 1.9954 | 1.48 | 8 | 2.0915 |
| 1.9696 | 1.85 | 10 | 2.0753 |
| 1.8978 | 2.22 | 12 | 2.0648 |
| 1.9246 | 2.59 | 14 | 2.0564 |
| 1.9361 | 2.96 | 16 | 2.0506 |
| 1.895 | 3.33 | 18 | 2.0470 |
| 1.8525 | 3.7 | 20 | 2.0442 |
| 1.8912 | 4.07 | 22 | 2.0419 |
| 1.8689 | 4.44 | 24 | 2.0400 |
| 1.9054 | 4.81 | 26 | 2.0390 |
| 1.8537 | 5.19 | 28 | 2.0384 |
| 1.8501 | 5.56 | 30 | 2.0384 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Llama-2-7B-fp16", "model-index": [{"name": "Saiga_timelist_task30steps", "results": []}]} | marcus2000/Saiga_timelist_task30steps | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Llama-2-7B-fp16",
"region:us"
] | null | 2024-04-23T07:57:38+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us
| Saiga\_timelist\_task30steps
============================
This model is a fine-tuned version of TheBloke/Llama-2-7B-fp16 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0384
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 10
* total\_train\_batch\_size: 20
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 30
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 30",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 30",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-seed2 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-seed2/
Original model description:
Entry not found
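
Since the original model card is empty, the following is an unverified sketch of loading this checkpoint with transformers. It assumes the repository stores pre-quantized bitsandbytes 4-bit weights (as the "bnb 4bits" label suggests), so the quantization config is picked up from the checkpoint itself; a CUDA device and an installed `bitsandbytes` package are assumed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/EleutherAI_-_pythia-160m-seed2-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The saved quantization_config should make from_pretrained load the weights in 4-bit.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Pythia is a suite of language models that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```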
| {} | RichardErkhov/EleutherAI_-_pythia-160m-seed2-4bits | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T07:57:48+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
pythia-160m-seed2 - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
Entry not found
| [] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n"
] |