| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (sequencelengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (sequencelengths 0-201) | languages (sequencelengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (sequencelengths 0-722) | processed_texts (sequencelengths 1-723) | tokens_length (sequencelengths 1-723) | input_texts (sequencelengths 1-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
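Below is a minimal, hedged sketch of loading this checkpoint, assuming only what the repo tags state (a `bert` text-classification model served through `transformers`); label names and preprocessing are not documented here.

```python
# Sketch under stated assumptions: a BERT sequence-classification checkpoint.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("MoGP/g_x_reg")
model = AutoModelForSequenceClassification.from_pretrained("MoGP/g_x_reg")

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels); labels undocumented
print(logits)
```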
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MoGP/g_x_reg | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:02:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
37,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
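Below is a minimal, hedged sketch based only on the repo tags (`vision-encoder-decoder`) and the Donut-style repo name; the task prompt and output format are not documented here, and `document.png` is a hypothetical input.

```python
from transformers import AutoProcessor, VisionEncoderDecoderModel
from PIL import Image

repo = "azhara001/donut-base-demo-new-3e-05_AdamW_1876"
processor = AutoProcessor.from_pretrained(repo)  # assumes a processor is bundled
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_new_tokens=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```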
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azhara001/donut-base-demo-new-3e-05_AdamW_1876 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:03:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
35,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
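Below is a minimal, hedged sketch assuming only the repo tags (a 4-bit Mistral causal-LM); the expected prompt format is not documented, so the prompt string is a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "tropianhs/mistral-tweet-finetune-tropianhs-full"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Write a tweet about open-source AI:"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```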
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | tropianhs/mistral-tweet-finetune-tropianhs-full | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T09:03:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
48,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** waylandzhang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
Teaching-purpose model. This model exists only to accompany my video tutorials :D
**QLoRA (4bit)**
Params to replicate training
Peft Config
```
r=8,
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
],
lora_alpha=16,
lora_dropout=0,
bias="none",
random_state=3407,
use_rslora=False, # Rank stabilized LoRA
loftq_config=None, # LoftQ
```
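For context, here is a hedged sketch of how the PEFT parameters above are typically passed to Unsloth's `FastLanguageModel.get_peft_model`; the base-model load mirrors the stated base (`unsloth/llama-3-8b-bnb-4bit`), but the exact call is assumed rather than taken from this card.

```python
from unsloth import FastLanguageModel

# Assumed wiring: the card lists only the kwargs, not the call itself.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    random_state=3407,
    use_rslora=False,   # rank-stabilized LoRA disabled
    loftq_config=None,  # LoftQ disabled
)
```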
Training args
```
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
gradient_accumulation_steps=4, # set to 4 to avoid issues with GPTQ Quantization
warmup_steps=5,
max_steps=300, # Fine-tune iterations
learning_rate=2e-4,
fp16=not torch.cuda.is_bf16_supported(),
bf16=torch.cuda.is_bf16_supported(),
evaluation_strategy="steps",
prediction_loss_only=True,
eval_accumulation_steps=1,
eval_steps=10,
logging_steps=1,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="cosine", # instead of "linear"
seed=1337,
output_dir="wayland-files/models",
report_to="wandb", # Log report to W&B
```
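And a hedged sketch of how those arguments plug into `transformers.TrainingArguments` and TRL's `SFTTrainer`; the trainer wiring and datasets are assumptions, not shown in this card.

```python
import torch
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=300,
    learning_rate=2e-4,
    fp16=not torch.cuda.is_bf16_supported(),
    bf16=torch.cuda.is_bf16_supported(),
    evaluation_strategy="steps",
    prediction_loss_only=True,
    eval_accumulation_steps=1,
    eval_steps=10,
    logging_steps=1,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    seed=1337,
    output_dir="wayland-files/models",
    report_to="wandb",
)
trainer = SFTTrainer(
    model=model,                  # PEFT model from the config above
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # hypothetical dataset, not in the card
    eval_dataset=eval_dataset,    # hypothetical dataset, not in the card
    args=args,
)
trainer.train()
```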
**Inference Code**
```python
from unsloth import FastLanguageModel
import os
import torch
max_seq_length = 4096 # 2048
dtype = None
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1",
max_seq_length=max_seq_length,
dtype=dtype,
load_in_4bit=load_in_4bit,
device_map="cuda",
attn_implementation="flash_attention_2"
)
FastLanguageModel.for_inference(model) # unsloth's inference mode gives roughly a 2x speedup
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
inputs = tokenizer(
[
alpaca_prompt.format(
"给你一段话,帮我继续写下去。", # 任务指令
"小明在西安城墙上", # 用户指令
"", # output - 留空以自动生成 / 不留空以填充
)
], return_tensors="pt").to("cuda")
# Option 1: plain generated-text output
# outputs = model.generate(**inputs, max_new_tokens=500, use_cache=True)
# print(tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
# Option 2: streamed output
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=500)
```
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en", "zh"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "Chinese", "unsloth", "llama", "trl"], "base_model": "waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1"} | waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"Chinese",
"unsloth",
"trl",
"en",
"zh",
"base_model:waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-02T09:04:18+00:00 | [] | [
"en",
"zh"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #Chinese #unsloth #trl #en #zh #base_model-waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: waylandzhang
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
Teaching-purpose model. This model exists only to accompany my video tutorials :D
QLoRA (4bit)
Params to replicate training
Peft Config
Training args
Inference Code
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Uploaded model\n\n- Developed by: waylandzhang\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nTeaching purpose model。 这个model只是配合我视频教学目的 :D\n\n\nQLoRA (4bit)\n\nParams to replicate training\n\nPeft Config\n\n\n\nTraining args\n\n\n\n\nInterernce Code\n\n\n\n\n\n\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #Chinese #unsloth #trl #en #zh #base_model-waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: waylandzhang\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nTeaching purpose model。 这个model只是配合我视频教学目的 :D\n\n\nQLoRA (4bit)\n\nParams to replicate training\n\nPeft Config\n\n\n\nTraining args\n\n\n\n\nInterernce Code\n\n\n\n\n\n\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
91,
125
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #Chinese #unsloth #trl #en #zh #base_model-waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n# Uploaded model\n\n- Developed by: waylandzhang\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nTeaching purpose model。 这个model只是配合我视频教学目的 :D\n\n\nQLoRA (4bit)\n\nParams to replicate training\n\nPeft Config\n\n\n\nTraining args\n\n\n\n\nInterernce Code\n\n\n\n\n\n\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | diffusers |
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/sayakpaul/diffusion-orpo-lora-sdxl/runs/23l9mn47).
| {} | sayakpaul/sdxl-orpo-large-beta_orpo-0.005-beta_inner-500-lr-5e-7-scaled-steps-5000-no-warmup | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-05-02T09:04:38+00:00 | [] | [] | TAGS
#diffusers #safetensors #region-us
|
More information on all the CLI arguments and the environment is available on your 'wandb' run page.
| [] | [
"TAGS\n#diffusers #safetensors #region-us \n"
] | [
12
] | [
"TAGS\n#diffusers #safetensors #region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
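Below is a minimal, hedged sketch that assumes, from the repo name alone, a Whisper checkpoint fine-tuned for Korean speech recognition; this is not confirmed anywhere in the card, and the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="youngisk/whisper-large-v2-korea-common_13",
)
print(asr("sample_korean_audio.wav"))  # hypothetical audio file
```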
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | youngisk/whisper-large-v2-korea-common_13 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:04:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
sentence-similarity | sentence-transformers |
# COCO0414/DNF-bge-m3
A [bge-m3](https://huggingface.co/BAAI/bge-m3)-based embedding model for Dungeon & Fighter data.
Training data: the [Dungeon&Fighter](https://df.nexon.com) user community QA board, from which 79k QA pairs were extracted across 20k articles.
This model was created for use in RAG (retrieval-augmented generation) pipelines; a retrieval sketch follows the usage example below.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
import torch
question = "사도들에 대해서 알려줘"  # "Tell me about the apostles"
# Data from namuwiki
reval_content = "제6 사도제7 사도제8 사도무형의 시로코검은 질병의 디레지에불을 먹는 안톤긴 발의 로터스제9 사도제10 사도제11 사도제12 사도(舊) 폭룡왕 바칼(新) 건설자 루크성안의 미카엘라혼돈의 오즈마세리아 키르민 같이 보기: 테아나(인공신)제1 사도 - 숙명의 카인. 죽음이 두려워하는 자.제2 사도 - 우는 눈의 힐더. 두 가지 얼굴에 찬연히 빛나는 이슬을 감춘 자.[A] 제3 사도 - 창공의 이시스-프레이 (프레이-이시스). 땅에 발을 딛지 않는 자.제4 사도 - 정복자 카시야스. 피로 강철을 적시는 자.제5 사도† - 무형의 시로코. 수 백의 얼굴을 가졌으되 보이지 않는 자. - (오리진 이전)록시에게 사망. (오리진 이후) 4인의 웨펀마스터에게 사망.[4]이후 그림시커가 사념을 모두 모아 다시 부활하고 하늘성을 점령한 뒤 이튼 공업지대가 있는 천계를 노리고 있다. 그러나 결국 미리 대기하고 있던 천계군에게 집중포화로 또 다시 사망.제6 사도 - 검은 질병의 디레지에. 더러운 피를 흘리는 자. - 노스마이어로 전이되어 모험가와 미쉘, 아젤리아에게 패배한 뒤 힐더의 계략으로 인해 차원의 틈으로 빨려들어가 온 몸이 갈갈이 찢겨 나가며 봉인. 설정상 불사이기에 소멸하지 않았고, 불안정한 차원 속에서 다시 힘을 기르는 중.제7 사도† - 불을 먹는 안톤. 몸을 뻗어 능히 세상 끝에 닿을 수 있는 자. - 파워스테이션에 전이되어 모험가와 천계 토벌군에게 사망.제8 사도† - 긴발의 로터스. 한번에 수천의 무기를 쥘 수 있는 자. - 베히모스로 전이되어 3인의 웨펀마스터들과 제국군, 모험가에게 사망.(舊)제9 사도† - 폭룡왕 바칼. 불의 숨결을 내는 자. - 500년 전 마계에서 도망쳐 나와 천계를 지배하던 중 천계 저항군에 의해 사망.[5](新)제9 사도† - 건설자 루크. 말 없이 흙을 만지는 자. - 자신의 거주지인 죽은 자의 성에서 사망.[6]제10 사도 - 성안의 미카엘라. 진실을 꿰뚫어 보는 자.[7]제11 사도† - 혼돈의 오즈마. 죽음에서 일어난 자. 검은 대지에서 카잔에 의해 봉인 중이었으나, 이후 봉인이 완전히 풀려 날뛰다가 모험가들과의 결전으로 사망.제12 사도 - 세리아 키르민. 비밀을 알고 있는 자.[A]현재 플레인 : 아라드의 이야기 진행 상황에서 살아있는 사도는 카인, 힐더, 이시스-프레이, 카시야스, 디레지에, 미카엘라, 세리아까지 총 7명이다.사도의 순위는 마계에 올라탄 순서이긴 한데 서열에도 영향이 있긴 한지 제4사도 카시야스의 경우 구 스토리에서는 서열 4위의 사도라고 한다. 미카엘라부터는 마계랑 인연이 없는 사도이다.3. 모티브[편집]유래는 예수의 열두 제자인 사도이다. 호칭도 같은데다가 마침 숫자도 딱 열둘이고 그들 중 하나가 내쳐지고 다른 이가 임명된 일까지 예수의 열두 사도랑 딱 들어맞는다.[9] 창세기, 종세기, 창신세기로 이루어진 던파의 성서도 성경을 모티브로 한 것이므로 어찌 보면 자연스러운 연결이다.또한, 각 사도들에 반영된 개별적 모티브는 다음과 같다. 언급되지 않은 사도는 네오플에서 독자적으로 창작한 케이스다.카인: 성경(아담, 카인)힐더: 성경(하와)이시스-프레이/프레이-이시스: 슬라브 신화(벨로보그, 체르노보그), 북유럽 신화(프레이), 이집트 신화(이시스) [10]카시야스: 일본 요괴(오니)시로코: 사하라 사막의 열풍(시로코)[11]디레지에: 질병(흑사병 등)로터스: 바다괴물(크라켄)바칼: 악마성 드라큘라(드라큘라 블라드 체페슈)루크: 켈트 신화(루 라와더)미카엘라: 가톨릭(메시아)오즈마: 악마성 드라큘라(드라큘라 블라드 체페슈)세리아 키르민: ???3.1. 문양[편집]사도에겐 자신을 상징하는 고유의 문양이 존재하며 사도와 관련된 사건이 일어남에 따라 문양이 변경되는 경우도 있다.[12]카인은 그를 두려워하고 경외하는 자들에 의해서 문양이 만들어졌다. 힐더는 자신이 가진 우는 눈을 형상화하여 문양을 만들었다. 테라코타에서는 이 문양을 신성하게 여긴다. 이시스-프레이는 그를 따르는 추종자들에 의해서 문양이 만들어졌으며, 밤의 마천루 전투 이후에 이시스의 상징이[13] 추가"
model = SentenceTransformer('COCO0414/DNF-bge-m3')
question_embedding = model.encode(question, convert_to_tensor=True)
answer_embedding = model.encode(reval_content, convert_to_tensor=True)  # embed the retrieved passage
similarity = torch.nn.functional.cosine_similarity(question_embedding, answer_embedding, dim=0)
print(similarity.item())
# 0.6239629983901978
```
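For RAG-style retrieval, you typically embed a query and a set of candidate passages, then keep the top-scoring passages as context for generation. Below is a minimal sketch of that step; the `documents` list and its contents are hypothetical stand-ins for your own corpus.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('COCO0414/DNF-bge-m3')

# Hypothetical candidate passages; in practice these come from your own corpus.
documents = [
    "던전앤파이터의 사도 설정을 정리한 문서입니다.",  # "A document summarizing DnF apostle lore."
    "레이드 입장 조건과 보상에 대한 안내입니다.",  # "A guide to raid entry requirements and rewards."
]

query_embedding = model.encode("사도들에 대해서 알려줘", convert_to_tensor=True)
doc_embeddings = model.encode(documents, convert_to_tensor=True)

# Rank passages by cosine similarity and keep the best match for the RAG prompt.
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=1)[0]
print(hits[0]["score"], documents[hits[0]["corpus_id"]])
```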
| {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | COCO0414/DNF-bge-m3 | null | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:05:12+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# COCO0414/DNF-bge-m3
A bge-m3-based embedding model for Dungeon & Fighter data.
Training data: the Dungeon&Fighter user community QA board, from which 79k QA pairs were extracted across 20k articles.
This model was created for use in RAG (retrieval-augmented generation) pipelines.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
| [
"# COCO0414/DNF-bge-m3\n\nbge-m3 based model. This embedding model for Dungeon & Fighter data.\n\nTrained data, Dungeon&Fighter user community QA board. Extract 79k QA pairs data in 20k articles.\n\nThis model was created to use RAG.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:"
] | [
"TAGS\n#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# COCO0414/DNF-bge-m3\n\nbge-m3 based model. This embedding model for Dungeon & Fighter data.\n\nTrained data, Dungeon&Fighter user community QA board. Extract 79k QA pairs data in 20k articles.\n\nThis model was created to use RAG.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:"
] | [
31,
64,
30
] | [
"TAGS\n#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n# COCO0414/DNF-bge-m3\n\nbge-m3 based model. This embedding model for Dungeon & Fighter data.\n\nTrained data, Dungeon&Fighter user community QA board. Extract 79k QA pairs data in 20k articles.\n\nThis model was created to use RAG.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:"
] |
text-generation | transformers |
# 🔮 THIS IS UNHOLY 🔦

This model is a proud fork of [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B) that we've infused with [WEMAKE-CX/V41](https://huggingface.co/WEMAKE-CX/V41). This mix of technology and emotional intelligence opens new ways of understanding and interaction.
## Why 💙 WeMake VX-Unholy
We don't just want to advance technology; we want to do so with consciousness, responsibility, and empathy. VX-Unholy is one of the many steps we are taking toward this goal.
### Our Goals With VX-Unholy
- **Foster Emotional Intelligence:** Incorporating our V41 emotional intelligence unit, VX-Unholy aims to bring a new depth of understanding and empathy to AI interactions.
- **Promote Ethical AI Use:** We stand firmly for the ethical development and application of AI. This model is shared with the world under the [cc-by-nc-4.0](https://spdx.org/licenses/CC-BY-NC-4.0) license that encourages non-commercial, responsible use.
- **Encourage Innovation and Learning:** We believe in the power of community and shared knowledge. VX-Unholy is not just a tool but an open invitation for curious minds to explore, learn, and innovate together.
### How We Use VX-Unholy
VX-Unholy serves as one of many efforts to develop AI solutions that are not only technologically advanced but also socially conscious and emotionally intelligent. It's used in:
- Research and Development: Enhancing our understanding of AI's potential to comprehend and mimic emotional intelligence.
- Community Projects: Engaging with developers, ethicists, and the broader community to explore innovative uses and promote responsible AI development.
- Educational Programs: Offering resources for learning and exploration to foster the next generation of responsible AI engineers and enthusiasts.
## Our Commitment to Ethics and Sustainability
It's 2024: of course our commitment to sustainability, ethical responsibility, and equal opportunity shapes every project we undertake. VX-Unholy reflects these values and our belief in the power of technology to change the world for the better—when guided by 💙 and responsibility.
Thank you for your interest in VX-Unholy. Together, let's make a positive impact on the world, one intelligent interaction at a time.
💙 **The WeMake Team**
[🤝 Ethics Policy](https://wemake.cx/legal/ethics/) [🛡️ Privacy Policy](https://wemake.cx/legal/privacy) [📇 Imprint](https://wemake.cx/legal/imprint) | {"language": ["en", "de"], "license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["WEMAKE-CX/Intelligent-Content-Understanding"]} | WEMAKE-CX/VX-Unholy-13B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"en",
"de",
"dataset:WEMAKE-CX/Intelligent-Content-Understanding",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:05:29+00:00 | [] | [
"en",
"de"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #en #de #dataset-WEMAKE-CX/Intelligent-Content-Understanding #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# THIS IS UNHOLY
!image/png
This model is a proud fork of Undi95/Unholy-v2-13B, but we've infused it with WEMAKE-CX/V41. The mix of tech with emotional intelligence opens new ways for understanding and interaction.
## Why WeMake VX-Unholy
We don't only want to advance technology but to do so with consciousness, responsibility, and empathy. VX-Unholy represents one of many ways we take towards this goal.
### Our Goals With VX-Unholy
- Foster Emotional Intelligence: Incorporating our V41 emotional intelligence unit, VX-Unholy aims to bring a new depth of understanding and empathy to AI interactions.
- Promote Ethical AI Use: We stand firmly for the ethical development and application of AI. This model is shared with the world under the cc-by-nc-4.0 license that encourages non-commercial, responsible use.
- Encourage Innovation and Learning: We believe in the power of community and shared knowledge. VX-Unholy is not just a tool but an open invitation for curious minds to explore, learn, and innovate together.
### How We Use VX-Unholy
VX-Unholy serves as one of many efforts to develop AI solutions that are not only technologically advanced but also socially conscious and emotionally intelligent. It's used in:
- Research and Development: Enhancing our understanding of AI's potential to comprehend and mimic emotional intelligence.
- Community Projects: Engaging with developers, ethicists, and the broader community to explore innovative uses and promote responsible AI development.
- Educational Programs: Offering resources for learning and exploration to foster the next generation of responsible AI engineers and enthusiasts.
## Our Commitment to Ethics and Sustainability
It's 2024, of course, our commitment to sustainability, ethical responsibility, and equal opportunities shapes every project we take. VX-Unholy reflects these values and our belief in the power of technology to change the world for the better—when guided by and responsibility.
Thank you for your interest in VX-Unholy. Together, let's make a positive impact on the world, one intelligent interaction at a time.
The WeMake Team
Ethics Policy ️ Privacy Policy Imprint | [
"# THIS IS UNHOLY \n\n!image/png\n\nThis model is a proud fork of Undi95/Unholy-v2-13B, but we've infused it with WEMAKE-CX/V41. The mix of tech with emotional intelligence opens new ways for understanding and interaction.",
"## Why WeMake VX-Unholy\n\nWe don't only want to advance technology but to do so with consciousness, responsibility, and empathy. VX-Unholy represents one of many ways we take towards this goal.",
"### Our Goals With VX-Unholy\n\n- Foster Emotional Intelligence: Incorporating our V41 emotional intelligence unit, VX-Unholy aims to bring a new depth of understanding and empathy to AI interactions.\n\n- Promote Ethical AI Use: We stand firmly for the ethical development and application of AI. This model is shared with the world under the cc-by-nc-4.0 license that encourages non-commercial, responsible use.\n\n- Encourage Innovation and Learning: We believe in the power of community and shared knowledge. VX-Unholy is not just a tool but an open invitation for curious minds to explore, learn, and innovate together.",
"### How We Use VX-Unholy\n\nVX-Unholy serves as one of many efforts to develop AI solutions that are not only technologically advanced but also socially conscious and emotionally intelligent. It's used in:\n\n- Research and Development: Enhancing our understanding of AI's potential to comprehend and mimic emotional intelligence.\n- Community Projects: Engaging with developers, ethicists, and the broader community to explore innovative uses and promote responsible AI development.\n- Educational Programs: Offering resources for learning and exploration to foster the next generation of responsible AI engineers and enthusiasts.",
"## Our Commitment to Ethics and Sustainability\n\nIt's 2024, of course, our commitment to sustainability, ethical responsibility, and equal opportunities shapes every project we take. VX-Unholy reflects these values and our belief in the power of technology to change the world for the better—when guided by and responsibility.\n\nThank you for your interest in VX-Unholy. Together, let's make a positive impact on the world, one intelligent interaction at a time.\n\n The WeMake Team\n\n Ethics Policy ️ Privacy Policy Imprint"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #en #de #dataset-WEMAKE-CX/Intelligent-Content-Understanding #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# THIS IS UNHOLY \n\n!image/png\n\nThis model is a proud fork of Undi95/Unholy-v2-13B, but we've infused it with WEMAKE-CX/V41. The mix of tech with emotional intelligence opens new ways for understanding and interaction.",
"## Why WeMake VX-Unholy\n\nWe don't only want to advance technology but to do so with consciousness, responsibility, and empathy. VX-Unholy represents one of many ways we take towards this goal.",
"### Our Goals With VX-Unholy\n\n- Foster Emotional Intelligence: Incorporating our V41 emotional intelligence unit, VX-Unholy aims to bring a new depth of understanding and empathy to AI interactions.\n\n- Promote Ethical AI Use: We stand firmly for the ethical development and application of AI. This model is shared with the world under the cc-by-nc-4.0 license that encourages non-commercial, responsible use.\n\n- Encourage Innovation and Learning: We believe in the power of community and shared knowledge. VX-Unholy is not just a tool but an open invitation for curious minds to explore, learn, and innovate together.",
"### How We Use VX-Unholy\n\nVX-Unholy serves as one of many efforts to develop AI solutions that are not only technologically advanced but also socially conscious and emotionally intelligent. It's used in:\n\n- Research and Development: Enhancing our understanding of AI's potential to comprehend and mimic emotional intelligence.\n- Community Projects: Engaging with developers, ethicists, and the broader community to explore innovative uses and promote responsible AI development.\n- Educational Programs: Offering resources for learning and exploration to foster the next generation of responsible AI engineers and enthusiasts.",
"## Our Commitment to Ethics and Sustainability\n\nIt's 2024, of course, our commitment to sustainability, ethical responsibility, and equal opportunities shapes every project we take. VX-Unholy reflects these values and our belief in the power of technology to change the world for the better—when guided by and responsibility.\n\nThank you for your interest in VX-Unholy. Together, let's make a positive impact on the world, one intelligent interaction at a time.\n\n The WeMake Team\n\n Ethics Policy ️ Privacy Policy Imprint"
] | [
78,
65,
50,
137,
117,
108
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #en #de #dataset-WEMAKE-CX/Intelligent-Content-Understanding #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# THIS IS UNHOLY \n\n!image/png\n\nThis model is a proud fork of Undi95/Unholy-v2-13B, but we've infused it with WEMAKE-CX/V41. The mix of tech with emotional intelligence opens new ways for understanding and interaction.## Why WeMake VX-Unholy\n\nWe don't only want to advance technology but to do so with consciousness, responsibility, and empathy. VX-Unholy represents one of many ways we take towards this goal.### Our Goals With VX-Unholy\n\n- Foster Emotional Intelligence: Incorporating our V41 emotional intelligence unit, VX-Unholy aims to bring a new depth of understanding and empathy to AI interactions.\n\n- Promote Ethical AI Use: We stand firmly for the ethical development and application of AI. This model is shared with the world under the cc-by-nc-4.0 license that encourages non-commercial, responsible use.\n\n- Encourage Innovation and Learning: We believe in the power of community and shared knowledge. VX-Unholy is not just a tool but an open invitation for curious minds to explore, learn, and innovate together.### How We Use VX-Unholy\n\nVX-Unholy serves as one of many efforts to develop AI solutions that are not only technologically advanced but also socially conscious and emotionally intelligent. It's used in:\n\n- Research and Development: Enhancing our understanding of AI's potential to comprehend and mimic emotional intelligence.\n- Community Projects: Engaging with developers, ethicists, and the broader community to explore innovative uses and promote responsible AI development.\n- Educational Programs: Offering resources for learning and exploration to foster the next generation of responsible AI engineers and enthusiasts.## Our Commitment to Ethics and Sustainability\n\nIt's 2024, of course, our commitment to sustainability, ethical responsibility, and equal opportunities shapes every project we take. VX-Unholy reflects these values and our belief in the power of technology to change the world for the better—when guided by and responsibility.\n\nThank you for your interest in VX-Unholy. Together, let's make a positive impact on the world, one intelligent interaction at a time.\n\n The WeMake Team\n\n Ethics Policy ️ Privacy Policy Imprint"
] |
null | transformers |
# Uploaded model
- **Developed by:** CarolLiu999
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
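This repository appears to hold a LoRA adapter (per the repo name), so the quickest way to try it is through Unsloth itself. The snippet below is a minimal inference sketch, assuming the adapter loads on top of the 4-bit Llama-3-8B-Instruct base listed above (Unsloth resolves the base model from the adapter config); the `max_seq_length` value is an arbitrary assumption.

```python
from unsloth import FastLanguageModel

# Minimal inference sketch; assumes this repo is a LoRA adapter for the base above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="CarolLiu999/llama-3-8b-Instruct-bnb-4bit-lora-TWhealthCare",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```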
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | CarolLiu999/llama-3-8b-Instruct-bnb-4bit-lora-TWhealthCare | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:08:42+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: CarolLiu999
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: CarolLiu999\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: CarolLiu999\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
67,
86
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: CarolLiu999\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('izhl/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | izhl/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T09:09:12+00:00 | [] | [] | TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute .
## Usage
'''python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('izhl/sd-class-butterflies-32')
image = pipeline().images[0]
image
| [
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage\n\n'''python\nfrom diffusers import DDPMPipeline\n\npipeline = DDPMPipeline.from_pretrained('izhl/sd-class-butterflies-32')\nimage = pipeline().images[0]\nimage"
] | [
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage\n\n'''python\nfrom diffusers import DDPMPipeline\n\npipeline = DDPMPipeline.from_pretrained('izhl/sd-class-butterflies-32')\nimage = pipeline().images[0]\nimage"
] | [
43,
26,
53
] | [
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .## Usage\n\n'''python\nfrom diffusers import DDPMPipeline\n\npipeline = DDPMPipeline.from_pretrained('izhl/sd-class-butterflies-32')\nimage = pipeline().images[0]\nimage"
] |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | William-Ndoni-254/swahili_asr | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:15:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
42,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Alexander-Schiegl/dummy-model | null | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:15:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
39,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
EEVE-Korean-10.8B-v1.0 - GGUF
- Model creator: https://huggingface.co/yanolja/
- Original model: https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [EEVE-Korean-10.8B-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q2_K.gguf) | Q2_K | 3.77GB |
| [EEVE-Korean-10.8B-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.IQ3_XS.gguf) | IQ3_XS | 4.18GB |
| [EEVE-Korean-10.8B-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.IQ3_S.gguf) | IQ3_S | 4.41GB |
| [EEVE-Korean-10.8B-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q3_K_S.gguf) | Q3_K_S | 4.39GB |
| [EEVE-Korean-10.8B-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.IQ3_M.gguf) | IQ3_M | 4.56GB |
| [EEVE-Korean-10.8B-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q3_K.gguf) | Q3_K | 4.88GB |
| [EEVE-Korean-10.8B-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q3_K_M.gguf) | Q3_K_M | 4.88GB |
| [EEVE-Korean-10.8B-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q3_K_L.gguf) | Q3_K_L | 5.31GB |
| [EEVE-Korean-10.8B-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.IQ4_XS.gguf) | IQ4_XS | 5.47GB |
| [EEVE-Korean-10.8B-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q4_0.gguf) | Q4_0 | 5.7GB |
| [EEVE-Korean-10.8B-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.IQ4_NL.gguf) | IQ4_NL | 5.77GB |
| [EEVE-Korean-10.8B-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q4_K_S.gguf) | Q4_K_S | 5.75GB |
| [EEVE-Korean-10.8B-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q4_K.gguf) | Q4_K | 6.07GB |
| [EEVE-Korean-10.8B-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q4_K_M.gguf) | Q4_K_M | 6.07GB |
| [EEVE-Korean-10.8B-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q4_1.gguf) | Q4_1 | 6.32GB |
| [EEVE-Korean-10.8B-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q5_0.gguf) | Q5_0 | 6.94GB |
| [EEVE-Korean-10.8B-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q5_K_S.gguf) | Q5_K_S | 6.94GB |
| [EEVE-Korean-10.8B-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q5_K.gguf) | Q5_K | 7.13GB |
| [EEVE-Korean-10.8B-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q5_K_M.gguf) | Q5_K_M | 7.13GB |
| [EEVE-Korean-10.8B-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q5_1.gguf) | Q5_1 | 7.56GB |
| [EEVE-Korean-10.8B-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf/blob/main/EEVE-Korean-10.8B-v1.0.Q6_K.gguf) | Q6_K | 8.26GB |
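To try one of these quants from Python, you can download a file and load it with llama-cpp-python. This is a minimal sketch rather than an official usage guide; the Q4_K_M file and context size are arbitrary choices.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one quant from this repo and run a short completion.
model_path = hf_hub_download(
    repo_id="RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf",
    filename="EEVE-Korean-10.8B-v1.0.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("한국의 수도는", max_tokens=32)  # prompt: "The capital of Korea is"
print(out["choices"][0]["text"])
```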
Original model description:
---
license: apache-2.0
base_model: upstage/SOLAR-10.7B-v1.0
tags:
- generated_from_trainer
model-index:
- name: yanolja/EEVE-Korean-10.8B-v1.0
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/>
<p>
# EEVE-Korean-10.8B-v1.0
## Join Our Community on Discord!
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLM is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).
## Our Dedicated Team (Alphabetical Order)
| Research | Engineering | Product Management | UX Design |
|-----------------|-----------------|--------------------|--------------|
| Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi |
| Seungduk Kim | Rifqi Alfi | | |
| Seungtaek Choi | Sanghoon Han | | |
| | Suhyun Kang | | |
## About the Model
This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), specifically fine-tuned on various Korean web-crawled datasets available on HuggingFace. Our approach was to expand the model's understanding of Korean by pre-training the embeddings for new tokens and partially fine-tuning the `lm_head` embeddings for the already existing tokens while preserving the original parameters of the base model.
### Technical Deep Dive
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/EEVE_figure.png" width="100%"/>
<p>
To adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.
This approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.
Our method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.
We leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.
For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
Here’s a simplified code snippet for our key approach:
```python
# number_of_old_tokens is the size of the tokenizer before vocab extension.
# For example, in the case of EEVE-Korean-10.8B-v1.0, number_of_old_tokens is 32000.
def freeze_partial_embedding_hook(grad):
    # Zero the gradient rows of pre-existing tokens so that only the new
    # Korean-token embeddings receive updates.
    grad[:number_of_old_tokens] = 0
    return grad

for name, param in model.named_parameters():
    if ("lm_head" in name or "embed_tokens" in name) and "original" not in name:
        param.requires_grad = True
        if "embed_tokens" in name:
            param.register_hook(freeze_partial_embedding_hook)
    else:
        param.requires_grad = False
```
### Usage and Limitations
Keep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.
### Training Details
Our model’s training was comprehensive and diverse:
- **Vocabulary Expansion:**
We meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.
1. **Initial Tokenizer Training:** We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.
2. **Extraction of New Korean Tokens:** From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer.
3. **Manual Tokenizer Construction:** We then built the target tokenizer, focusing on these new Korean tokens.
4. **Frequency Analysis:** Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.
5. **Refinement of Token List:** We removed tokens appearing fewer than 6,000 times, so that every remaining token would appear often enough to be trained effectively (a code sketch of this counting-and-pruning step follows this list).
6. **Inclusion of Single-Letter Characters:** We counted Korean single-letter characters missing from the target tokenizer and added those that appeared more than 6,000 times.
7. **Iterative Refinement:** We repeated steps 2 to 6 until there were no tokens to drop or add.
8. **Training Bias Towards New Tokens:** We biased the training data toward texts containing the new tokens so that they would be learned effectively.
This rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model.
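As a rough illustration of steps 4 and 5, a counting-and-pruning pass might look like the following minimal sketch; the tokenizer path, corpus file, and variable names are placeholders, not the actual training code:
```python
# A minimal sketch of the frequency analysis (step 4) and refinement (step 5).
from collections import Counter
from transformers import AutoTokenizer

MIN_FREQ = 6_000
tokenizer = AutoTokenizer.from_pretrained("path/to/target-tokenizer")  # hypothetical

counts = Counter()
with open("korean_corpus.txt", encoding="utf-8") as f:  # hypothetical corpus shard
    for line in f:
        counts.update(tokenizer.tokenize(line))

# Keep only tokens frequent enough to be trained effectively later.
kept_tokens = [tok for tok, c in counts.items() if c >= MIN_FREQ]
```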
## Citation
```
@misc{kim2024efficient,
title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models},
author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
year={2024},
eprint={2402.14714},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-gguf | null | [
"gguf",
"arxiv:2402.14714",
"region:us"
] | null | 2024-05-02T09:15:35+00:00 | [
"2402.14714"
] | [] | TAGS
#gguf #arxiv-2402.14714 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
EEVE-Korean-10.8B-v1.0 - GGUF
* Model creator: URL
* Original model: URL
| Name | Quant method | Size |
|------|--------------|------|
| EEVE-Korean-10.8B-v1.0.Q2\_K.gguf | Q2\_K | 3.77GB |
| EEVE-Korean-10.8B-v1.0.IQ3\_XS.gguf | IQ3\_XS | 4.18GB |
| EEVE-Korean-10.8B-v1.0.IQ3\_S.gguf | IQ3\_S | 4.41GB |
| EEVE-Korean-10.8B-v1.0.Q3\_K\_S.gguf | Q3\_K\_S | 4.39GB |
| EEVE-Korean-10.8B-v1.0.IQ3\_M.gguf | IQ3\_M | 4.56GB |
| EEVE-Korean-10.8B-v1.0.Q3\_K.gguf | Q3\_K | 4.88GB |
| EEVE-Korean-10.8B-v1.0.Q3\_K\_M.gguf | Q3\_K\_M | 4.88GB |
| EEVE-Korean-10.8B-v1.0.Q3\_K\_L.gguf | Q3\_K\_L | 5.31GB |
| EEVE-Korean-10.8B-v1.0.IQ4\_XS.gguf | IQ4\_XS | 5.47GB |
| EEVE-Korean-10.8B-v1.0.Q4\_0.gguf | Q4\_0 | 5.7GB |
| EEVE-Korean-10.8B-v1.0.IQ4\_NL.gguf | IQ4\_NL | 5.77GB |
| EEVE-Korean-10.8B-v1.0.Q4\_K\_S.gguf | Q4\_K\_S | 5.75GB |
| EEVE-Korean-10.8B-v1.0.Q4\_K.gguf | Q4\_K | 6.07GB |
| EEVE-Korean-10.8B-v1.0.Q4\_K\_M.gguf | Q4\_K\_M | 6.07GB |
| EEVE-Korean-10.8B-v1.0.Q4\_1.gguf | Q4\_1 | 6.32GB |
| EEVE-Korean-10.8B-v1.0.Q5\_0.gguf | Q5\_0 | 6.94GB |
| EEVE-Korean-10.8B-v1.0.Q5\_K\_S.gguf | Q5\_K\_S | 6.94GB |
| EEVE-Korean-10.8B-v1.0.Q5\_K.gguf | Q5\_K | 7.13GB |
| EEVE-Korean-10.8B-v1.0.Q5\_K\_M.gguf | Q5\_K\_M | 7.13GB |
| EEVE-Korean-10.8B-v1.0.Q5\_1.gguf | Q5\_1 | 7.56GB |
| EEVE-Korean-10.8B-v1.0.Q6\_K.gguf | Q6\_K | 8.26GB |
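For local inference with one of these quants, a minimal llama-cpp-python sketch might look like the following; the file path, context size, and prompt are assumptions, not documentation from this repository:
```python
# A minimal sketch, assuming the Q4_K_M file has been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="EEVE-Korean-10.8B-v1.0.Q4_K_M.gguf", n_ctx=4096)
out = llm("한국의 수도는 어디인가요?", max_tokens=64)
print(out["choices"][0]["text"])
```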
Original model description:
---------------------------
license: apache-2.0
base\_model: upstage/SOLAR-10.7B-v1.0
tags:
* generated\_from\_trainer
model-index:
* name: yanolja/EEVE-Korean-10.8B-v1.0
results: []
---
<img src="URL" alt="Built with Axolotl" width="200" height="32"/>

EEVE-Korean-10.8B-v1.0
======================
Join Our Community on Discord!
------------------------------
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used on this server. The landscape of LLMs is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: Discord Link.
Our Dedicated Team (Alphabetical Order)
---------------------------------------
About the Model
---------------
This model is a Korean vocabulary-extended version of upstage/SOLAR-10.7B-v1.0, specifically fine-tuned on various Korean web-crawled datasets available on HuggingFace. Our approach was to expand the model's understanding of Korean by pre-training the embeddings for new tokens and partially fine-tuning the 'lm\_head' embeddings for the already existing tokens while preserving the original parameters of the base model.
### Technical Deep Dive

To adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.
This approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.
Our method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.
We leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.
For more details, please refer to our technical report: Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models.
Here’s a simplified code example of our key approach:
### Usage and Limitations
Keep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.
### Training Details
Our model’s training was comprehensive and diverse:
* Vocabulary Expansion:
We meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.
1. Initial Tokenizer Training: We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.
2. Extraction of New Korean Tokens: From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer.
3. Manual Tokenizer Construction: We then built the target tokenizer, focusing on these new Korean tokens.
4. Frequency Analysis: Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.
5. Refinement of Token List: We removed tokens appearing fewer than 6,000 times, so that every remaining token would appear often enough to be trained effectively.
6. Inclusion of Single-Letter Characters: We counted Korean single-letter characters missing from the target tokenizer and added those that appeared more than 6,000 times.
7. Iterative Refinement: We repeated steps 2 to 6 until there were no tokens to drop or add.
8. Training Bias Towards New Tokens: We biased the training data toward texts containing the new tokens so that they would be learned effectively.
This rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model.
| [
"### Technical Deep Dive\n\n\n\n\n\nTo adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.\nThis approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.\nOur method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.\nWe leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.\n\n\nFor more details, please refer to our technical report: Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models.\n\n\nHere’s an simplified code for our key approach:",
"### Usage and Limitations\n\n\nKeep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.",
"### Training Details\n\n\nOur model’s training was comprehensive and diverse:\n\n\n* Vocabulary Expansion:\nWe meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.\n\n\n\t1. Initial Tokenizer Training: We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.\n\t2. Extraction of New Korean Tokens: From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer.\n\t3. Manual Tokenizer Construction: We then built the target tokenizer, focusing on these new Korean tokens.\n\t4. Frequency Analysis: Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.\n\t5. Refinement of Token List: We removed tokens appearing less than 6,000 times, ensuring to secure enough tokens to train models later.\n\t6. Inclusion of Single-Letter Characters: Counted missing Korean single-letter characters and added them to the target tokenizer that appeared more than 6,000 times.\n\t7. Iterative Refinement: We repeated steps 2 to 6 until there were no tokens to drop or add.\n\t8. Training Bias Towards New Tokens: Our training data was biased to include more texts with new tokens, for effective learning.\n\n\nThis rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model."
] | [
"TAGS\n#gguf #arxiv-2402.14714 #region-us \n",
"### Technical Deep Dive\n\n\n\n\n\nTo adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.\nThis approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.\nOur method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.\nWe leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.\n\n\nFor more details, please refer to our technical report: Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models.\n\n\nHere’s an simplified code for our key approach:",
"### Usage and Limitations\n\n\nKeep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.",
"### Training Details\n\n\nOur model’s training was comprehensive and diverse:\n\n\n* Vocabulary Expansion:\nWe meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.\n\n\n\t1. Initial Tokenizer Training: We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.\n\t2. Extraction of New Korean Tokens: From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer.\n\t3. Manual Tokenizer Construction: We then built the target tokenizer, focusing on these new Korean tokens.\n\t4. Frequency Analysis: Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.\n\t5. Refinement of Token List: We removed tokens appearing less than 6,000 times, ensuring to secure enough tokens to train models later.\n\t6. Inclusion of Single-Letter Characters: Counted missing Korean single-letter characters and added them to the target tokenizer that appeared more than 6,000 times.\n\t7. Iterative Refinement: We repeated steps 2 to 6 until there were no tokens to drop or add.\n\t8. Training Bias Towards New Tokens: Our training data was biased to include more texts with new tokens, for effective learning.\n\n\nThis rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model."
] | [
19,
164,
45,
312
] | [
"TAGS\n#gguf #arxiv-2402.14714 #region-us \n### Technical Deep Dive\n\n\n\n\n\nTo adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.\nThis approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.\nOur method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.\nWe leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.\n\n\nFor more details, please refer to our technical report: Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models.\n\n\nHere’s an simplified code for our key approach:### Usage and Limitations\n\n\nKeep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.### Training Details\n\n\nOur model’s training was comprehensive and diverse:\n\n\n* Vocabulary Expansion:\nWe meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.\n\n\n\t1. Initial Tokenizer Training: We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.\n\t2. Extraction of New Korean Tokens: From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer.\n\t3. Manual Tokenizer Construction: We then built the target tokenizer, focusing on these new Korean tokens.\n\t4. Frequency Analysis: Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.\n\t5. Refinement of Token List: We removed tokens appearing less than 6,000 times, ensuring to secure enough tokens to train models later.\n\t6. Inclusion of Single-Letter Characters: Counted missing Korean single-letter characters and added them to the target tokenizer that appeared more than 6,000 times.\n\t7. Iterative Refinement: We repeated steps 2 to 6 until there were no tokens to drop or add.\n\t8. Training Bias Towards New Tokens: Our training data was biased to include more texts with new tokens, for effective learning.\n\n\nThis rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model."
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00022578664365177788
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
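For reference, the list above maps onto 🤗 `TrainingArguments` roughly as in this hedged sketch; `output_dir` and the surrounding `Trainer` wiring are assumptions, and the stated Adam betas and epsilon are the library defaults:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="best_model",               # assumed
    learning_rate=0.00022578664365177788,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    lr_scheduler_type="linear",
    num_train_epochs=7,
    fp16=True,                             # Native AMP mixed precision
)
```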
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "best_model", "results": []}]} | frayline/best_model | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:16:01+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# best_model
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00022578664365177788
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# best_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00022578664365177788\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# best_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00022578664365177788\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
53,
22,
7,
9,
9,
4,
112,
40
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# best_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00022578664365177788\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azhara001/donut-base-demo-final_3e-05_AdamW | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:16:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
35,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | diffusers |
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/sayakpaul/diffusion-orpo-lora-sdxl/runs/4egafx8q).
| {} | sayakpaul/sdxl-orpo-large-beta_orpo-0.005-beta_inner-500-lr-5e-7-steps-5000-no-warmup | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-05-02T09:17:08+00:00 | [] | [] | TAGS
#diffusers #safetensors #region-us
|
More information on all the CLI arguments and the environment is available on your 'wandb' run page.
| [] | [
"TAGS\n#diffusers #safetensors #region-us \n"
] | [
12
] | [
"TAGS\n#diffusers #safetensors #region-us \n"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Himanshu167/AAI_v1 | null | [
"transformers",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:17:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #onnx #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #onnx #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
49,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #onnx #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The snippet below is a minimal loading sketch; the checkpoint filename is an assumption and may need adjusting to the actual file in this repository.
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub (filename is assumed).
checkpoint = load_from_hub(repo_id="Anderlaxe/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
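A short, hedged rollout with Gymnasium could then look like this (the environment id and package choice are assumptions about your setup):
```python
# Hedged sketch: run the loaded policy for one episode.
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```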
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "251.27 +/- 21.80", "name": "mean_reward", "verified": false}]}]}]} | Anderlaxe/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-02T09:19:16+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
31,
35,
17
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | transformers |
# Uploaded model
- **Developed by:** klaylouis1932
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
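A hedged loading sketch with Unsloth (the sequence length is a guess, not from this repository):
```python
# A minimal sketch, assuming standard Unsloth 4-bit loading.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="klaylouis1932/llama-3-8b-english-teacher",
    max_seq_length=2048,   # assumed
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```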
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | klaylouis1932/llama-3-8b-english-teacher | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:20:15+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: klaylouis1932
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: klaylouis1932\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: klaylouis1932\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
64,
83
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: klaylouis1932\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azhara001/donut-base-demo-new-1e-05_Adam_1876 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:20:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
35,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** armanbabayan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
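A minimal loading sketch (hedged: it assumes the standard Unsloth `FastLanguageModel` API and that the adapter is published under this row's model id; the sequence length is an assumption):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="armanbabayan/Llama2_Immigration_Chat_lora",  # id from this row's metadata
    max_seq_length=2048,  # assumed; not stated in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model to fast inference mode
```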
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-2-7b-chat-bnb-4bit"} | armanbabayan/Llama2_Immigration_Chat_lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:20:32+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: armanbabayan
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
66,
84
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cancerfarore/roberta-base-CancerFarore-Model
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5226
- Train End Logits Accuracy: 0.8429
- Train Start Logits Accuracy: 0.8179
- Validation Loss: 0.8662
- Validation End Logits Accuracy: 0.7675
- Validation Start Logits Accuracy: 0.7540
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 32952, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
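For reference, the serialized optimizer dict above corresponds to the following Keras construction; this is a hedged sketch, and only the listed values come from this card:

```python
import tensorflow as tf

# Rebuild the learning-rate schedule described in the optimizer config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=32952,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
# Adam with the listed betas and epsilon.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```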
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0834 | 0.6813 | 0.6618 | 0.8721 | 0.7369 | 0.7251 | 0 |
| 0.7019 | 0.7919 | 0.7665 | 0.8038 | 0.7645 | 0.7510 | 1 |
| 0.5226 | 0.8429 | 0.8179 | 0.8662 | 0.7675 | 0.7540 | 2 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "cancerfarore/roberta-base-CancerFarore-Model", "results": []}]} | cancerfarore/roberta-base-CancerFarore-Model | null | [
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:20:53+00:00 | [] | [] | TAGS
#transformers #tf #roberta #question-answering #generated_from_keras_callback #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us
| cancerfarore/roberta-base-CancerFarore-Model
============================================
This model is a fine-tuned version of FacebookAI/roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.5226
* Train End Logits Accuracy: 0.8429
* Train Start Logits Accuracy: 0.8179
* Validation Loss: 0.8662
* Validation End Logits Accuracy: 0.7675
* Validation Start Logits Accuracy: 0.7540
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 32952, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.1
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 32952, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #roberta #question-answering #generated_from_keras_callback #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 32952, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
46,
291,
5,
38
] | [
"TAGS\n#transformers #tf #roberta #question-answering #generated_from_keras_callback #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 32952, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-tech-decoder-PLOS
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
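As a hedged sketch, the hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as follows (the output directory is an assumption; only the listed values come from this card):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="BioNLP-tech-decoder-PLOS",  # assumed name, not stated in the card
    learning_rate=1.3739167643078955e-06,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```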
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
| {"tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "BioNLP-tech-decoder-PLOS", "results": []}]} | dtorber/BioNLP-tech-decoder-PLOS | null | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:21:07+00:00 | [] | [] | TAGS
#transformers #safetensors #led #text2text-generation #summarization #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# BioNLP-tech-decoder-PLOS
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
| [
"# BioNLP-tech-decoder-PLOS\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.3739167643078955e-06\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 1.13.1+cu117\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #led #text2text-generation #summarization #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# BioNLP-tech-decoder-PLOS\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.3739167643078955e-06\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 1.13.1+cu117\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] | [
39,
24,
7,
9,
9,
4,
122,
5,
44
] | [
"TAGS\n#transformers #safetensors #led #text2text-generation #summarization #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n# BioNLP-tech-decoder-PLOS\n\nThis model was trained from scratch on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.3739167643078955e-06\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 1.13.1+cu117\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
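A minimal usage sketch (hedged: it assumes the checkpoint published under this row's model id includes a tokenizer; the input text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Vamshi7981/tmp_trainer")
article = "Long input article text goes here ..."  # placeholder input
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])
```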
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "facebook/bart-large-cnn", "model-index": [{"name": "tmp_trainer", "results": []}]} | Vamshi7981/tmp_trainer | null | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:21:07+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# tmp_trainer
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
54,
27,
7,
9,
9,
4,
95,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
image-segmentation | transformers |
# BRIA Background Removal v1.4 Model Card
RMBG v1.4 is our state-of-the-art background removal model, designed to effectively separate foreground from background in a range of
categories and image types. This model has been trained on a carefully selected dataset, which includes:
general stock images, e-commerce, gaming, and advertising content, making it suitable for commercial use cases powering enterprise content creation at scale.
The accuracy, efficiency, and versatility currently rival leading source-available models.
It is ideal where content safety, legally licensed datasets, and bias mitigation are paramount.
Developed by BRIA AI, RMBG v1.4 is available as a source-available model for non-commercial use.
[CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-RMBG-1.4)

### Model Description
- **Developed by:** [BRIA AI](https://bria.ai/)
- **Model type:** Background Removal
- **License:** [bria-rmbg-1.4](https://bria.ai/bria-huggingface-model-license-agreement/)
- The model is released under a Creative Commons license for non-commercial use.
- Commercial use is subject to a commercial agreement with BRIA. [Contact Us](https://bria.ai/contact-us) for more information.
- **Model Description:** BRIA RMBG 1.4 is a saliency segmentation model trained exclusively on a professional-grade dataset.
- **BRIA:** Resources for more information: [BRIA AI](https://bria.ai/)
## Training data
Bria-RMBG model was trained with over 12,000 high-quality, high-resolution, manually labeled (pixel-wise accuracy), fully licensed images.
Our benchmark included balanced gender, balanced ethnicity, and people with different types of disabilities.
For clarity, we provide our data distribution according to different categories, demonstrating our model’s versatility.
### Distribution of images:
| Category | Distribution |
| -----------------------------------| -----------------------------------:|
| Objects only | 45.11% |
| People with objects/animals | 25.24% |
| People only | 17.35% |
| People/objects/animals with text   | 8.52%                                |
| Text only | 2.52% |
| Animals only | 1.89% |
| Category | Distribution |
| -----------------------------------| -----------------------------------------:|
| Photorealistic | 87.70% |
| Non-Photorealistic | 12.30% |
| Category | Distribution |
| -----------------------------------| -----------------------------------:|
| Non Solid Background | 52.05% |
| Solid Background                    | 47.95%                               |
| Category | Distribution |
| -----------------------------------| -----------------------------------:|
| Single main foreground object | 51.42% |
| Multiple objects in the foreground | 48.58% |
## Qualitative Evaluation

## Architecture
RMBG v1.4 is developed on the [IS-Net](https://github.com/xuebinqin/DIS) enhanced with our unique training scheme and proprietary dataset.
These modifications significantly improve the model’s accuracy and effectiveness in diverse image-processing scenarios.
## Installation
```bash
pip install -qr https://huggingface.co/briaai/RMBG-1.4/resolve/main/requirements.txt
```
## Usage
Either load the pipeline
```python
from transformers import pipeline
image_path = "https://farm5.staticflickr.com/4007/4322154488_997e69e4cf_z.jpg"
pipe = pipeline("image-segmentation", model="briaai/RMBG-1.4", trust_remote_code=True)
pillow_mask = pipe(image_path, return_mask=True)  # outputs a pillow mask
pillow_image = pipe(image_path)  # applies mask on input and returns a pillow image
```
Or load the model
```python
import numpy as np
import torch
import torch.nn.functional as F
from skimage import io
from PIL import Image
from transformers import AutoModelForImageSegmentation
from torchvision.transforms.functional import normalize

model = AutoModelForImageSegmentation.from_pretrained("briaai/RMBG-1.4", trust_remote_code=True)
def preprocess_image(im: np.ndarray, model_input_size: list) -> torch.Tensor:
if len(im.shape) < 3:
im = im[:, :, np.newaxis]
# orig_im_size=im.shape[0:2]
im_tensor = torch.tensor(im, dtype=torch.float32).permute(2,0,1)
im_tensor = F.interpolate(torch.unsqueeze(im_tensor,0), size=model_input_size, mode='bilinear')
image = torch.divide(im_tensor,255.0)
image = normalize(image,[0.5,0.5,0.5],[1.0,1.0,1.0])
return image
def postprocess_image(result: torch.Tensor, im_size: list) -> np.ndarray:
    result = torch.squeeze(F.interpolate(result, size=im_size, mode='bilinear'), 0)
ma = torch.max(result)
mi = torch.min(result)
result = (result-mi)/(ma-mi)
im_array = (result*255).permute(1,2,0).cpu().data.numpy().astype(np.uint8)
im_array = np.squeeze(im_array)
return im_array
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
# prepare input
image_path = "https://farm5.staticflickr.com/4007/4322154488_997e69e4cf_z.jpg"
orig_im = io.imread(image_path)
orig_im_size = orig_im.shape[0:2]
model_input_size = [1024, 1024]  # the model's expected input resolution
image = preprocess_image(orig_im, model_input_size).to(device)
# inference
result = model(image)
# post process
result_image = postprocess_image(result[0][0], orig_im_size)
# save result
pil_im = Image.fromarray(result_image)
no_bg_image = Image.new("RGBA", pil_im.size, (0,0,0,0))
orig_image = Image.fromarray(orig_im)  # reuse the array loaded above; Image.open cannot read a URL directly
no_bg_image.paste(orig_image, mask=pil_im)
no_bg_image.save("no_bg_image.png")
```
| {"license": "other", "tags": ["remove background", "background", "background-removal", "Pytorch", "vision", "legal liability", "transformers"], "license_name": "bria-rmbg-1.4", "license_link": "https://bria.ai/bria-huggingface-model-license-agreement/", "pipeline_tag": "image-segmentation", "extra_gated_prompt": "This model weights by BRIA AI can be obtained after a commercial license is agreed upon. Fill in the form below and we reach out to you.", "extra_gated_fields": {"Name": "text", "Company/Org name": "text", "Org Type (Early/Growth Startup, Enterprise, Academy)": "text", "Role": "text", "Country": "text", "Email": "text", "By submitting this form, I agree to BRIA\u2019s Privacy policy and Terms & conditions, see links below": "checkbox"}} | wide-video/rmbg-v1.0.0 | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"SegformerForSemanticSegmentation",
"image-segmentation",
"remove background",
"background",
"background-removal",
"Pytorch",
"vision",
"legal liability",
"custom_code",
"license:other",
"region:us"
] | null | 2024-05-02T09:21:21+00:00 | [] | [] | TAGS
#transformers #pytorch #onnx #safetensors #SegformerForSemanticSegmentation #image-segmentation #remove background #background #background-removal #Pytorch #vision #legal liability #custom_code #license-other #region-us
| BRIA Background Removal v1.4 Model Card
=======================================
RMBG v1.4 is our state-of-the-art background removal model, designed to effectively separate foreground from background in a range of
categories and image types. This model has been trained on a carefully selected dataset, which includes:
general stock images, e-commerce, gaming, and advertising content, making it suitable for commercial use cases powering enterprise content creation at scale.
The accuracy, efficiency, and versatility currently rival leading source-available models.
It is ideal where content safety, legally licensed datasets, and bias mitigation are paramount.
Developed by BRIA AI, RMBG v1.4 is available as a source-available model for non-commercial use.
CLICK HERE FOR A DEMO
!examples
### Model Description
* Developed by: BRIA AI
* Model type: Background Removal
* License: bria-rmbg-1.4
+ The model is released under a Creative Commons license for non-commercial use.
+ Commercial use is subject to a commercial agreement with BRIA. Contact Us for more information.
* Model Description: BRIA RMBG 1.4 is a saliency segmentation model trained exclusively on a professional-grade dataset.
* BRIA: Resources for more information: BRIA AI
Training data
-------------
Bria-RMBG model was trained with over 12,000 high-quality, high-resolution, manually labeled (pixel-wise accuracy), fully licensed images.
Our benchmark included balanced gender, balanced ethnicity, and people with different types of disabilities.
For clarity, we provide our data distribution according to different categories, demonstrating our model’s versatility.
### Distribution of images:
Qualitative Evaluation
----------------------
!examples
Architecture
------------
RMBG v1.4 is developed on the IS-Net enhanced with our unique training scheme and proprietary dataset.
These modifications significantly improve the model’s accuracy and effectiveness in diverse image-processing scenarios.
Installation
------------
Usage
-----
Either load the pipeline
Or load the model
| [
"### Model Description\n\n\n* Developed by: BRIA AI\n* Model type: Background Removal\n* License: bria-rmbg-1.4\n\n\n\t+ The model is released under a Creative Commons license for non-commercial use.\n\t+ Commercial use is subject to a commercial agreement with BRIA. Contact Us for more information.\n* Model Description: BRIA RMBG 1.4 is a saliency segmentation model trained exclusively on a professional-grade dataset.\n* BRIA: Resources for more information: BRIA AI\n\n\nTraining data\n-------------\n\n\nBria-RMBG model was trained with over 12,000 high-quality, high-resolution, manually labeled (pixel-wise accuracy), fully licensed images.\nOur benchmark included balanced gender, balanced ethnicity, and people with different types of disabilities.\nFor clarity, we provide our data distribution according to different categories, demonstrating our model’s versatility.",
"### Distribution of images:\n\n\n\n\n\n\nQualitative Evaluation\n----------------------\n\n\n!examples\n\n\nArchitecture\n------------\n\n\nRMBG v1.4 is developed on the IS-Net enhanced with our unique training scheme and proprietary dataset.\nThese modifications significantly improve the model’s accuracy and effectiveness in diverse image-processing scenarios.\n\n\nInstallation\n------------\n\n\nUsage\n-----\n\n\nEither load the pipeline\n\n\nOr load the model"
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #SegformerForSemanticSegmentation #image-segmentation #remove background #background #background-removal #Pytorch #vision #legal liability #custom_code #license-other #region-us \n",
"### Model Description\n\n\n* Developed by: BRIA AI\n* Model type: Background Removal\n* License: bria-rmbg-1.4\n\n\n\t+ The model is released under a Creative Commons license for non-commercial use.\n\t+ Commercial use is subject to a commercial agreement with BRIA. Contact Us for more information.\n* Model Description: BRIA RMBG 1.4 is a saliency segmentation model trained exclusively on a professional-grade dataset.\n* BRIA: Resources for more information: BRIA AI\n\n\nTraining data\n-------------\n\n\nBria-RMBG model was trained with over 12,000 high-quality, high-resolution, manually labeled (pixel-wise accuracy), fully licensed images.\nOur benchmark included balanced gender, balanced ethnicity, and people with different types of disabilities.\nFor clarity, we provide our data distribution according to different categories, demonstrating our model’s versatility.",
"### Distribution of images:\n\n\n\n\n\n\nQualitative Evaluation\n----------------------\n\n\n!examples\n\n\nArchitecture\n------------\n\n\nRMBG v1.4 is developed on the IS-Net enhanced with our unique training scheme and proprietary dataset.\nThese modifications significantly improve the model’s accuracy and effectiveness in diverse image-processing scenarios.\n\n\nInstallation\n------------\n\n\nUsage\n-----\n\n\nEither load the pipeline\n\n\nOr load the model"
] | [
63,
197,
118
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #SegformerForSemanticSegmentation #image-segmentation #remove background #background #background-removal #Pytorch #vision #legal liability #custom_code #license-other #region-us \n### Model Description\n\n\n* Developed by: BRIA AI\n* Model type: Background Removal\n* License: bria-rmbg-1.4\n\n\n\t+ The model is released under a Creative Commons license for non-commercial use.\n\t+ Commercial use is subject to a commercial agreement with BRIA. Contact Us for more information.\n* Model Description: BRIA RMBG 1.4 is a saliency segmentation model trained exclusively on a professional-grade dataset.\n* BRIA: Resources for more information: BRIA AI\n\n\nTraining data\n-------------\n\n\nBria-RMBG model was trained with over 12,000 high-quality, high-resolution, manually labeled (pixel-wise accuracy), fully licensed images.\nOur benchmark included balanced gender, balanced ethnicity, and people with different types of disabilities.\nFor clarity, we provide our data distribution according to different categories, demonstrating our model’s versatility.### Distribution of images:\n\n\n\n\n\n\nQualitative Evaluation\n----------------------\n\n\n!examples\n\n\nArchitecture\n------------\n\n\nRMBG v1.4 is developed on the IS-Net enhanced with our unique training scheme and proprietary dataset.\nThese modifications significantly improve the model’s accuracy and effectiveness in diverse image-processing scenarios.\n\n\nInstallation\n------------\n\n\nUsage\n-----\n\n\nEither load the pipeline\n\n\nOr load the model"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "facebook/bart-large-cnn", "model-index": [{"name": "tmp_trainer", "results": []}]} | sudhanshusaxena/tmp_trainer | null | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:21:29+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# tmp_trainer
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
54,
27,
7,
9,
9,
4,
95,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** armanbabayan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
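A minimal loading sketch with plain `transformers` (hedged: it assumes the merged 4-bit checkpoint loads directly with `device_map="auto"`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "armanbabayan/Llama2_Immigration_Chat_4bit"  # id from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```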
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "dpo"], "base_model": "unsloth/llama-2-7b-chat-bnb-4bit"} | armanbabayan/Llama2_Immigration_Chat_4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:unsloth/llama-2-7b-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-02T09:24:08+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: armanbabayan
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
85,
84
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text2text-generation | transformers | ## Eval results on WikiLarge
We obtain the following results on the `validation` and `test` sets of WikiLarge:
| Set | SARI | BLEU |
|------------|-------|-------|
| validation | 44.11 | 27.13 |
| test | 38.8 | 31.35 |
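For reference, SARI can be computed with the `evaluate` library; a hedged sketch with placeholder sentences (not WikiLarge data):

```python
import evaluate

sari = evaluate.load("sari")
sources = ["About 95 species are currently accepted."]       # complex source sentence
predictions = ["About 95 species are currently known."]      # system output
references = [["About 95 species are currently known.",
               "About 95 species are now accepted."]]        # reference simplifications
print(sari.compute(sources=sources, predictions=predictions, references=references))
```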
| {"language": ["en"], "tags": ["sentence-simplification"], "metrics": ["sari", "bleu"], "pipeline_tag": "text2text-generation"} | waboucay/bart-large-simplification-wikilarge-original-penalized | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"sentence-simplification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:24:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #bart #text2text-generation #sentence-simplification #en #autotrain_compatible #endpoints_compatible #region-us
| Eval results on WikiLarge
-------------------------
We obtain the following results on the validation and test sets of WikiLarge:
Set: validation, SARI: 44.11, BLEU: 27.13
Set: test, SARI: 38.8, BLEU: 31.35
| [] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #sentence-simplification #en #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
37
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #sentence-simplification #en #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_total_InstructionN1_SOAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_total_InstructionN1_SOAPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_total_InstructionN1_SOAPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:24:53+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_total_InstructionN1_SOAPL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_total_InstructionN1_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_total_InstructionN1_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
62,
47,
7,
9,
9,
4,
102,
5,
40
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# CS505_COQE_viT5_total_InstructionN1_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-real-boat-dataset
This model is a fine-tuned version of [zhuchi76/detr-resnet-50-finetuned-boat-dataset](https://huggingface.co/zhuchi76/detr-resnet-50-finetuned-boat-dataset) on the boat_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
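A minimal inference sketch (hedged: the image path is a placeholder and the checkpoint id comes from this row's metadata):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

model_id = "leowang707/detr-resnet-50-finetuned-real-boat-dataset"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForObjectDetection.from_pretrained(model_id)

image = Image.open("boat.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.5 confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```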
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["boat_dataset"], "base_model": "zhuchi76/detr-resnet-50-finetuned-boat-dataset", "model-index": [{"name": "detr-resnet-50-finetuned-real-boat-dataset", "results": []}]} | leowang707/detr-resnet-50-finetuned-real-boat-dataset | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:boat_dataset",
"base_model:zhuchi76/detr-resnet-50-finetuned-boat-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:25:18+00:00 | [] | [] | TAGS
#transformers #safetensors #detr #object-detection #generated_from_trainer #dataset-boat_dataset #base_model-zhuchi76/detr-resnet-50-finetuned-boat-dataset #license-apache-2.0 #endpoints_compatible #region-us
|
# detr-resnet-50-finetuned-real-boat-dataset
This model is a fine-tuned version of zhuchi76/detr-resnet-50-finetuned-boat-dataset on the boat_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# detr-resnet-50-finetuned-real-boat-dataset\n\nThis model is a fine-tuned version of zhuchi76/detr-resnet-50-finetuned-boat-dataset on the boat_dataset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #detr #object-detection #generated_from_trainer #dataset-boat_dataset #base_model-zhuchi76/detr-resnet-50-finetuned-boat-dataset #license-apache-2.0 #endpoints_compatible #region-us \n",
"# detr-resnet-50-finetuned-real-boat-dataset\n\nThis model is a fine-tuned version of zhuchi76/detr-resnet-50-finetuned-boat-dataset on the boat_dataset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
71,
58,
7,
9,
9,
4,
102,
5,
44
] | [
"TAGS\n#transformers #safetensors #detr #object-detection #generated_from_trainer #dataset-boat_dataset #base_model-zhuchi76/detr-resnet-50-finetuned-boat-dataset #license-apache-2.0 #endpoints_compatible #region-us \n# detr-resnet-50-finetuned-real-boat-dataset\n\nThis model is a fine-tuned version of zhuchi76/detr-resnet-50-finetuned-boat-dataset on the boat_dataset dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-VIGGO-qlora
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4889
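Since this repository contains a PEFT (QLoRA) adapter rather than full weights, a hedged loading sketch with `peft` might look like the following; the prompt string is only an illustration, and access to the gated base model must be granted first:

```python
# Hedged sketch: attach the LoRA adapter to the base model with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "asprenger/Meta-Llama-3-8B-VIGGO-qlora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Illustrative prompt only; the card does not document the expected format.
inputs = tokenizer("Generate a VIGGO meaning representation:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```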
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
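For readers reconstructing the run, these settings map roughly onto a `TrainingArguments` sketch like the one below (illustrative only; the original training script is not included in the card):

```python
# Rough reconstruction of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Meta-Llama-3-8B-VIGGO-qlora",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 x 4 = effective train batch size 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
    seed=42,
)
```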
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5268 | 0.99 | 25 | 0.5043 |
| 0.446 | 1.98 | 50 | 0.4889 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "Meta-Llama-3-8B-VIGGO-qlora", "results": []}]} | asprenger/Meta-Llama-3-8B-VIGGO-qlora | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2024-05-02T09:26:36+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us
| Meta-Llama-3-8B-VIGGO-qlora
===========================
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4889
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_ratio: 0.03
* num\_epochs: 2
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2"
] | [
55,
142,
5,
52
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
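The card leaves this section as a placeholder. As a hedged, generic sketch for a Hub-hosted Llama-style causal LM (inferred only from the repository tags), loading could look like:

```python
# Generic loading sketch; the repo id comes from the card metadata,
# everything else (device placement, prompt) is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/rcxc2xb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```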
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/rcxc2xb | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:26:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers | # OpenVINO IR model with int8 quantization of Hermes-2-Pro-Llama-3-8B
Model definition for LocalAI:
```
name: hermes-2-pro-llama3
backend: transformers
parameters:
model: fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
context_size: 8192
type: OVModelForCausalLM
template:
use_tokenizer_template: true
```
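Outside of LocalAI, a hedged sketch of loading the same int8 IR directly with `optimum-intel` (the `OVModelForCausalLM` class named in the config above) might look like:

```python
# Load the int8 OpenVINO IR with optimum-intel; assumes the repo ships
# its own chat template alongside the tokenizer.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```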
# Hermes 2 Pro - Llama-3 8B

## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Example Outputs
### Ask for a structured JSON output:

### Write the plot for a story where anime became real life:

### Coding Assistance

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
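For example, a minimal sketch reusing the `messages`, `tokenizer`, and `model` names from the snippet above:

```python
# Same messages as above, now formatted for generation: the template also
# appends "<|im_start|>assistant\n" so the model answers as the assistant.
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
model.generate(gen_input)
```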
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
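As a rough illustration of that parsing step (this is not the official Hermes-Function-Calling code; the regex, dispatch table, and stand-in tool are assumptions), the flow could look like:

```python
# Toy parser for <tool_call>...</tool_call> blocks; the real implementation
# lives in the Hermes-Function-Calling repo linked above.
import json
import re

def get_stock_fundamentals(symbol: str) -> dict:  # hypothetical stand-in tool
    return {"symbol": symbol, "pe_ratio": 49.6}

TOOLS = {"get_stock_fundamentals": get_stock_fundamentals}

def run_tool_calls(model_output: str) -> list[str]:
    """Extract each <tool_call> JSON blob, run the named tool, and wrap the
    result in a <tool_response> block ready to send back as the `tool` role."""
    responses = []
    for blob in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", model_output, re.DOTALL):
        call = json.loads(blob)
        result = TOOLS[call["name"]](**call["arguments"])
        payload = json.dumps({"name": call["name"], "content": result})
        responses.append(f"<tool_response>\n{payload}\n</tool_response>")
    return responses

output = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
print(run_tool_calls(output)[0])
```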
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON schema to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
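As a small hedged sketch of assembling that system prompt from a pydantic model (assuming pydantic v2; the `Character` class is a made-up example, and the card's own tooling is the `jsonmode.py` script linked above):

```python
# Build the JSON-mode system prompt from a pydantic schema (illustrative only).
import json
from pydantic import BaseModel

class Character(BaseModel):  # hypothetical schema
    name: str
    age: int

schema = json.dumps(Character.model_json_schema())
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
print(system_prompt)
```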
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5520|± |0.0145|
| | |acc_norm|0.5887|± |0.0144|
|arc_easy | 0|acc |0.8350|± |0.0076|
| | |acc_norm|0.8123|± |0.0080|
|boolq | 1|acc |0.8584|± |0.0061|
|hellaswag | 0|acc |0.6265|± |0.0048|
| | |acc_norm|0.8053|± |0.0040|
|openbookqa | 0|acc |0.3800|± |0.0217|
| | |acc_norm|0.4580|± |0.0223|
|piqa | 0|acc |0.8003|± |0.0093|
| | |acc_norm|0.8118|± |0.0091|
|winogrande | 0|acc |0.7490|± |0.0122|
```
Average: 72.62
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2520|± |0.0273|
| | |acc_norm|0.2559|± |0.0274|
|agieval_logiqa_en | 0|acc |0.3548|± |0.0188|
| | |acc_norm|0.3625|± |0.0189|
|agieval_lsat_ar | 0|acc |0.1826|± |0.0255|
| | |acc_norm|0.1913|± |0.0260|
|agieval_lsat_lr | 0|acc |0.5510|± |0.0220|
| | |acc_norm|0.5255|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6431|± |0.0293|
| | |acc_norm|0.6097|± |0.0298|
|agieval_sat_en | 0|acc |0.7330|± |0.0309|
| | |acc_norm|0.7039|± |0.0319|
|agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343|
| | |acc_norm|0.3689|± |0.0337|
|agieval_sat_math | 0|acc |0.3909|± |0.0330|
| | |acc_norm|0.3773|± |0.0328|
```
Average: 42.44
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222|
|bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126|
|bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288|
```
Average: 43.55
## TruthfulQA:
```
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.410|± |0.0172|
| | |mc2 |0.578|± |0.0157|
```
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"Hermes-2-Pro-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. Note that LM Studio does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Pro-Llama-3-8B,
  url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B},
  title={Hermes-2-Pro-Llama-3-8B},
  author={Teknium and interstellarninja and theemozilla and karan4d and huemin_art}
}
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]} | fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8 | null | [
"transformers",
"openvino",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:27:19+00:00 | [] | [
"en"
] | TAGS
#transformers #openvino #llama #text-generation #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # OpenVINO IR model with int8 quantization of Hermes-2-Pro-Llama-3-8B
Model definition for LocalAI:
# Hermes 2 Pro - Llama-3 8B
!image/png
## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: URL
## Example Outputs
### Ask for a structured JSON output:
!image/png
### Write the plot for a story where anime became real life:
!image/png
### Coding Assistance
!image/png
# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
This prompt is available as a chat template, which means you can format messages using the
'tokenizer.apply_chat_template()' method:
When tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\n' to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
To complete the function call, create a user prompt that follows the above system prompt, like so:
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:
The assistant will then read in that data from the function's response, and generate a natural language response:
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL
Given the {schema} that you provide, the model will follow the format of that JSON schema to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks
!image/png
## GPT4All:
Average: 72.62
## AGIEval:
Average: 42.44
## BigBench:
Average: 43.55
## TruthfulQA:
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
URL
!image/png
# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:
!image/png
## Quantized Versions:
GGUF Versions Available Here: URL
# How to cite:
| [
"# OpenVINO IR model with int8 quantization of Hermes-2-Pro-Llama-3-8B\n\nModel definition for LocalAI:",
"# Hermes 2 Pro - Llama-3 8B\n\n!image/png",
"## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL",
"## Example Outputs",
"### Ask for a structured JSON output:\n!image/png",
"### Write the plot for a story where anime became real life:\n!image/png",
"### Coding Assistance\n!image/png",
"# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.",
"## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. \n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:",
"## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.",
"# Benchmarks\n\n!image/png",
"## GPT4All:\n\nAverage: 72.62",
"## AGIEval:\n\nAverage: 42.44",
"## BigBench:\n\nAverage: 43.55",
"## TruthfulQA:",
"# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.",
"## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png",
"# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png",
"## Quantized Versions:\n\nGGUF Versions Available Here: URL",
"# How to cite:"
] | [
"TAGS\n#transformers #openvino #llama #text-generation #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# OpenVINO IR model with int8 quantization of Hermes-2-Pro-Llama-3-8B\n\nModel definition for LocalAI:",
"# Hermes 2 Pro - Llama-3 8B\n\n!image/png",
"## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL",
"## Example Outputs",
"### Ask for a structured JSON output:\n!image/png",
"### Write the plot for a story where anime became real life:\n!image/png",
"### Coding Assistance\n!image/png",
"# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.",
"## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. \n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:",
"## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.",
"# Benchmarks\n\n!image/png",
"## GPT4All:\n\nAverage: 72.62",
"## AGIEval:\n\nAverage: 42.44",
"## BigBench:\n\nAverage: 43.55",
"## TruthfulQA:",
"# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.",
"## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png",
"# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png",
"## Quantized Versions:\n\nGGUF Versions Available Here: URL",
"# How to cite:"
] | [
126,
31,
16,
262,
4,
16,
20,
10,
264,
154,
121,
8,
12,
11,
11,
6,
50,
37,
109,
15,
5
] | [
"TAGS\n#transformers #openvino #llama #text-generation #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# OpenVINO IR model with int8 quantization of Hermes-2-Pro-Llama-3-8B\n\nModel definition for LocalAI:# Hermes 2 Pro - Llama-3 8B\n\n!image/png## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL## Example Outputs### Ask for a structured JSON output:\n!image/png### Write the plot for a story where anime became real life:\n!image/png### Coding Assistance\n!image/png# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. 
\n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.# Benchmarks\n\n!image/png## GPT4All:\n\nAverage: 72.62## AGIEval:\n\nAverage: 42.44## BigBench:\n\nAverage: 43.55## TruthfulQA:# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png## Quantized Versions:\n\nGGUF Versions Available Here: URL# How to cite:"
] |
text-classification | bertopic |
# BERTopic-enron-50k
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that generates easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic

# Load the pretrained topic model from the Hugging Face Hub
topic_model = BERTopic.load("antulik/BERTopic-enron-50k")

# Overview of all topics: id, size, and representative keywords
topic_model.get_topic_info()
```
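Beyond inspecting the fitted topics, a loaded BERTopic model can also assign topics to unseen documents. The snippet below is a minimal sketch: the email text is invented, the printed topic id is only illustrative, and it assumes the sentence-transformers embedding backend referenced by the checkpoint can be loaded.
```python
from bertopic import BERTopic

topic_model = BERTopic.load("antulik/BERTopic-enron-50k")

# `.transform` embeds new documents with the model's stored embedding
# backend and returns the closest topic id (and probability) per document.
docs = ["Please review the attached ISDA master agreement before Friday."]
topics, probs = topic_model.transform(docs)

print(topics)                            # e.g. [2] -- the ISDA topic below
print(topic_model.get_topic(topics[0]))  # keyword/weight pairs for that topic
```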
## Topic overview
* Number of topics: 656
* Number of training documents: 50000
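With 656 topics, scrolling the table below to find the one you need is impractical. As a hedged sketch (again assuming the embedding backend is available so the query can be embedded), `find_topics` runs a semantic search over the topic representations:
```python
from bertopic import BERTopic

topic_model = BERTopic.load("antulik/BERTopic-enron-50k")

# Semantic search over the topic embeddings: returns the ids of the topics
# closest to a free-text query, together with their similarity scores.
topic_ids, similarities = topic_model.find_topics("natural gas pipelines", top_n=3)

for topic_id, sim in zip(topic_ids, similarities):
    print(topic_id, round(sim, 3), topic_model.get_topic(topic_id)[:3])
```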
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | enron - agreement - contract - energy - power | 10 | -1_enron_agreement_contract_energy |
| 0 | enronmessagingadministrationenroncom - recipients - recipient - message - disclosure | 24222 | 0_enronmessagingadministrationenroncom_recipients_recipient_message |
| 1 | fares - newark - flights - continentalcom - airlines | 510 | 1_fares_newark_flights_continentalcom |
| 2 | isda - isdas - agreements - enron - ena | 358 | 2_isda_isdas_agreements_enron |
| 3 | northwests - northwest - rogerpelotewilliamscom - available - billwcalpinecom | 355 | 3_northwests_northwest_rogerpelotewilliamscom_available |
| 4 | deals - trades - deal - trade - buys | 340 | 4_deals_trades_deal_trade |
| 5 | reschedule - scheduled - schedule - meeting - meetings | 316 | 5_reschedule_scheduled_schedule_meeting |
| 6 | outlookmigrationteamenroncom - outlook - outlookteamenroncom - appointment - calendar | 310 | 6_outlookmigrationteamenroncom_outlook_outlookteamenroncom_appointment |
| 7 | logistics - gas - fuel - coal - trading | 293 | 7_logistics_gas_fuel_coal |
| 8 | conference - scheduled - call - dial - calls | 285 | 8_conference_scheduled_call_dial |
| 9 | nda - ndas - enron - counterparties - counterparty | 230 | 9_nda_ndas_enron_counterparties |
| 10 | nasdaq - nyse - stocks - stock - market | 229 | 10_nasdaq_nyse_stocks_stock |
| 11 | arbitration - litigation - lawsuit - plaintiff - proceedings | 215 | 11_arbitration_litigation_lawsuit_plaintiff |
| 12 | enron - resumes - resume - associate - hiring | 201 | 12_enron_resumes_resume_associate |
| 13 | electricity - energy - gov - governors - federal | 193 | 13_electricity_energy_gov_governors |
| 14 | enron - allisonmchenryenroncom - eb - corp - geniafitzgeraldenroncom | 193 | 14_enron_allisonmchenryenroncom_eb_corp |
| 15 | expense - expenses - budgets - expenditures - allocations | 191 | 15_expense_expenses_budgets_expenditures |
| 16 | weekend - meet - trip - going - austin | 173 | 16_weekend_meet_trip_going |
| 17 | enrononline - wwwenrononlinecom - desk - httpwwwenrononlinecom - password | 163 | 17_enrononline_wwwenrononlinecom_desk_httpwwwenrononlinecom |
| 18 | nymexcomex - nymex - comex - futures - exchange | 154 | 18_nymexcomex_nymex_comex_futures |
| 19 | curves - curve - curvetypecd - marking - prices | 154 | 19_curves_curve_curvetypecd_marking |
| 20 | ontario - alberta - transcanada - toronto - agreements | 149 | 20_ontario_alberta_transcanada_toronto |
| 21 | brazilian - banco - transferncia - rio - janeiro | 148 | 21_brazilian_banco_transferncia_rio |
| 22 | enron - goldman - eci - repos - securities | 145 | 22_enron_goldman_eci_repos |
| 23 | enron - enrons - bankruptcy - bankrupt - savings | 141 | 23_enron_enrons_bankruptcy_bankrupt |
| 24 | enterprise - ebusiness - enron - enrononline - ecommerce | 137 | 24_enterprise_ebusiness_enron_enrononline |
| 25 | haas - mba - undergraduate - haasnewshaasberkeleyedu - university | 137 | 25_haas_mba_undergraduate_haasnewshaasberkeleyedu |
| 26 | meter - meters - contract - noms - flowed | 136 | 26_meter_meters_contract_noms |
| 27 | html - contenttexthtml - tdfont - tdtrtable - tdimg | 135 | 27_html_contenttexthtml_tdfont_tdtrtable |
| 28 | excel - spreadsheet - reports - spreadsheets - excelergy | 133 | 28_excel_spreadsheet_reports_spreadsheets |
| 29 | telephone - call - phone - voicemail - contact | 131 | 29_telephone_call_phone_voicemail |
| 30 | email - tracygeacconeenroncom - eb3102g - address - enform | 126 | 30_email_tracygeacconeenroncom_eb3102g_address |
| 31 | faxes - fax - faxing - faxed - efax | 123 | 31_faxes_fax_faxing_faxed |
| 32 | linguaphilewordsmithorg - dictionary - words - wordsmith - word | 120 | 32_linguaphilewordsmithorg_dictionary_words_wordsmith |
| 33 | agreements - contract - obligations - agreement - gas | 117 | 33_agreements_contract_obligations_agreement |
| 34 | var - risktrac - risk - variance - volatility | 115 | 34_var_risktrac_risk_variance |
| 35 | invoices - invoice - invoiced - payments - billed | 114 | 35_invoices_invoice_invoiced_payments |
| 36 | energynewslivecom - opec - energy - senate - oil | 112 | 36_energynewslivecom_opec_energy_senate |
| 37 | vacation - holiday - hours - payroll - days | 108 | 37_vacation_holiday_hours_payroll |
| 38 | prc - prcs - analysts - evaluation - vp | 108 | 38_prc_prcs_analysts_evaluation |
| 39 | sailing - boat - cruising - surf - fishing | 108 | 39_sailing_boat_cruising_surf |
| 40 | dinner - hope - happy - weekend - going | 107 | 40_dinner_hope_happy_weekend |
| 41 | collateral - csa - bank - credit - debt | 107 | 41_collateral_csa_bank_credit |
| 42 | congress - congressional - senate - legislative - senators | 105 | 42_congress_congressional_senate_legislative |
| 43 | enron - enrononline - yen - ena - japanese | 104 | 43_enron_enrononline_yen_ena |
| 44 | enron - reuters - enrons - journal - securities | 102 | 44_enron_reuters_enrons_journal |
| 45 | photos - prints - gallery - photoworks - albums | 101 | 45_photos_prints_gallery_photoworks |
| 46 | pipeline - audreyrobertsonenroncom - audrey - kimberlywatsonenroncom - pipe | 100 | 46_pipeline_audreyrobertsonenroncom_audrey_kimberlywatsonenroncom |
| 47 | expense - expenses - enronupdateconcureworkplacecom - approver - invoices | 100 | 47_expense_expenses_enronupdateconcureworkplacecom_approver |
| 48 | counterparties - counterparty - counterpartylonectect - gcp - inc | 100 | 48_counterparties_counterparty_counterpartylonectect_gcp |
| 49 | amendments - amendment - amending - revised - agreement | 99 | 49_amendments_amendment_amending_revised |
| 50 | ny - nytimescom - headlines - news - nytdirectnytimescom | 96 | 50_ny_nytimescom_headlines_news |
| 51 | enron - inc - corp - enroncorrugated - shackletonhouectect | 96 | 51_enron_inc_corp_enroncorrugated |
| 52 | terminations - termination - terminated - enrononline - enron | 95 | 52_terminations_termination_terminated_enrononline |
| 53 | lunch - enron - luncheon - lunches - reschedule | 94 | 53_lunch_enron_luncheon_lunches |
| 54 | pga - golfer - golfers - golf - golfing | 94 | 54_pga_golfer_golfers_golf |
| 55 | longhorns - longhorn - baylor - fsu - lsu | 91 | 55_longhorns_longhorn_baylor_fsu |
| 56 | gisb - gisbs - gisb1aolcom - gisb4aolcom - enron | 90 | 56_gisb_gisbs_gisb1aolcom_gisb4aolcom |
| 57 | enron - guaranty - guaranties - guarantors - guarantor | 89 | 57_enron_guaranty_guaranties_guarantors |
| 58 | enroll - enrolled - cancellation - enrollment - attend | 88 | 58_enroll_enrolled_cancellation_enrollment |
| 59 | creditworthiness - enroncreditcom - enrononline - credit - credits | 88 | 59_creditworthiness_enroncreditcom_enrononline_credit |
| 60 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076513pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000061279pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000058213pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000067320pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076416pageapproval | 88 | 60_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076513pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000061279pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000058213pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000067320pageapproval |
| 61 | books - book - trader - traders - netco | 85 | 61_books_book_trader_traders |
| 62 | drafts - draft - draftdoc - manuscript - enronltsadraftdoc | 84 | 62_drafts_draft_draftdoc_manuscript |
| 63 | enron - bankruptcy - creditors - bankrupcy - liquidation | 84 | 63_enron_bankruptcy_creditors_bankrupcy |
| 64 | electricity - utility - energy - utilities - electric | 84 | 64_electricity_utility_energy_utilities |
| 65 | schedules2002013110txt - schedules2002013010txt - schedules2002011110txt - schedules2002010507txt - schedules2002011414txt | 84 | 65_schedules2002013110txt_schedules2002013010txt_schedules2002011110txt_schedules2002010507txt |
| 66 | rtos - rto - ferc - fercs - rtg | 84 | 66_rtos_rto_ferc_fercs |
| 67 | interviewed - interviewing - interview - interviews - interviewers | 81 | 67_interviewed_interviewing_interview_interviews |
| 68 | enron - executives - encouragement - managementvitality - buddha | 81 | 68_enron_executives_encouragement_managementvitality |
| 69 | schedules2001122322txt - schedules2001122820txt - schedules2001122418txt - schedules2001122416txt - schedules2001122002txt | 81 | 69_schedules2001122322txt_schedules2001122820txt_schedules2001122418txt_schedules2001122416txt |
| 70 | conference - presentations - symposium - presentation - attend | 80 | 70_conference_presentations_symposium_presentation |
| 71 | netco - risktrac - newco - hierarchy - access | 80 | 71_netco_risktrac_newco_hierarchy |
| 72 | meeting - meet - houston - vince - email | 80 | 72_meeting_meet_houston_vince |
| 73 | agreements - netting - agreement - enroncom - collateral | 77 | 73_agreements_netting_agreement_enroncom |
| 74 | udcs - deadline - dasrs - dasr - suspended | 77 | 74_udcs_deadline_dasrs_dasr |
| 75 | enron - reuters - dow - tariff - indias | 77 | 75_enron_reuters_dow_tariff |
| 76 | edwardsacksenroncom - larryhunterenroncom - sbradfordenroncom - sharencasonenroncom - billhareenroncom | 76 | 76_edwardsacksenroncom_larryhunterenroncom_sbradfordenroncom_sharencasonenroncom |
| 77 | authorized - cps - trading - cp - approved | 76 | 77_authorized_cps_trading_cp |
| 78 | gtcs - gtc - gta - epc - revised | 75 | 78_gtcs_gtc_gta_epc |
| 79 | hourahead - schedules2001101406txt - schedules2001101404txt - schedules2001100504txt - schedules2001101803txt | 75 | 79_hourahead_schedules2001101406txt_schedules2001101404txt_schedules2001100504txt |
| 80 | shipping - shipment - delivery - order - ship | 75 | 80_shipping_shipment_delivery_order |
| 81 | chairperson - invitation - appointment - calendar - meeting | 74 | 81_chairperson_invitation_appointment_calendar |
| 82 | letter - letters - richardsandersenroncom - note - commission | 73 | 82_letter_letters_richardsandersenroncom_note |
| 83 | pulpex - market - pulp - production - kraft | 71 | 83_pulpex_market_pulp_production |
| 84 | enron - enrons - houston - houstons - ews | 71 | 84_enron_enrons_houston_houstons |
| 85 | marriott - hotel - marriottcom - hotels - booking | 71 | 85_marriott_hotel_marriottcom_hotels |
| 86 | communicationsenron - waiver - coauthor - warehouse - steel | 70 | 86_communicationsenron_waiver_coauthor_warehouse |
| 87 | nokia - motorola - sprint - cellular - phone | 68 | 87_nokia_motorola_sprint_cellular |
| 88 | admissions - mba - gmat - admission - applying | 68 | 88_admissions_mba_gmat_admission |
| 89 | attachment - inline - exchangeadministratorenroncom - oenronounacnrecipientscncjohnso5 - ebow | 68 | 89_attachment_inline_exchangeadministratorenroncom_oenronounacnrecipientscncjohnso5 |
| 90 | revisions - agreements - contracts - contract - agreement | 67 | 90_revisions_agreements_contracts_contract |
| 91 | task - schedules2001043002txt - schedulingiso - schedule - subject | 67 | 91_task_schedules2001043002txt_schedulingiso_schedule |
| 92 | turbines - turbine - sale - purchase - transfer | 65 | 92_turbines_turbine_sale_purchase |
| 93 | schedules2001040112txt - schedules2001022808txt - schedules2001042816txt - schedules2001041404txt - schedules2001030808txt | 65 | 93_schedules2001040112txt_schedules2001022808txt_schedules2001042816txt_schedules2001041404txt |
| 94 | merrillhaasenroncom - annwhiteenroncom - geniafitzgeraldenroncom - nonyfloresenroncom - loustolerenroncom | 64 | 94_merrillhaasenroncom_annwhiteenroncom_geniafitzgeraldenroncom_nonyfloresenroncom |
| 95 | enrondoc - ryansiurekenroncom - aes - edwardbradyenroncom - jaddoanenroncom | 64 | 95_enrondoc_ryansiurekenroncom_aes_edwardbradyenroncom |
| 96 | netware - novellnetwarebdcimailcom - httpwwwnwfusioncomnewslettersnetwareindexhtml - visualwares - httpa676gakamaitechnetf67677360mimagesm0netcatcontenthotwire023spacergif | 64 | 96_netware_novellnetwarebdcimailcom_httpwwwnwfusioncomnewslettersnetwareindexhtml_visualwares |
| 97 | enronupdateconcureworkplacecom - expenses - expense - paid - payment | 63 | 97_enronupdateconcureworkplacecom_expenses_expense_paid |
| 98 | enron - agreements - agreement - sale - corp | 63 | 98_enron_agreements_agreement_sale |
| 99 | office - billwilliamsenroncom - wednesday - monday - laynieeastenroncom | 61 | 99_office_billwilliamsenroncom_wednesday_monday |
| 100 | signatures - signature - signed - signing - enron | 61 | 100_signatures_signature_signed_signing |
| 101 | trades - deals - eol - prices - total | 61 | 101_trades_deals_eol_prices |
| 102 | internship - intern - internships - interns - recruiting | 60 | 102_internship_intern_internships_interns |
| 103 | sweepstakes - prizes - prize - jackpot - contest | 60 | 103_sweepstakes_prizes_prize_jackpot |
| 104 | easement - property - easements - appraisal - encroachments | 60 | 104_easement_property_easements_appraisal |
| 105 | hourahead - hour - crawler - schedule - failed | 60 | 105_hourahead_hour_crawler_schedule |
| 106 | schedule - dinner - invitation - lunch - meet | 59 | 106_schedule_dinner_invitation_lunch |
| 107 | paso - pasos - el - tariff - ferc | 59 | 107_paso_pasos_el_tariff |
| 108 | operating - procedure - iso - procedures - isos | 59 | 108_operating_procedure_iso_procedures |
| 109 | airlines - flights - airport - booking - boeing | 58 | 109_airlines_flights_airport_booking |
| 110 | storage - storeage - storagebdcimailcom - storagearea - capacity | 58 | 110_storage_storeage_storagebdcimailcom_storagearea |
| 111 | eb2601 - eb2834 - eb - eb3259 - eb3268 | 58 | 111_eb2601_eb2834_eb_eb3259 |
| 112 | nfl - commissionercom - sportslinecom - league - fantasy | 58 | 112_nfl_commissionercom_sportslinecom_league |
| 113 | epowers - epower - erex - westpower - reports | 58 | 113_epowers_epower_erex_westpower |
| 114 | ctc - ctcs - edison - edisons - owe | 57 | 114_ctc_ctcs_edison_edisons |
| 115 | futures - charts - candlesticks - carr - financial | 57 | 115_futures_charts_candlesticks_carr |
| 116 | dwr - dwrs - cdwr - revenue - cdwrs | 57 | 116_dwr_dwrs_cdwr_revenue |
| 117 | tw - tws - allocation - capacity - imbalance | 57 | 117_tw_tws_allocation_capacity |
| 118 | vince - vinces - vincekaminskienroncom - kaminski - jkaminskienroncom | 57 | 118_vince_vinces_vincekaminskienroncom_kaminski |
| 119 | lotus - documents - access - database - notes | 57 | 119_lotus_documents_access_database |
| 120 | variances - variance - schedule - schedules - schedtype | 56 | 120_variances_variance_schedule_schedules |
| 121 | unsubscribe - offersondemand - offermailboundary - advertisement - email | 56 | 121_unsubscribe_offersondemand_offermailboundary_advertisement |
| 122 | meteorological - weather - temperatures - weathernet - temperature | 56 | 122_meteorological_weather_temperatures_weathernet |
| 123 | wedding - brides - married - carolines - hotel | 56 | 123_wedding_brides_married_carolines |
| 124 | wines - winery - wine - sauvignon - winecom | 54 | 124_wines_winery_wine_sauvignon |
| 125 | enron - ena - enas - futures - exemption | 54 | 125_enron_ena_enas_futures |
| 126 | permits - permit - az - permitting - epa | 54 | 126_permits_permit_az_permitting |
| 127 | azps - pnpkaps230 - reservation - pnpk - oasis502apsccom | 53 | 127_azps_pnpkaps230_reservation_pnpk |
| 128 | outage - sap - notificationiscenroncom - enronlistenroncom - enronusersenroncom | 53 | 128_outage_sap_notificationiscenroncom_enronlistenroncom |
| 129 | nyisotechexchange - nyiso - prices - hours - 18002000 | 53 | 129_nyisotechexchange_nyiso_prices_hours |
| 130 | pira - forecast - piras - demand - weekly | 52 | 130_pira_forecast_piras_demand |
| 131 | irs - taxes - filing - tax - 1099 | 52 | 131_irs_taxes_filing_tax |
| 132 | office - mailtoimceanotes22rivest2c20anabeth22203crivest40gmpvt2ecom3e40 - assistant - mail - ext | 51 | 132_office_mailtoimceanotes22rivest2c20anabeth22203crivest40gmpvt2ecom3e40_assistant_mail |
| 133 | cap - caps - markets - market - prices | 51 | 133_cap_caps_markets_market |
| 134 | tickets - astros - ticket - enron - seats | 51 | 134_tickets_astros_ticket_enron |
| 135 | ceo - vice - promotion - oprah - chairman | 51 | 135_ceo_vice_promotion_oprah |
| 136 | recount - ballots - electors - recounts - florida | 51 | 136_recount_ballots_electors_recounts |
| 137 | sweaters - sweater - shirts - tuxedo - shopping | 51 | 137_sweaters_sweater_shirts_tuxedo |
| 138 | ercot - ercots - oomc - clientrelationsercotcom - oomcoome | 51 | 138_ercot_ercots_oomc_clientrelationsercotcom |
| 139 | epmicalpool - bethperlmanenroncom - ebs - dfs - esc | 50 | 139_epmicalpool_bethperlmanenroncom_ebs_dfs |
| 140 | variances - variance - schedules - schedtype - schedule | 50 | 140_variances_variance_schedules_schedtype |
| 141 | exchange - transfer - membership - member - exchangeinfonymexcom | 50 | 141_exchange_transfer_membership_member |
| 142 | mortgage - refinancing - refinance - mortgages - loans | 50 | 142_mortgage_refinancing_refinance_mortgages |
| 143 | marketer - recruiter - trading - opportunities - resume | 50 | 143_marketer_recruiter_trading_opportunities |
| 144 | smtp - mail - messageid - esmtp - xmsmailpriority | 50 | 144_smtp_mail_messageid_esmtp |
| 145 | gifts - gift - giftamazoncom - shopping - purchase | 49 | 145_gifts_gift_giftamazoncom_shopping |
| 146 | bandwidth - broadband - communicationsenron - telecommunications - telecom | 49 | 146_bandwidth_broadband_communicationsenron_telecommunications |
| 147 | earn - income - opportunities - financial - work | 49 | 147_earn_income_opportunities_financial |
| 148 | dbcaps97data - schedules2002011801txt - schedules2002011816txt - schedules2002011815txt - schedules2002011824txt | 48 | 148_dbcaps97data_schedules2002011801txt_schedules2002011816txt_schedules2002011815txt |
| 149 | nyisotechexchange - nyisotechexchangeglobal2000net - marketrelationsnyisocom - bulletin - nyiso | 48 | 149_nyisotechexchange_nyisotechexchangeglobal2000net_marketrelationsnyisocom_bulletin |
| 150 | fee - invoice - demand - reimbursement - amount | 48 | 150_fee_invoice_demand_reimbursement |
| 151 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000007876page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000078934pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000080623pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000078936pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000078937pageapproval | 47 | 151_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000007876page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000078934pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000080623pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000078936pageapproval |
| 152 | enrons - hfosterenroncom - markguzmanenroncom - michaeldriscollenroncom - aprilhrachenroncom | 47 | 152_enrons_hfosterenroncom_markguzmanenroncom_michaeldriscollenroncom |
| 153 | postcard - card - greetingcardscom - cards - cardmember | 47 | 153_postcard_card_greetingcardscom_cards |
| 154 | turbines - turbine - curtail200206db - curtailed - wind | 46 | 154_turbines_turbine_curtail200206db_curtailed |
| 155 | enron - merger - dynegy - dynegys - enrons | 45 | 155_enron_merger_dynegy_dynegys |
| 156 | enrons - feedback - review - reviewer - process | 45 | 156_enrons_feedback_review_reviewer |
| 157 | mexico - mexicos - mexican - perez - monterrey | 45 | 157_mexico_mexicos_mexican_perez |
| 158 | allencohrsenroncom - kenpowersenroncom - billmangelsenroncom - kenandersonenroncom - stephenherberenroncom | 44 | 158_allencohrsenroncom_kenpowersenroncom_billmangelsenroncom_kenandersonenroncom |
| 159 | deals - contract - sitara - price - deal | 44 | 159_deals_contract_sitara_price |
| 160 | outage - outages - outagesindustrialinfocom - email - deliveries | 44 | 160_outage_outages_outagesindustrialinfocom_email |
| 161 | synchronizing - synchronization - folder - folders - inbox | 44 | 161_synchronizing_synchronization_folder_folders |
| 162 | motley - forbescom - stocks - stock - investments | 44 | 162_motley_forbescom_stocks_stock |
| 163 | gifts - gift - ebay - fathers - shopping | 44 | 163_gifts_gift_ebay_fathers |
| 164 | wharton - enron - symposium - ebusiness - attend | 43 | 164_wharton_enron_symposium_ebusiness |
| 165 | surveys - enronfcucom - survey - questionnaire - enronanywhereenroncom | 43 | 165_surveys_enronfcucom_survey_questionnaire |
| 166 | seminar - seminars - prices - electricity - price | 43 | 166_seminar_seminars_prices_electricity |
| 167 | fantasy - commissionercom - league - nfl - tds | 43 | 167_fantasy_commissionercom_league_nfl |
| 168 | ferc - fercs - audit - investigation - commission | 43 | 168_ferc_fercs_audit_investigation |
| 169 | houstonchroniclecom - houston - houstons - houstonian - techxans | 43 | 169_houstonchroniclecom_houston_houstons_houstonian |
| 170 | enron - enronannouncementsenroncom - mailtoeiaenroncom - sponsor - announcement | 42 | 170_enron_enronannouncementsenroncom_mailtoeiaenroncom_sponsor |
| 171 | equityalert - equityalertcom - alert - alerts - ipo | 42 | 171_equityalert_equityalertcom_alert_alerts |
| 172 | forex - yens - yen - forexnews - currencies | 42 | 172_forex_yens_yen_forexnews |
| 173 | enronupdateconcureworkplacecom - approval - mgr - review - sjm11152001 | 41 | 173_enronupdateconcureworkplacecom_approval_mgr_review |
| 174 | swap - swaps - raptor - finalized - transaction | 41 | 174_swap_swaps_raptor_finalized |
| 175 | rent - rental - apartment - apartments - tenant | 41 | 175_rent_rental_apartment_apartments |
| 176 | pricing - priced - deals - sale - price | 40 | 176_pricing_priced_deals_sale |
| 177 | fts - columbias - columbia - capacity - allocations | 39 | 177_fts_columbias_columbia_capacity |
| 178 | httpwwwpowermarketerscompmanathtm - httpwwwpowermarketerscompmakrthtm - httpwwwpowermarketerscompmajobshtm - httpwwwpowermarketerscompmaintrhtm - httpwwwpowermarketerscomptraxhtm | 39 | 178_httpwwwpowermarketerscompmanathtm_httpwwwpowermarketerscompmakrthtm_httpwwwpowermarketerscompmajobshtm_httpwwwpowermarketerscompmaintrhtm |
| 179 | seawind - rpm - windsea - wind - port | 39 | 179_seawind_rpm_windsea_wind |
| 180 | lendingdayahead - deals - buyresale - sell - sales | 38 | 180_lendingdayahead_deals_buyresale_sell |
| 181 | socal - socals - socaltopock - socalgas - tws | 38 | 181_socal_socals_socaltopock_socalgas |
| 182 | walgreenscom - prescription - walgreens - prescriptions - merckmedcocom | 38 | 182_walgreenscom_prescription_walgreens_prescriptions |
| 183 | songs - cds - mp3 - musicmatch - artistdirect | 38 | 183_songs_cds_mp3_musicmatch |
| 184 | tennis - play - playing - tournament - weekend | 38 | 184_tennis_play_playing_tournament |
| 185 | enron - enrons - kevinhannonenroncom - ehaedickeenroncom - danleffenroncom | 38 | 185_enron_enrons_kevinhannonenroncom_ehaedickeenroncom |
| 186 | escrow - signature - certificate - collateral - certificates | 38 | 186_escrow_signature_certificate_collateral |
| 187 | ngprice - ng - ngpl - ngdkrs1001 - ngpropt | 38 | 187_ngprice_ng_ngpl_ngdkrs1001 |
| 188 | outlook - outlookteamenroncom - appointment - calendar - meetinglynns | 38 | 188_outlook_outlookteamenroncom_appointment_calendar |
| 189 | thanksgiving - thankful - celebrating - holiday - holidays | 38 | 189_thanksgiving_thankful_celebrating_holiday |
| 190 | powerpoint - presentations - presentation - presentationworking - slides | 38 | 190_powerpoint_presentations_presentation_presentationworking |
| 191 | payment - visa - card - paypal - providian | 37 | 191_payment_visa_card_paypal |
| 192 | parkingtransportationenroncom - parking - transportationparkingenroncom - transportationcorpenron - park | 37 | 192_parkingtransportationenroncom_parking_transportationparkingenroncom_transportationcorpenron |
| 193 | lynneenroncom - billenroncom - 82bruceenroncom - andyenroncom - 80steveenroncom | 37 | 193_lynneenroncom_billenroncom_82bruceenroncom_andyenroncom |
| 194 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000014988page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000014989page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034918page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034919page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000021433page | 37 | 194_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000014988page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000014989page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034918page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034919page |
| 195 | enron - enroncomment5101doc - insurance - attorneyclient - corp | 37 | 195_enron_enroncomment5101doc_insurance_attorneyclient |
| 196 | schedule - schedules - schedulingiso - hourahead - schedtype | 36 | 196_schedule_schedules_schedulingiso_hourahead |
| 197 | lineup - trade - freeman - taylor - jasonbass2compaqcom | 36 | 197_lineup_trade_freeman_taylor |
| 198 | weather - winds - snow - thunderstorm - precipitation | 36 | 198_weather_winds_snow_thunderstorm |
| 199 | enron - bankruptcy - enrons - bankrupt - savings | 36 | 199_enron_bankruptcy_enrons_bankrupt |
| 200 | hourahead - schedulingiso - hour - crawler - schedules2001041020txt | 36 | 200_hourahead_schedulingiso_hour_crawler |
| 201 | pdf - stockspdf - acrobat - charts - opecpdf | 36 | 201_pdf_stockspdf_acrobat_charts |
| 202 | meditation - faith - meditations - spiritual - verse | 36 | 202_meditation_faith_meditations_spiritual |
| 203 | reports - reporting - report - robinrodrigueenroncom - access | 35 | 203_reports_reporting_report_robinrodrigueenroncom |
| 204 | etiquette - what - favorite - kids - waitress | 35 | 204_etiquette_what_favorite_kids |
| 205 | emails - retention - archiving - outlook - mail | 34 | 205_emails_retention_archiving_outlook |
| 206 | enron - virus - viruses - virusfree - malicious | 34 | 206_enron_virus_viruses_virusfree |
| 207 | qb - wr - wrs - questionable - injured | 34 | 207_qb_wr_wrs_questionable |
| 208 | allencohrsenroncom - kenpowersenroncom - ljohnsonenroncom - tklohmanenroncom - stephenherberenroncom | 34 | 208_allencohrsenroncom_kenpowersenroncom_ljohnsonenroncom_tklohmanenroncom |
| 209 | schedules2002020118txt - schedules2002020418txt - schedules2002020218txt - schedules2002011418txt - schedules2002011016txt | 34 | 209_schedules2002020118txt_schedules2002020418txt_schedules2002020218txt_schedules2002011418txt |
| 210 | appointment - clinic - appointments - clinicians - therapist | 34 | 210_appointment_clinic_appointments_clinicians |
| 211 | httpa676gakamaitechnetf67677360mimagesm0netcm50content12610900810180021f1cspacergif - httpa676gakamaitechnetf67677360mimagesm0netcatcontenthpshopping122601transpacegif - httpa676gakamaitechnetf67677390mimagesm0netcatcontenthotwire010802spacergif - drilling - offshore | 33 | 211_httpa676gakamaitechnetf67677360mimagesm0netcm50content12610900810180021f1cspacergif_httpa676gakamaitechnetf67677360mimagesm0netcatcontenthpshopping122601transpacegif_httpa676gakamaitechnetf67677390mimagesm0netcatcontenthotwire010802spacergif_drilling |
| 212 | dbcaps97data - renpowercapscaps97datamdb - schedules2001102106txt - schedules2001102201txt - schedules2001102114txt | 33 | 212_dbcaps97data_renpowercapscaps97datamdb_schedules2001102106txt_schedules2001102201txt |
| 213 | mariner - offshore - ecr - gulf - hms | 33 | 213_mariner_offshore_ecr_gulf |
| 214 | san - pasos - ca - deliveries - socal | 33 | 214_san_pasos_ca_deliveries |
| 215 | carolyncooneyenroncom - jeandresslerenroncom - billmooreenroncom - charlesyeungenroncom - robinkittelenroncom | 33 | 215_carolyncooneyenroncom_jeandresslerenroncom_billmooreenroncom_charlesyeungenroncom |
| 216 | recipe - recipes - macaroni - dinner - baked | 33 | 216_recipe_recipes_macaroni_dinner |
| 217 | furniture - chair - chairs - sofa - couch | 33 | 217_furniture_chair_chairs_sofa |
| 218 | westpower - east - httpeastpowerdevcorpenroncomcoopermidnyisoasp - erv - httpeastpowerdevcorpenroncomsummarynyisosummaryasp | 33 | 218_westpower_east_httpeastpowerdevcorpenroncomcoopermidnyisoasp_erv |
| 219 | transmission - scheduling - psco - poston - confirmation | 33 | 219_transmission_scheduling_psco_poston |
| 220 | debt - debts - creditors - consolidate - loans | 33 | 220_debt_debts_creditors_consolidate |
| 221 | enron - egm - egms - commodities - enrons | 33 | 221_enron_egm_egms_commodities |
| 222 | outlook - exchange - mailbox - email - outage | 33 | 222_outlook_exchange_mailbox_email |
| 223 | postmasterenroncom - mailerdaemonpostmasterenroncom - recipient - mailmanenroncom - mail | 32 | 223_postmasterenroncom_mailerdaemonpostmasterenroncom_recipient_mailmanenroncom |
| 224 | powerindexlistintcxcom - intercontinentalexchange - intercontinentalexchanges - salesintcxcom - index | 32 | 224_powerindexlistintcxcom_intercontinentalexchange_intercontinentalexchanges_salesintcxcom |
| 225 | ce - pas - partielle - merci - message | 32 | 225_ce_pas_partielle_merci |
| 226 | calpine - calpines - calpeak - calpeaks - san | 32 | 226_calpine_calpines_calpeak_calpeaks |
| 227 | enron - hpl0926xls - hpln0210xls - hpln0213xls - hpln0113xls | 32 | 227_enron_hpl0926xls_hpln0210xls_hpln0213xls |
| 228 | bonds - bond - debt - bonding - investors | 32 | 228_bonds_bond_debt_bonding |
| 229 | renewable - energycommitteemembersxls - energy - emissions - solar | 32 | 229_renewable_energycommitteemembersxls_energy_emissions |
| 230 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034925page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034922page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034923page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000008491page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034918page | 32 | 230_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034925page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034922page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000034923page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000008491page |
| 231 | mike - maggi - michelle - sarah - talking | 32 | 231_mike_maggi_michelle_sarah |
| 232 | profile - looking - appear - contact - message | 32 | 232_profile_looking_appear_contact |
| 233 | charts - futures - carr - 1500 - matrices | 32 | 233_charts_futures_carr_1500 |
| 234 | documents - document - files - copies - docs | 32 | 234_documents_document_files_copies |
| 235 | outlookteamenroncom - outlook - laptop - databases - lotus | 32 | 235_outlookteamenroncom_outlook_laptop_databases |
| 236 | thomasunderwoodenroncom - scottnealenroncom - ashleyworthingenroncom - seanriordanenroncom - davidbaumbachenroncom | 31 | 236_thomasunderwoodenroncom_scottnealenroncom_ashleyworthingenroncom_seanriordanenroncom |
| 237 | pipelines - pipeline - frontier - transcanada - pipelinesouthcentral | 31 | 237_pipelines_pipeline_frontier_transcanada |
| 238 | schedules2002011204txt - schedules2002011201txt - schedules2002012504txt - schedules2002011203txt - schedules2002012503txt | 31 | 238_schedules2002011204txt_schedules2002011201txt_schedules2002012504txt_schedules2002011203txt |
| 239 | nfl - yards - jets - sportslinecom - league | 31 | 239_nfl_yards_jets_sportslinecom |
| 240 | tx - consumercareautobytelcom - chrysler - dodge - dealership | 31 | 240_tx_consumercareautobytelcom_chrysler_dodge |
| 241 | expiring - expiration - nymex - operational - status | 31 | 241_expiring_expiration_nymex_operational |
| 242 | sidewalks - curbs - street - streets - townhomes | 31 | 242_sidewalks_curbs_street_streets |
| 243 | sheetalpatelenroncom - larryhunterenroncom - kevinmeredithenroncom - bernicerodriguezenroncom - adnanpatelenroncom | 31 | 243_sheetalpatelenroncom_larryhunterenroncom_kevinmeredithenroncom_bernicerodriguezenroncom |
| 244 | survey - elbertsonhouectect - joneshouectect - haashouectect - sandershouectect | 31 | 244_survey_elbertsonhouectect_joneshouectect_haashouectect |
| 245 | message - disclosure - futures - prohibited - interdite | 30 | 245_message_disclosure_futures_prohibited |
| 246 | invoice - ipayitenroncom - ipayitipayitenroncomenron - ipayit - ehronlinesap | 30 | 246_invoice_ipayitenroncom_ipayitipayitenroncomenron_ipayit |
| 247 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000009688page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000008348page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076513pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000082421pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000082437pageapproval | 30 | 247_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000009688page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000008348page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076513pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000082421pageapproval |
| 248 | allegheny - alleghenys - alleghenymerrill - merrillissuexls - merrillallegheny | 30 | 248_allegheny_alleghenys_alleghenymerrill_merrillissuexls |
| 249 | offshore - oilfield - petroleum - gulf - oil | 30 | 249_offshore_oilfield_petroleum_gulf |
| 250 | schedulers - volumes - agl - calculated - worksheets | 30 | 250_schedulers_volumes_agl_calculated |
| 251 | enron - draft - corp - hanover - debraperlingiereenroncom | 30 | 251_enron_draft_corp_hanover |
| 252 | enrononline - counterparties - ena - counterparty - sydney | 29 | 252_enrononline_counterparties_ena_counterparty |
| 253 | reports - reporting - erv - peak - erms | 29 | 253_reports_reporting_erv_peak |
| 254 | jeandresslerenroncom - carolyncooneyenroncom - charlesyeungenroncom - stephenburnsenroncom - billmooreenroncom | 29 | 254_jeandresslerenroncom_carolyncooneyenroncom_charlesyeungenroncom_stephenburnsenroncom |
| 255 | bpas - bpa - bps - bpatbl - enrons | 29 | 255_bpas_bpa_bps_bpatbl |
| 256 | egm - ebitda - ewc - e262a1 - ermspc | 29 | 256_egm_ebitda_ewc_e262a1 |
| 257 | types - type - ethylene - approved - fin | 29 | 257_types_type_ethylene_approved |
| 258 | dell - dells - inspiron - pocket - notebook | 29 | 258_dell_dells_inspiron_pocket |
| 259 | revenue - tw - tws - quarterly - revenues | 29 | 259_revenue_tw_tws_quarterly |
| 260 | schedules2002020616txt - schedules2002010110txt - schedules2002020503txt - schedules2002020508txt - schedules2002020506txt | 29 | 260_schedules2002020616txt_schedules2002010110txt_schedules2002020503txt_schedules2002020508txt |
| 261 | emergency - fires - edis - sacramento - fire | 28 | 261_emergency_fires_edis_sacramento |
| 262 | enron - ceoenron - ets - operationsenron - chairman | 28 | 262_enron_ceoenron_ets_operationsenron |
| 263 | tx - houston - address - addresss - 3317048 | 28 | 263_tx_houston_address_addresss |
| 264 | rewards - promotionssportingbetusacom - holiday - sportsline - sportslinecoms | 28 | 264_rewards_promotionssportingbetusacom_holiday_sportsline |
| 265 | meetingeb4102 - meeting4102 - invitation - calendar - enronwide | 28 | 265_meetingeb4102_meeting4102_invitation_calendar |
| 266 | bpas - bpa - bpadiscoverymerkellawcom - mthomasmtpowercom - pmurphymbllpcom | 28 | 266_bpas_bpa_bpadiscoverymerkellawcom_mthomasmtpowercom |
| 267 | scoopmailermoneynet - mailing - send - message - address | 28 | 267_scoopmailermoneynet_mailing_send_message |
| 268 | eastrans - bmcmillsdukeenergycom - mcmills - 500000mmbtu - nominations | 28 | 268_eastrans_bmcmillsdukeenergycom_mcmills_500000mmbtu |
| 269 | pricing - costs - lpg - freight - cost | 28 | 269_pricing_costs_lpg_freight |
| 270 | servicebusinesswirecom - txindustrialinforesbusinesswirecom - houstonbusiness - industrialinfocom - emaildeliverybusinesswirecom | 28 | 270_servicebusinesswirecom_txindustrialinforesbusinesswirecom_houstonbusiness_industrialinfocom |
| 271 | feedback - deadline - status - pepenquiriesenroncom - closes | 28 | 271_feedback_deadline_status_pepenquiriesenroncom |
| 272 | epsa - epsas - panelists - commission - policy | 28 | 272_epsa_epsas_panelists_commission |
| 273 | ceraweek - cera - ceras - ceracom - energy | 28 | 273_ceraweek_cera_ceras_ceracom |
| 274 | schedules2001030813txt - schedules2001042206txt - schedules2001040916txt - schedules2001040920txt - schedules2001031606txt | 27 | 274_schedules2001030813txt_schedules2001042206txt_schedules2001040916txt_schedules2001040920txt |
| 275 | enronmailsweeperadminenroncom - enron - virus - quarantine - quarantined | 27 | 275_enronmailsweeperadminenroncom_enron_virus_quarantine |
| 276 | feedback - assessment - executive - process - participate | 27 | 276_feedback_assessment_executive_process |
| 277 | schedules2002020602txt - schedules2002020601txt - schedules2002020607txt - schedules2002020606txt - schedules2002020603txt | 27 | 277_schedules2002020602txt_schedules2002020601txt_schedules2002020607txt_schedules2002020606txt |
| 278 | sportslinecom - font - helveticaplayerfontfontbfonttdtd - html - td | 26 | 278_sportslinecom_font_helveticaplayerfontfontbfonttdtd_html |
| 279 | sap - xms - expense - isc - express | 26 | 279_sap_xms_expense_isc |
| 280 | postmasterdowjonescom - postmasterenroncom - mail - recipients - postmastersims1rcsntxswbellnet | 26 | 280_postmasterdowjonescom_postmasterenroncom_mail_recipients |
| 281 | httpwwwenroncomcorppressroomreleases2001eneeneq101ltrhtml - seanriordanenroncom - larrymayenroncom - dutchquigleyenroncom - markgreenbergenroncom | 26 | 281_httpwwwenroncomcorppressroomreleases2001eneeneq101ltrhtml_seanriordanenroncom_larrymayenroncom_dutchquigleyenroncom |
| 282 | moveteamenroncom - moved - moving - move - scheduled | 26 | 282_moveteamenroncom_moved_moving_move |
| 283 | fares - fare - airfare - farewatcher - flights | 26 | 283_fares_fare_airfare_farewatcher |
| 284 | bank - banks - banking - henningbrutteldresdnerbankcom - bankers | 26 | 284_bank_banks_banking_henningbrutteldresdnerbankcom |
| 285 | cds - cd - discs - disc - dvds | 26 | 285_cds_cd_discs_disc |
| 286 | schedules2002020322txt - schedules2002020321txt - schedules2002020207txt - schedules2002020214txt - schedules2002020220txt | 26 | 286_schedules2002020322txt_schedules2002020321txt_schedules2002020207txt_schedules2002020214txt |
| 287 | palm - palmcom - palmnet - palmtm - palmt | 26 | 287_palm_palmcom_palmnet_palmtm |
| 288 | playoffs - fantasy - playoff - league - standings | 26 | 288_playoffs_fantasy_playoff_league |
| 289 | kenseysubscribermailmanenroncom - kenseydeletion - bulkdiscussionrecommendationsnews - kensey - recommendlist | 26 | 289_kenseysubscribermailmanenroncom_kenseydeletion_bulkdiscussionrecommendationsnews_kensey |
| 290 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000009797page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000010917page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000010211page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000007591page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000007593page | 25 | 290_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000009797page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000010917page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000010211page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000007591page |
| 291 | turbine - turbines - lv - agreement - cogen | 25 | 291_turbine_turbines_lv_agreement |
| 292 | nerc - stakeholders - trustees - ferc - organization | 25 | 292_nerc_stakeholders_trustees_ferc |
| 293 | gulf - enerfax - oil - prices - futures | 25 | 293_gulf_enerfax_oil_prices |
| 294 | agt - pipeline - scheduled - algonquin - compressor | 25 | 294_agt_pipeline_scheduled_algonquin |
| 295 | nasdaq - index - stocks - crb - trading | 25 | 295_nasdaq_index_stocks_crb |
| 296 | outage - maintenance - unloaders - station - refuel | 25 | 296_outage_maintenance_unloaders_station |
| 297 | enron - enrons - bankruptcy - bankrupt - savings | 25 | 297_enron_enrons_bankruptcy_bankrupt |
| 298 | bank - chasemail - enronna0037 - chase - jpmorgan | 25 | 298_bank_chasemail_enronna0037_chase |
| 299 | invoices - ibuyitpayablesenroncom - invoice - ibuyit - ibuyitpayables | 25 | 299_invoices_ibuyitpayablesenroncom_invoice_ibuyit |
| 300 | rotations - rotation - rotating - rotational - trading | 25 | 300_rotations_rotation_rotating_rotational |
| 301 | enerfaxdailyenerfaxcom - enerfax - enerfaxdaily - enerfaxgoldsubscribeegroupscom - email | 25 | 301_enerfaxdailyenerfaxcom_enerfax_enerfaxdaily_enerfaxgoldsubscribeegroupscom |
| 302 | kennethkaaseenroncom - drewhillenroncom - gregmcclendonenroncom - martinolearyenroncom - russelldiamondenroncom | 25 | 302_kennethkaaseenroncom_drewhillenroncom_gregmcclendonenroncom_martinolearyenroncom |
| 303 | cd - receipt - cbl - cgtcustomerservicesnisourcecom - delivery | 25 | 303_cd_receipt_cbl_cgtcustomerservicesnisourcecom |
| 304 | hourstimesheet - timesheet - timesheets - payroll - timesheetsiconconsultantscom | 24 | 304_hourstimesheet_timesheet_timesheets_payroll |
| 305 | forecasting - strategies - traders - markets - market | 24 | 305_forecasting_strategies_traders_markets |
| 306 | tonight - tonights - play - game - brianhoskinsenroncom | 24 | 306_tonight_tonights_play_game |
| 307 | msn - 8am - chjohnson98hotmailcom - scottwlhotmailcom - explorer | 24 | 307_msn_8am_chjohnson98hotmailcom_scottwlhotmailcom |
| 308 | schedule - forecast - forecasting - hourly - tuesday | 24 | 308_schedule_forecast_forecasting_hourly |
| 309 | enroncounterpartyenroncom - communicationsenron - counterpartyenron - billhareenroncom - fayettcorpenronenron | 24 | 309_enroncounterpartyenroncom_communicationsenron_counterpartyenron_billhareenroncom |
| 310 | cartridges - printer - inkjets - inkjet - cartridge | 24 | 310_cartridges_printer_inkjets_inkjet |
| 311 | transwestern - positions - position - jeffrichterenroncom - ashleyworthingenroncom | 24 | 311_transwestern_positions_position_jeffrichterenroncom |
| 312 | energy - commission - tesla - edclfpcom - alanreidpcpca | 24 | 312_energy_commission_tesla_edclfpcom |
| 313 | isda - usregcommisdaorg - swaps - regulatory - exchangeable | 24 | 313_isda_usregcommisdaorg_swaps_regulatory |
| 314 | reunion - smu - fraternity - homecoming - 76ers | 24 | 314_reunion_smu_fraternity_homecoming |
| 315 | datek - dateks - brokerage - wwwdatekcom - trading | 24 | 315_datek_dateks_brokerage_wwwdatekcom |
| 316 | interviewed - interviews - interviewers - interview - interviewing | 24 | 316_interviewed_interviews_interviewers_interview |
| 317 | rac - racs - raarc - audit - manage | 24 | 317_rac_racs_raarc_audit |
| 318 | auction - bidding - bid - auctions - seller | 24 | 318_auction_bidding_bid_auctions |
| 319 | enronmda - enron - peru - earthquakeperu - legislators | 24 | 319_enronmda_enron_peru_earthquakeperu |
| 320 | tennessee - unionville - capacity - tcoleach - available | 24 | 320_tennessee_unionville_capacity_tcoleach |
| 321 | sun - expansion - devil - tw - caithness | 24 | 321_sun_expansion_devil_tw |
| 322 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000033931page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000033931pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000079824pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076650pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000057285pageapproval | 24 | 322_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000033931page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000033931pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000079824pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076650pageapproval |
| 323 | dash - dashs - dashes - dashpastoria4030901 - signed | 23 | 323_dash_dashs_dashes_dashpastoria4030901 |
| 324 | feedback - enrons - process - requests - status | 23 | 324_feedback_enrons_process_requests |
| 325 | enron - secon3 - secon - facilities - unconfirmed | 23 | 325_enron_secon3_secon_facilities |
| 326 | evite - evites - invitation - invitations - visit | 23 | 326_evite_evites_invitation_invitations |
| 327 | schedule - weekend - week - weeks - sunday | 23 | 327_schedule_weekend_week_weeks |
| 328 | variance - variances - hourahead - hour - schedule | 23 | 328_variance_variances_hourahead_hour |
| 329 | schedules2002011324txt - schedules2002020324txt - schedules2002012324txt - schedules2002011024txt - schedules2002010324txt | 23 | 329_schedules2002011324txt_schedules2002020324txt_schedules2002012324txt_schedules2002011024txt |
| 330 | bonuses - bonusraise - incentive - raise - payments | 23 | 330_bonuses_bonusraise_incentive_raise |
| 331 | venturewire - venture - ventures - enterprise - investors | 23 | 331_venturewire_venture_ventures_enterprise |
| 332 | prices - eca - market - markets - hour | 23 | 332_prices_eca_market_markets |
| 333 | appointment - cherylnelsonenroncom - beth - beths - cheryl | 23 | 333_appointment_cherylnelsonenroncom_beth_beths |
| 334 | emails - email - message - receive - ship | 23 | 334_emails_email_message_receive |
| 335 | bpas - resigned - jeffreykeelerenroncom - mhainectenroncom - susanjmaraenroncom | 23 | 335_bpas_resigned_jeffreykeelerenroncom_mhainectenroncom |
| 336 | reinsurance - insurance - insurancerelated - insurers - reinsurers | 22 | 336_reinsurance_insurance_insurancerelated_insurers |
| 337 | reviewers - feedback - approval - reviewer - approved | 22 | 337_reviewers_feedback_approval_reviewer |
| 338 | laptop - ibm - thinkpad - upgradable - compaqcom | 22 | 338_laptop_ibm_thinkpad_upgradable |
| 339 | brian - brianc - plans - chet - night | 22 | 339_brian_brianc_plans_chet |
| 340 | tonnes - johnsherriffenroncom - gas - tons - gregpiperenroncom | 22 | 340_tonnes_johnsherriffenroncom_gas_tons |
| 341 | tradersnewsindexesipgdirectcom - tradersnews - hourlymeth010801doc - hourly - httpwwwtradersnewsenergycom | 22 | 341_tradersnewsindexesipgdirectcom_tradersnews_hourlymeth010801doc_hourly |
| 342 | dow - seats - jones - investext - investexts | 22 | 342_dow_seats_jones_investext |
| 343 | nymex - nymexng - exchange - enymex - futures | 22 | 343_nymex_nymexng_exchange_enymex |
| 344 | outage - email - customerserviceindustrialinfocom - mailerindustrialinfocom - delivery | 22 | 344_outage_email_customerserviceindustrialinfocom_mailerindustrialinfocom |
| 345 | visas - visa - consulate - anshumansrivastavenroncom - anshuman | 22 | 345_visas_visa_consulate_anshumansrivastavenroncom |
| 346 | onepass - mailtoonepasscoaircom - httponepasscontinentalcomemailupgrades - airlines - elite | 22 | 346_onepass_mailtoonepasscoaircom_httponepasscontinentalcomemailupgrades_airlines |
| 347 | tariff - tariffs - interstates - transportation - production | 22 | 347_tariff_tariffs_interstates_transportation |
| 348 | calpine - calger - houston - ricky - rickyacalpinecom | 21 | 348_calpine_calger_houston_ricky |
| 349 | chron - edison - administration - in - regulators | 21 | 349_chron_edison_administration_in |
| 350 | schedules2001123020txt - schedules2001122208txt - schedules2001123012txt - schedules2001123002txt - schedules2001123017txt | 21 | 350_schedules2001123020txt_schedules2001122208txt_schedules2001123012txt_schedules2001123002txt |
| 351 | sapsecurityenroncom - ehronline - sap - ehronlineenroncom - password | 21 | 351_sapsecurityenroncom_ehronline_sap_ehronlineenroncom |
| 352 | schedule - enron - schedules - frazier - tuesday | 21 | 352_schedule_enron_schedules_frazier |
| 353 | risk - risks - policy - compliance - liquidityfunding | 21 | 353_risk_risks_policy_compliance |
| 354 | schedules2002011508txt - schedules2002011208txt - schedules2002020208txt - schedules2002011408txt - schedules2002020408txt | 21 | 354_schedules2002011508txt_schedules2002011208txt_schedules2002020208txt_schedules2002011408txt |
| 355 | enron - kitchen - employeename - louisekitchenenroncom - employee | 21 | 355_enron_kitchen_employeename_louisekitchenenroncom |
| 356 | np15 - broker - energy - deal - caiso | 21 | 356_np15_broker_energy_deal |
| 357 | thelawnet - law - attorneys - attorney - judicial | 21 | 357_thelawnet_law_attorneys_attorney |
| 358 | ehronline - ipayitenroncom - enron1 - ipayit - sap | 21 | 358_ehronline_ipayitenroncom_enron1_ipayit |
| 359 | diabetes - diabetesorg - diabetic - insulin - diabetesonestopcom | 21 | 359_diabetes_diabetesorg_diabetic_insulin |
| 360 | severance - terminated - unemployment - bonuses - employment | 21 | 360_severance_terminated_unemployment_bonuses |
| 361 | tallahassee - dominion - payment - deal - florida | 21 | 361_tallahassee_dominion_payment_deal |
| 362 | sportslinecom - font - td - html - fantasy | 21 | 362_sportslinecom_font_td_html |
| 363 | communicationsenron - facilitiesenron - centerenron - heardenron - approvals | 20 | 363_communicationsenron_facilitiesenron_centerenron_heardenron |
| 364 | revised - markbernsteinenroncom - markmcconnellenroncom - technical - review | 20 | 364_revised_markbernsteinenroncom_markmcconnellenroncom_technical |
| 365 | sap - sapsecurityenroncom - enterprise - approvergpg - accountantgpg | 20 | 365_sap_sapsecurityenroncom_enterprise_approvergpg |
| 366 | schedules2002010816txt - schedules2002012505txt - schedules2002010614txt - schedules2002010810txt - schedules2002010608txt | 20 | 366_schedules2002010816txt_schedules2002012505txt_schedules2002010614txt_schedules2002010810txt |
| 367 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000082135pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081799pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076650pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081127pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081214pageapproval | 20 | 367_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000082135pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081799pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076650pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081127pageapproval |
| 368 | payrollenronenroncom - enronpayrollenroncom - payroll - paycheck - paychecks | 20 | 368_payrollenronenroncom_enronpayrollenroncom_payroll_paycheck |
| 369 | postmasterenroncom - recipients - delivery - notification - relayed | 20 | 369_postmasterenroncom_recipients_delivery_notification |
| 370 | dvds - dvd - vhs - httpwwwnetworksstatuscomdvddvd25 - copy | 20 | 370_dvds_dvd_vhs_httpwwwnetworksstatuscomdvddvd25 |
| 371 | enron - enrons - ldayenroncom - willisphilipenroncom - kevinclineenroncom | 20 | 371_enron_enrons_ldayenroncom_willisphilipenroncom |
| 372 | lunch - chris - germanyhouectect - germany - wed | 20 | 372_lunch_chris_germanyhouectect_germany |
| 373 | passwords - password - passworddoc - enrononline - security | 20 | 373_passwords_password_passworddoc_enrononline |
| 374 | schedules2002010502txt - schedules2002010815txt - schedules2002010813txt - schedules2002010320txt - schedules2002010804txt | 20 | 374_schedules2002010502txt_schedules2002010815txt_schedules2002010813txt_schedules2002010320txt |
| 375 | documents - hawaii - wordperfect - enronhawaii - amendedrestated | 20 | 375_documents_hawaii_wordperfect_enronhawaii |
| 376 | espeak - enron - enronannouncementsenroncom - ethinkenroncom - ethink | 20 | 376_espeak_enron_enronannouncementsenroncom_ethinkenroncom |
| 377 | agreements - ena - contract - agreement - negotiation | 20 | 377_agreements_ena_contract_agreement |
| 378 | mailbox - outlook - mailboxoutlook - exchangeadministratorenroncom - delete | 19 | 378_mailbox_outlook_mailboxoutlook_exchangeadministratorenroncom |
| 379 | insurance - httpwwwlifeinsuranceoptionscomindexrmhtml - premium - premiums - medicaid | 19 | 379_insurance_httpwwwlifeinsuranceoptionscomindexrmhtml_premium_premiums |
| 380 | mcf - mmbtu - flow - btu - flowed | 19 | 380_mcf_mmbtu_flow_btu |
| 381 | mailbox - outlook - exchangeadministratorenroncom - delete - folder | 19 | 381_mailbox_outlook_exchangeadministratorenroncom_delete |
| 382 | providers - networker - networking - network - broadband | 19 | 382_providers_networker_networking_network |
| 383 | holiday - nytimescoms - nytimescom - nytdirectnytimescom - ny | 19 | 383_holiday_nytimescoms_nytimescom_nytdirectnytimescom |
| 384 | annuity - annuities - annuites - 2001annuities - approve | 19 | 384_annuity_annuities_annuites_2001annuities |
| 385 | fee - agreement - beta - prebon - revised | 19 | 385_fee_agreement_beta_prebon |
| 386 | gtv - gtv2 - gtvii - rollups - rollup | 19 | 386_gtv_gtv2_gtvii_rollups |
| 387 | pge - pges - regulated - utilities - pgespreadvaluationdefinition | 19 | 387_pge_pges_regulated_utilities |
| 388 | enron - enrons - bankruptcy - bankrupt - donate | 19 | 388_enron_enrons_bankruptcy_bankrupt |
| 389 | enron - correctenron - kenneththibodeauxenroncom - johnallisonenroncom - davidportenroncom | 19 | 389_enron_correctenron_kenneththibodeauxenroncom_johnallisonenroncom |
| 390 | schedules2002011505txt - schedules2002011015txt - schedules2002020115txt - schedules2002012815txt - schedules2002013015txt | 19 | 390_schedules2002011505txt_schedules2002011015txt_schedules2002020115txt_schedules2002012815txt |
| 391 | sexxxmailer - sexxxmaillistemailbuckscom - porno - nude - uncensored | 19 | 391_sexxxmailer_sexxxmaillistemailbuckscom_porno_nude |
| 392 | allencohrsenroncom - ljohnsonenroncom - kenandersonenroncom - rogerwestfallenroncom - daveneubauerenroncom | 19 | 392_allencohrsenroncom_ljohnsonenroncom_kenandersonenroncom_rogerwestfallenroncom |
| 393 | duke - contract - parkercorpenronenron - deals - llc | 19 | 393_duke_contract_parkercorpenronenron_deals |
| 394 | daughter - birth - emily - prenatal - nanny | 19 | 394_daughter_birth_emily_prenatal |
| 395 | houston - tollway - highway - greatwood - gate | 19 | 395_houston_tollway_highway_greatwood |
| 396 | schedule - scheduling - schedules - schedulesercot - portlandshiftenroncom | 19 | 396_schedule_scheduling_schedules_schedulesercot |
| 397 | wildhorse - wildhorses - wildhorse1doc - wildhorse2doc - proposal | 19 | 397_wildhorse_wildhorses_wildhorse1doc_wildhorse2doc |
| 398 | abcnewsnoweditormailabcnewsgocom - abcnews - abcnewscom - httpwwwmsnbccomnewsinfectiousfrontaspbtnmbtuhttpwwwmsnbccom - mailings | 19 | 398_abcnewsnoweditormailabcnewsgocom_abcnews_abcnewscom_httpwwwmsnbccomnewsinfectiousfrontaspbtnmbtuhttpwwwmsnbccom |
| 399 | agreements - agreementsdoc - memorandum - agreement - revisions | 19 | 399_agreements_agreementsdoc_memorandum_agreement |
| 400 | tax - taxes - reserve - valuation - accounting | 18 | 400_tax_taxes_reserve_valuation |
| 401 | realplayer - realplayerr - realnetworksr - realjukeboxr - realjukebox | 18 | 401_realplayer_realplayerr_realnetworksr_realjukeboxr |
| 402 | plattslistservplattscom - platts - unsubscribing - unsubscribe - unsubscribed | 18 | 402_plattslistservplattscom_platts_unsubscribing_unsubscribe |
| 403 | rfrankenroncom - bansharmaenroncom - jdasovicenroncom - jeffreykeelerenroncom - smaraenroncom | 18 | 403_rfrankenroncom_bansharmaenroncom_jdasovicenroncom_jeffreykeelerenroncom |
| 404 | market - nat - analysis - traders - capstonetexasnet | 18 | 404_market_nat_analysis_traders |
| 405 | amitavadharenroncom - tomhalliburtonenroncom - ainsleygaddisenroncom - kevinmooreenroncom - adamstevensenroncom | 18 | 405_amitavadharenroncom_tomhalliburtonenroncom_ainsleygaddisenroncom_kevinmooreenroncom |
| 406 | exchanges - markets - powermarketerscom - liquidity - powerplants | 18 | 406_exchanges_markets_powermarketerscom_liquidity |
| 407 | outlook - outlookteamenroncom - appointment - calendar - priority | 18 | 407_outlook_outlookteamenroncom_appointment_calendar |
| 408 | schedules2002013112txt - schedules2002011013txt - schedules2002013113txt - schedules2002013013txt - schedules2002010412txt | 18 | 408_schedules2002013112txt_schedules2002011013txt_schedules2002013113txt_schedules2002013013txt |
| 409 | schedules2002010824txt - schedules2002010613txt - schedules2002010702txt - schedules2002010405txt - schedules2002010706txt | 18 | 409_schedules2002010824txt_schedules2002010613txt_schedules2002010702txt_schedules2002010405txt |
| 410 | nancysellersrobertmondavicom - nancy - mieux - bois - chante | 18 | 410_nancysellersrobertmondavicom_nancy_mieux_bois |
| 411 | schedules2002011017txt - schedules2002013017txt - schedules2002011117txt - schedules2002011417txt - schedules2002013117txt | 18 | 411_schedules2002011017txt_schedules2002013017txt_schedules2002011117txt_schedules2002011417txt |
| 412 | credit - creditcheckr - creditcheck - equifax - loan | 18 | 412_credit_creditcheckr_creditcheck_equifax |
| 413 | edclfpcom - hacknecescecom - pikesepenergycom - carolynvealhunterasmcagov - dominicdimarecalchambercom | 18 | 413_edclfpcom_hacknecescecom_pikesepenergycom_carolynvealhunterasmcagov |
| 414 | ferc - refunds - fercs - refund - fercbriefdoc | 18 | 414_ferc_refunds_fercs_refund |
| 415 | hrefhttpwwwnytimescomemail - srchttpgraphicsnytimescomimagesemailtodaysheadlinestodayshedflaggif - srchttpwwwnytimescomimagesemailwinenytnewslettersgif - hrefhttpwwwnytimescompagesnationalindexhtmltodaysheadlinemore - hrefmailtofeedbacknytimescomfeedbacknytimescoma | 18 | 415_hrefhttpwwwnytimescomemail_srchttpgraphicsnytimescomimagesemailtodaysheadlinestodayshedflaggif_srchttpwwwnytimescomimagesemailwinenytnewslettersgif_hrefhttpwwwnytimescompagesnationalindexhtmltodaysheadlinemore |
| 416 | energy - epri - svmg - policymakers - infrastructure | 18 | 416_energy_epri_svmg_policymakers |
| 417 | lehman - economy - yield - inflation - market | 18 | 417_lehman_economy_yield_inflation |
| 418 | san - plan - los - energy - summer | 18 | 418_san_plan_los_energy |
| 419 | play - lsu - teams - plays - tournament | 18 | 419_play_lsu_teams_plays |
| 420 | griffith - enronmessagingadministrationenroncom - flows - flow - contract | 18 | 420_griffith_enronmessagingadministrationenroncom_flows_flow |
| 421 | panelists - industrials - industrial - howardfromerenroncom - roberthemstockenroncom | 17 | 421_panelists_industrials_industrial_howardfromerenroncom |
| 422 | deer - lease - hunt - ranch - hunting | 17 | 422_deer_lease_hunt_ranch |
| 423 | entex - ena - exxon - ect - enbrel | 17 | 423_entex_ena_exxon_ect |
| 424 | indexoption - indexes - index - indexforward - indices | 17 | 424_indexoption_indexes_index_indexforward |
| 425 | navajo - tribes - tribe - tribal - excavation | 17 | 425_navajo_tribes_tribe_tribal |
| 426 | energy - html - srchttpwwwenergycentralcomimagesecdecdxgif - srchttpwwwenergycentralcomimagestransgif - srchttpwwwenergycentralcomimagesiconsdiskicongif | 17 | 426_energy_html_srchttpwwwenergycentralcomimagesecdecdxgif_srchttpwwwenergycentralcomimagestransgif |
| 427 | schedules - sf - weekly - ca - francisco | 17 | 427_schedules_sf_weekly_ca |
| 428 | nadzieje - andrzej - polska - pisze - ze | 17 | 428_nadzieje_andrzej_polska_pisze |
| 429 | scholarships - scholarship - fafsa - fastweb - aidscholarship | 17 | 429_scholarships_scholarship_fafsa_fastweb |
| 430 | hourahead - hour - crawler - schedule - failed | 17 | 430_hourahead_hour_crawler_schedule |
| 431 | foolcom - foolmotleyfoolcom - fool - foolwatch - fools | 17 | 431_foolcom_foolmotleyfoolcom_fool_foolwatch |
| 432 | mailbox - outlook - exchangeadministratorenroncom - journaling - journal | 17 | 432_mailbox_outlook_exchangeadministratorenroncom_journaling |
| 433 | pipeline - logistics - pipes - pipe - transports | 17 | 433_pipeline_logistics_pipes_pipe |
| 434 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000037746page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000029730page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000008254page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000077033pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000010552page | 17 | 434_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000037746page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000029730page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000008254page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000077033pageapproval |
| 435 | enron - bankruptcy - bankrupt - enrons - savings | 17 | 435_enron_bankruptcy_bankrupt_enrons |
| 436 | qfs - qf - contracts - receivables - inga | 17 | 436_qfs_qf_contracts_receivables |
| 437 | leagues - league - commissionercom - transaction - httpfootballcommissionercomenron2000 | 17 | 437_leagues_league_commissionercom_transaction |
| 438 | nepco - vepco - dpc - dpca - eeos | 17 | 438_nepco_vepco_dpc_dpca |
| 439 | transformerpleasanton - abb - transformer - transformers - contract | 17 | 439_transformerpleasanton_abb_transformer_transformers |
| 440 | invoices - contract - enfolio - revised - amendedlnk | 17 | 440_invoices_contract_enfolio_revised |
| 441 | sportslinecom - font - td - fantasy - html | 17 | 441_sportslinecom_font_td_fantasy |
| 442 | ttc - marketopshouraheadcaisocom - mktstathouraheadcaisocom - pdci - status | 17 | 442_ttc_marketopshouraheadcaisocom_mktstathouraheadcaisocom_pdci |
| 443 | danleffenroncom - edwardsacksenroncom - martysundeenroncom - jameswrightenroncom - rogeryangenroncom | 17 | 443_danleffenroncom_edwardsacksenroncom_martysundeenroncom_jameswrightenroncom |
| 444 | fitrxcom - specialsfitrxcom - drugstorecom - specials - myoplex | 17 | 444_fitrxcom_specialsfitrxcom_drugstorecom_specials |
| 445 | transmission - contract - buyresale - resale - enronioadraftdoc | 17 | 445_transmission_contract_buyresale_resale |
| 446 | wholesale - utility - customers - ena - commodity | 17 | 446_wholesale_utility_customers_ena |
| 447 | fayeellisenroncom - houstonproductenroncom - edicksonenroncom - dielterryenroncom - dianesalcidoenroncom | 17 | 447_fayeellisenroncom_houstonproductenroncom_edicksonenroncom_dielterryenroncom |
| 448 | enron - enrons - bankruptcy - bankrupt - savings | 17 | 448_enron_enrons_bankruptcy_bankrupt |
| 449 | poem - happiness - inspirational - loving - life | 16 | 449_poem_happiness_inspirational_loving |
| 450 | edicksonenroncom - ecradyenroncom - carolyngeorgeenroncom - andrewedisonenroncom - peterdelenroncom | 16 | 450_edicksonenroncom_ecradyenroncom_carolyngeorgeenroncom_andrewedisonenroncom |
| 451 | enron - enrons - bankruptcy - bankrupt - savings | 16 | 451_enron_enrons_bankruptcy_bankrupt |
| 452 | overview - brochure - meeting - robertsupertyenroncom - cus25220 | 16 | 452_overview_brochure_meeting_robertsupertyenroncom |
| 453 | pira - forecast - piras - demand - reports | 16 | 453_pira_forecast_piras_demand |
| 454 | outlook - outlookteamenroncom - mailbox - mailings - migration | 16 | 454_outlook_outlookteamenroncom_mailbox_mailings |
| 455 | fuelcell - fuel - energyrequest - hillingscorpenron - palmercorpenron | 16 | 455_fuelcell_fuel_energyrequest_hillingscorpenron |
| 456 | technologydailyalertnationaljournalcom - httpnationaljournalcommembersnews2001101024insiderhtm - technology - technologydailyalertownermail2cloakroomcom - httpnationaljournalcommembersnewsbillsbillprogresshtm | 16 | 456_technologydailyalertnationaljournalcom_httpnationaljournalcommembersnews2001101024insiderhtm_technology_technologydailyalertownermail2cloakroomcom |
| 457 | enronannouncementsenroncom - enovate - robertsupertyenroncom - susantrevinoenroncom - eb50m | 16 | 457_enronannouncementsenroncom_enovate_robertsupertyenroncom_susantrevinoenroncom |
| 458 | homecoming - reunion - 25th - dates - chuck | 16 | 458_homecoming_reunion_25th_dates |
| 459 | postmasterenroncom - smtp - mailmanenroncom - enemta01enronrelease - enemta01enroncom | 16 | 459_postmasterenroncom_smtp_mailmanenroncom_enemta01enronrelease |
| 460 | enron - enrononline - weeklyreportenroncom - europeeolenroncom - europe | 16 | 460_enron_enrononline_weeklyreportenroncom_europeeolenroncom |
| 461 | pge - pges - pgt - policies - agreements | 16 | 461_pge_pges_pgt_policies |
| 462 | houstonreportenroncom - outage - outages - scheduled - corp | 16 | 462_houstonreportenroncom_outage_outages_scheduled |
| 463 | liquidations - liquidation - pjm - liquidated - financial | 16 | 463_liquidations_liquidation_pjm_liquidated |
| 464 | craigccalpinecom - socalgas - wreedsempracom - llorenzsocalgascom - tbrillsempracom | 16 | 464_craigccalpinecom_socalgas_wreedsempracom_llorenzsocalgascom |
| 465 | runnersenroncom - track - runners - uscaa - running | 16 | 465_runnersenroncom_track_runners_uscaa |
| 466 | futures - demand - market - cash - bcfday | 16 | 466_futures_demand_market_cash |
| 467 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000057285pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000080512pageapproval - pending - approval - contacting | 16 | 467_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000057285pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000080512pageapproval_pending_approval |
| 468 | dbcaps97data - schedules2002011817txt - schedules2002011718txt - hourahead - schedules2002011717txt | 16 | 468_dbcaps97data_schedules2002011817txt_schedules2002011718txt_hourahead |
| 469 | broadband - providers - provider - isp - modem | 16 | 469_broadband_providers_provider_isp |
| 470 | attend - classes - recruiting - training - schedule | 16 | 470_attend_classes_recruiting_training |
| 471 | mailtonancysellersrobertmondavicom - scottwlhotmailcom - palo - recipient - eweek | 16 | 471_mailtonancysellersrobertmondavicom_scottwlhotmailcom_palo_recipient |
| 472 | suneetsharmaenroncom - geniafitzgeraldenroncom - sharencasonenroncom - jameswrightenroncom - elysekalmansenroncom | 16 | 472_suneetsharmaenroncom_geniafitzgeraldenroncom_sharencasonenroncom_jameswrightenroncom |
| 473 | shellywoodenroncom - cassandraschultzenroncom - brianenroncom - kenneththibodeauxenroncom - johnsherriffenroncom | 16 | 473_shellywoodenroncom_cassandraschultzenroncom_brianenroncom_kenneththibodeauxenroncom |
| 474 | enron - enrononline - httpwwwenroncomcorpinvestors - llc - markets | 16 | 474_enron_enrononline_httpwwwenroncomcorpinvestors_llc |
| 475 | classmatescom - classmates - alumni - alumniaccess - alumniuserboldfish2snowballcom | 15 | 475_classmatescom_classmates_alumni_alumniaccess |
| 476 | hurricane - winds - latitude - weather - bermuda | 15 | 476_hurricane_winds_latitude_weather |
| 477 | ub27045105938paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom - ub27045105390paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom - ub29066108956paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom - ub27045108461paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom - cegirnewsuma2110607joinmailccbncom | 15 | 477_ub27045105938paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom_ub27045105390paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom_ub29066108956paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom_ub27045108461paulzq4dzq4thomaszq2enronzq4comjoinmailccbncom |
| 478 | meeting - meet - lunch - wednesday - tuesday | 15 | 478_meeting_meet_lunch_wednesday |
| 479 | prayer - prayed - pray - praying - thou | 15 | 479_prayer_prayed_pray_praying |
| 480 | attachment - inline - subject - follows - 01vcorpedd4bea3f5113ae5nahoumsmbx01vcorpe | 15 | 480_attachment_inline_subject_follows |
| 481 | downtown - club - membership - clubs - fee | 15 | 481_downtown_club_membership_clubs |
| 482 | pipelines - pipeline - pipe - legislation - senate | 15 | 482_pipelines_pipeline_pipe_legislation |
| 483 | enronfcucom - enron - efcuenroncom - efcu - unionenron | 15 | 483_enronfcucom_enron_efcuenroncom_efcu |
| 484 | danleffenroncom - edisons - kevinkeeneyenroncom - jeremyblachmanenroncom - daviddelaineyenroncom | 15 | 484_danleffenroncom_edisons_kevinkeeneyenroncom_jeremyblachmanenroncom |
| 485 | receipt - received - document - lindastepenovitchbankofamericacom - email | 15 | 485_receipt_received_document_lindastepenovitchbankofamericacom |
| 486 | cnn - cnnmoney - cnns - breakingnewsmailcnncom - cnncoms | 15 | 486_cnn_cnnmoney_cnns_breakingnewsmailcnncom |
| 487 | enron - enronoptionsmailingenroncom - enronoptions - enronbenefits - enronannouncementsenroncom | 15 | 487_enron_enronoptionsmailingenroncom_enronoptions_enronbenefits |
| 488 | texasspp - aeps - aep - markets - texas | 15 | 488_texasspp_aeps_aep_markets |
| 489 | enron - financing - banks - bank - funding | 15 | 489_enron_financing_banks_bank |
| 490 | econnectvpn - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000079989pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000079466pageapproval - econnect - approval | 15 | 490_econnectvpn_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000079989pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000079466pageapproval_econnect |
| 491 | eb3892 - eb3817 - eb - fax7136463490 - eb3880d | 15 | 491_eb3892_eb3817_eb_fax7136463490 |
| 492 | tolkien - tolkiens - taliban - qaeda - nytimescom | 15 | 492_tolkien_tolkiens_taliban_qaeda |
| 493 | enrononline5enroncom - aep - enrononline3enroncom - enrononline2enroncom - enrononline6enroncom | 15 | 493_enrononline5enroncom_aep_enrononline3enroncom_enrononline2enroncom |
| 494 | pricing - commodity - price - fuel - uranium | 15 | 494_pricing_commodity_price_fuel |
| 495 | schedules2002011506txt - schedules2002011206txt - schedules2002020206txt - schedules2002012506txt - schedules2002011006txt | 15 | 495_schedules2002011506txt_schedules2002011206txt_schedules2002020206txt_schedules2002012506txt |
| 496 | enrononline - approval - brochure - review - postcard | 15 | 496_enrononline_approval_brochure_review |
| 497 | lexisnexis - lexis - certifinder - enronpubrecdoc - filings | 15 | 497_lexisnexis_lexis_certifinder_enronpubrecdoc |
| 498 | reviewers - reviewed - prc - approved - perfmgmtenroncomenron | 15 | 498_reviewers_reviewed_prc_approved |
| 499 | payroll - germanyhouectect - germanyenroncom - accounts - germanjbasfcorpcom | 15 | 499_payroll_germanyhouectect_germanyenroncom_accounts |
| 500 | taxbills - foundmoneycom - cash - grants - foundmoney | 15 | 500_taxbills_foundmoneycom_cash_grants |
| 501 | sleuth - searches - company - research - infonautics | 15 | 501_sleuth_searches_company_research |
| 502 | gasindexlistintcxcom - powerindexlistintcxcom - index - indices - intercontinentalexchange | 15 | 502_gasindexlistintcxcom_powerindexlistintcxcom_index_indices |
| 503 | directv - channels20 - channels - satellite - free | 15 | 503_directv_channels20_channels_satellite |
| 504 | berneycaucoinenroncom - markbernsteinenroncom - joequenetenroncom - edwardbaughmanenroncom - danadavisenroncom | 15 | 504_berneycaucoinenroncom_markbernsteinenroncom_joequenetenroncom_edwardbaughmanenroncom |
| 505 | pipeline - pressures - pressure - psi - compressor | 15 | 505_pipeline_pressures_pressure_psi |
| 506 | enrononline - trader - gcpenron - traders - desk | 14 | 506_enrononline_trader_gcpenron_traders |
| 507 | managementcrd - managementgas - portfolioid - effdt - 1222000 | 14 | 507_managementcrd_managementgas_portfolioid_effdt |
| 508 | nysrc - reliability - nyisotechexchangeglobal2000net - nys - compliance | 14 | 508_nysrc_reliability_nyisotechexchangeglobal2000net_nys |
| 509 | lsu - going - tennessee - tailgating - shreveport | 14 | 509_lsu_going_tennessee_tailgating |
| 510 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000057285pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000059240pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000080512pageapproval - pending - approval | 14 | 510_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000057285pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000059240pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000080512pageapproval_pending |
| 511 | dynegydirect - dynegy - dynegymarand - dynergywpd - customercaredynegycom | 14 | 511_dynegydirect_dynegy_dynegymarand_dynergywpd |
| 512 | schedules2001101503txt - schedules2001101614txt - schedules2001101620txt - schedules2001101821txt - schedules2002011316txt | 14 | 512_schedules2001101503txt_schedules2001101614txt_schedules2001101620txt_schedules2001101821txt |
| 513 | jewelry - earrings - diamonds - diamond - gemstone | 14 | 513_jewelry_earrings_diamonds_diamond |
| 514 | hotel - florence - sicily - palermo - naples | 14 | 514_hotel_florence_sicily_palermo |
| 515 | transaction - datek - order - delivery - mailbotdatekcom | 14 | 515_transaction_datek_order_delivery |
| 516 | rodeo - rodeodoc - rodeoget - rodeocarnival - houston | 14 | 516_rodeo_rodeodoc_rodeoget_rodeocarnival |
| 517 | sheetalpatelenroncom - larrycashenroncom - angiecollinsenroncom - drewhillenroncom - gregmcclendonenroncom | 14 | 517_sheetalpatelenroncom_larrycashenroncom_angiecollinsenroncom_drewhillenroncom |
| 518 | riskopstm - riskops - riskmetrics - enterprise - risk | 14 | 518_riskopstm_riskops_riskmetrics_enterprise |
| 519 | jdasovicenroncom - ehughes2enroncom - dblackenroncom - bfrankenroncom - kgustafsenroncom | 14 | 519_jdasovicenroncom_ehughes2enroncom_dblackenroncom_bfrankenroncom |
| 520 | enrononline - billfoxenroncom - eolmsteadenroncom - johnsherriffenroncom - josephhirlenroncom | 14 | 520_enrononline_billfoxenroncom_eolmsteadenroncom_johnsherriffenroncom |
| 521 | confirms - confirmations - confirming - deals - confirmation | 14 | 521_confirms_confirmations_confirming_deals |
| 522 | edison - hydroelectric - securitization - belowmarketpricepower - perpetual | 14 | 522_edison_hydroelectric_securitization_belowmarketpricepower |
| 523 | lynn - glen - laura - gisb - lindley | 14 | 523_lynn_glen_laura_gisb |
| 524 | pcas - pca - bankruptcy - creditors - filing | 14 | 524_pcas_pca_bankruptcy_creditors |
| 525 | enron - enrons - bankruptcy - bankrupt - savings | 14 | 525_enron_enrons_bankruptcy_bankrupt |
| 526 | edwardsacksenroncom - larryhunterenroncom - kevinmeredithenroncom - bernicerodriguezenroncom - adnanpatelenroncom | 14 | 526_edwardsacksenroncom_larryhunterenroncom_kevinmeredithenroncom_bernicerodriguezenroncom |
| 527 | cassandraschultzenroncom - thompsonenroncom - toddhallenroncom - susanlairdenroncom - ews | 14 | 527_cassandraschultzenroncom_thompsonenroncom_toddhallenroncom_susanlairdenroncom |
| 528 | rigzone - newsletterrigzonecom - news - httpwwwrigzonecomnewslettershowaspnid207 - httpwwwrigzonecomnewslettershowaspnid195 | 14 | 528_rigzone_newsletterrigzonecom_news_httpwwwrigzonecomnewslettershowaspnid207 |
| 529 | nng - nngs - nnorth - capacity - customers | 14 | 529_nng_nngs_nnorth_capacity |
| 530 | halloween - costumes - pumpkin - decorations - dressed | 14 | 530_halloween_costumes_pumpkin_decorations |
| 531 | izio - notified - izioautoemailconvenecom - email - iziosupportconvenecom | 14 | 531_izio_notified_izioautoemailconvenecom_email |
| 532 | berkeley - achenusibmcom - nealehaasberkeleyedu - srobinsohaasberkeleyedu - sboylehaasberkeleyedu | 14 | 532_berkeley_achenusibmcom_nealehaasberkeleyedu_srobinsohaasberkeleyedu |
| 533 | economist - economistcom - economists - economics - economy | 14 | 533_economist_economistcom_economists_economics |
| 534 | energystorage - storage - ziff - gas - ziffs | 14 | 534_energystorage_storage_ziff_gas |
| 535 | enrondoc - chase - confirms - agreement - ena | 14 | 535_enrondoc_chase_confirms_agreement |
| 536 | balances - balance - imbalance - balancing - imbalances | 14 | 536_balances_balance_imbalance_balancing |
| 537 | agreements - ecc - weatherinsurancetransformer - counterparties - isda | 14 | 537_agreements_ecc_weatherinsurancetransformer_counterparties |
| 538 | turbines - gen - generation - energypanda - emissions | 13 | 538_turbines_gen_generation_energypanda |
| 539 | telecommunications - telecom - fcc - carriers - communications | 13 | 539_telecommunications_telecom_fcc_carriers |
| 540 | venezuelan - chavez - venezuela - chavezs - venezuelas | 13 | 540_venezuelan_chavez_venezuela_chavezs |
| 541 | capacity - basin - 25000 - northwest - available | 13 | 541_capacity_basin_25000_northwest |
| 542 | borland - schedules2001040402txt - schedules2001040403txt - schedules2001102118txt - schedules2001040205txt | 13 | 542_borland_schedules2001040402txt_schedules2001040403txt_schedules2001102118txt |
| 543 | outlookteamenroncom - billwillmanenroncom - pennycraneenroncom - andregibsonenroncom - stephenstenhouseenroncom | 13 | 543_outlookteamenroncom_billwillmanenroncom_pennycraneenroncom_andregibsonenroncom |
| 544 | enrondoc - contract - authorized - trailblazeropen - approved | 13 | 544_enrondoc_contract_authorized_trailblazeropen |
| 545 | fontstyle - fontweightbold - fontfamily - bold - fontweight | 13 | 545_fontstyle_fontweightbold_fontfamily_bold |
| 546 | agreements - compression - compressor - tws - agreement | 13 | 546_agreements_compression_compressor_tws |
| 547 | enron - louisekitchenenroncom - assistant - tammieschoppeenroncom - elizabethlayenroncom | 13 | 547_enron_louisekitchenenroncom_assistant_tammieschoppeenroncom |
| 548 | tickets - concert - ticket - tour - coliseum | 13 | 548_tickets_concert_ticket_tour |
| 549 | ews - brownbag - eb - lunch - bag | 13 | 549_ews_brownbag_eb_lunch |
| 550 | accounting - liquidations - financialssettlements - settlement - balance | 13 | 550_accounting_liquidations_financialssettlements_settlement |
| 551 | writeup - assignment - write - exam - tomorrow | 13 | 551_writeup_assignment_write_exam |
| 552 | tklohmanenroncom - dthd - logistics - ets - 3143b | 13 | 552_tklohmanenroncom_dthd_logistics_ets |
| 553 | louisiana - pipeline - 800l - gasindexlistintcxcom - index | 13 | 553_louisiana_pipeline_800l_gasindexlistintcxcom |
| 554 | martysundeenroncom - scottnealenroncom - thomaswhiteenroncom - russelldykenroncom - berneyaucoinenroncom | 13 | 554_martysundeenroncom_scottnealenroncom_thomaswhiteenroncom_russelldykenroncom |
| 555 | transactions - trade - counts - cnt - count | 13 | 555_transactions_trade_counts_cnt |
| 556 | mgs - mgl - mg - mgi - mgu | 13 | 556_mgs_mgl_mg_mgi |
| 557 | profit - revenue - profits - seminar - profitable | 13 | 557_profit_revenue_profits_seminar |
| 558 | aps - pinnacle - pinnaclewe - west - confirms | 13 | 558_aps_pinnacle_pinnaclewe_west |
| 559 | sadlersaearthlinknet - edson - cjrudmansoftcomnet - robbizcwocom - cmichelicarpentersnodgrasscom | 13 | 559_sadlersaearthlinknet_edson_cjrudmansoftcomnet_robbizcwocom |
| 560 | enron - emw - leefascettienroncom - wwhiteenroncom - mloveenroncom | 13 | 560_enron_emw_leefascettienroncom_wwhiteenroncom |
| 561 | rockets - rocketscom - rocketsspurs - tickets - nba | 13 | 561_rockets_rocketscom_rocketsspurs_tickets |
| 562 | peak - enrons - prices - erv - power | 13 | 562_peak_enrons_prices_erv |
| 563 | billcordesenroncom - larryderoinenroncom - janetplaceenroncom - debbiethompsonenroncom - leehuberenroncom | 13 | 563_billcordesenroncom_larryderoinenroncom_janetplaceenroncom_debbiethompsonenroncom |
| 564 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000041547pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000053239pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000070996pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081784pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081482pageapproval | 13 | 564_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000041547pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000053239pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000070996pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000081784pageapproval |
| 565 | kudlow - kudlow102601doc - kudlowcom - kudlows - wwwkudlowcom | 13 | 565_kudlow_kudlow102601doc_kudlowcom_kudlows |
| 566 | outage - outages - scheduled - maintenance - 500pm | 13 | 566_outage_outages_scheduled_maintenance |
| 567 | prospects - prospect - businesses - industries - trueadvantages | 13 | 567_prospects_prospect_businesses_industries |
| 568 | outlook - outlookteamenroncom - appointment - tms - conference | 13 | 568_outlook_outlookteamenroncom_appointment_tms |
| 569 | kevinhannonenroncom - inezdauteriveenroncom - markkoenigenroncom - kenriceenroncom - thomaswhiteenroncom | 13 | 569_kevinhannonenroncom_inezdauteriveenroncom_markkoenigenroncom_kenriceenroncom |
| 570 | tariff - san - juan - outage - efficiencies | 13 | 570_tariff_san_juan_outage |
| 571 | enron03 - enron06 - enron04 - enron05 - enron02 | 13 | 571_enron03_enron06_enron04_enron05 |
| 572 | leagues - httpfootballfantasysportslinecomspoogers - httpfootballfantasysportslinecomgetbad - rosters - notification | 13 | 572_leagues_httpfootballfantasysportslinecomspoogers_httpfootballfantasysportslinecomgetbad_rosters |
| 573 | credit2b - credit - financial - liabilities - riskcalc | 13 | 573_credit2b_credit_financial_liabilities |
| 574 | ca - costigan - sacramento - governor - california | 13 | 574_ca_costigan_sacramento_governor |
| 575 | pgev - contract - deal - deals - nomd | 12 | 575_pgev_contract_deal_deals |
| 576 | schedulingiso - schedules2001102003txt - schedules2001101702txt - schedules2002020515txt - schedules2001102006txt | 12 | 576_schedulingiso_schedules2001102003txt_schedules2001101702txt_schedules2002020515txt |
| 577 | enrononline - enrons - johnsherriffenroncom - markkoenigenroncom - jeffskillingenroncom | 12 | 577_enrononline_enrons_johnsherriffenroncom_markkoenigenroncom |
| 578 | lewisworthyenroncom - ecp - kevinlissenroncom - filed - fcc | 12 | 578_lewisworthyenroncom_ecp_kevinlissenroncom_filed |
| 579 | pjm - portfolio - portfolioid - cct - managementcrd | 12 | 579_pjm_portfolio_portfolioid_cct |
| 580 | markets - demand - prices - refined - oilrefined | 12 | 580_markets_demand_prices_refined |
| 581 | cipco - cipcos - ntec - coop - iowa | 12 | 581_cipco_cipcos_ntec_coop |
| 582 | migrating - migration - gem - 2000 - application | 12 | 582_migrating_migration_gem_2000 |
| 583 | ge - ges - contract - indemnity - schwartzenburgenrondevelopmentenrondevelopment | 12 | 583_ge_ges_contract_indemnity |
| 584 | larrydallmanenroncom - subsidiaries - sapdataenroncom - datasapenroncom - thomasmooreenroncom | 12 | 584_larrydallmanenroncom_subsidiaries_sapdataenroncom_datasapenroncom |
| 585 | spread - value - cumulative - pgespreadvaluationdefinitiondoc - calculation | 12 | 585_spread_value_cumulative_pgespreadvaluationdefinitiondoc |
| 586 | etrade - etradecom - etrades - ameritrade020717ameritradecustomeremailcom - ameritrade | 12 | 586_etrade_etradecom_etrades_ameritrade020717ameritradecustomeremailcom |
| 587 | bpats - tariff - transmission - policy - oasispostingsllisttransmissionbpagov | 12 | 587_bpats_tariff_transmission_policy |
| 588 | bouquet - flowers - gifting - gifts - gift | 12 | 588_bouquet_flowers_gifting_gifts |
| 589 | nymex - trades - gdnymex - deals - contracts | 12 | 589_nymex_trades_gdnymex_deals |
| 590 | exchangeadministratorenroncom - recipients - email - attachment - mailtosandymbramantireliantenergycom | 12 | 590_exchangeadministratorenroncom_recipients_email_attachment |
| 591 | newpowercommunicationnewpowercom - newpower - newpowers - newpowertm - wwwnewpowercom | 12 | 591_newpowercommunicationnewpowercom_newpower_newpowers_newpowertm |
| 592 | daviddelaineyenroncom - geniafitzgeraldenroncom - markguzmanenroncom - billwilliamsenroncom - brianbierbachenroncom | 12 | 592_daviddelaineyenroncom_geniafitzgeraldenroncom_markguzmanenroncom_billwilliamsenroncom |
| 593 | powerindexlistintcxcom - hrefhttpwwwintcxcomsubscriberservletsubscriberservletclassoperationpowerindexformhuballindex - indices - hrefmailtosalesintcxcomsalesintcxcoma - hrefhttpwwwintcxcomsubscribehtmlunsubscribea | 12 | 593_powerindexlistintcxcom_hrefhttpwwwintcxcomsubscriberservletsubscriberservletclassoperationpowerindexformhuballindex_indices_hrefmailtosalesintcxcomsalesintcxcoma |
| 594 | nytimescom - newsletters - httpwwwnytimescompagesbusinessindexhtmltodaysheadline - httpwwwnytimescompagestechnologyindexhtmltodaysheadline - httpwwwnytimescompagesnationalindexhtmltodaysheadline | 12 | 594_nytimescom_newsletters_httpwwwnytimescompagesbusinessindexhtmltodaysheadline_httpwwwnytimescompagestechnologyindexhtmltodaysheadline |
| 595 | pricing - prices - price - offer - prepaid | 12 | 595_pricing_prices_price_offer |
| 596 | cp - cps - approved - power - approvals | 12 | 596_cp_cps_approved_power |
| 597 | larrylawyerenroncom - bethperlmanenroncom - stephenwallaceenroncom - crookjodyenroncom - dbaughmanenroncom | 12 | 597_larrylawyerenroncom_bethperlmanenroncom_stephenwallaceenroncom_crookjodyenroncom |
| 598 | enronbridgeline - bridgeline - enrontexaco - texacobridgeline - rogerwestbrookenroncom | 12 | 598_enronbridgeline_bridgeline_enrontexaco_texacobridgeline |
| 599 | outage - outages - scheduled - maintenance - impacted | 12 | 599_outage_outages_scheduled_maintenance |
| 600 | approver - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000012734page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000009397page - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000070768pagemyreq - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000005413page | 12 | 600_approver_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000012734page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000009397page_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000070768pagemyreq |
| 601 | pjm - pjmcustomerrelationsandtrainingpjmcom - wwwpjmcom - ownerpjmcustomerinfopjmcom - pjmcustomerinforisc1pjmcom | 12 | 601_pjm_pjmcustomerrelationsandtrainingpjmcom_wwwpjmcom_ownerpjmcustomerinfopjmcom |
| 602 | swap - swaps - scheduled - revisions - reworked | 12 | 602_swap_swaps_scheduled_revisions |
| 603 | forecasts - prices - rockies - pricesmwh - peakheavy | 12 | 603_forecasts_prices_rockies_pricesmwh |
| 604 | hrefhttpwwwenergynewslivecomexperience - newscast - hrefhttpwwwrealcomreal - srchttpwwwenergynewslivecomcontentemailgraphicssignaturejpg - link0000ff | 12 | 604_hrefhttpwwwenergynewslivecomexperience_newscast_hrefhttpwwwrealcomreal_srchttpwwwenergynewslivecomcontentemailgraphicssignaturejpg |
| 605 | geocities - hosting - yahoo - internet - web | 12 | 605_geocities_hosting_yahoo_internet |
| 606 | wti - barrels - transacting - trades - crude | 12 | 606_wti_barrels_transacting_trades |
| 607 | realnetworksreg - realnetworks - realjukebox - realjukeboxreg - realplayer | 11 | 607_realnetworksreg_realnetworks_realjukebox_realjukeboxreg |
| 608 | approved - approval - approve - enrononline - eol | 11 | 608_approved_approval_approve_enrononline |
| 609 | equityalertcom - equityalert - alertsequityalertcom - alert - disclaimer | 11 | 609_equityalertcom_equityalert_alertsequityalertcom_alert |
| 610 | aol - aim - messenger - aolcom - aimaolcomredirectsaimemailaimhomehtml | 11 | 610_aol_aim_messenger_aolcom |
| 611 | outage - email - delivery - customerserviceindustrialinfocom - updates | 11 | 611_outage_email_delivery_customerserviceindustrialinfocom |
| 612 | capacity - allocations - allocating - overruns - customers | 11 | 612_capacity_allocations_allocating_overruns |
| 613 | mailbox - mail - delete - folder - deleting | 11 | 613_mailbox_mail_delete_folder |
| 614 | powermarketerscom - monthly - 196year - wholesale - merchant | 11 | 614_powermarketerscom_monthly_196year_wholesale |
| 615 | meeting - allworldwideenroncom - interoffice - houston - streamed | 11 | 615_meeting_allworldwideenroncom_interoffice_houston |
| 616 | tdickerswesterngascom - trdillwesternhubscom - porterdkscecom - askaffenergylawgroupcom - opening | 11 | 616_tdickerswesterngascom_trdillwesternhubscom_porterdkscecom_askaffenergylawgroupcom |
| 617 | enron - enronmessagingadministrationenroncom - enroncom - email - outlook | 11 | 617_enron_enronmessagingadministrationenroncom_enroncom_email |
| 618 | discount - discounts - deal - negotiated - price | 11 | 618_discount_discounts_deal_negotiated |
| 619 | enron - enrononline - dynegy - dynegyice - exchange | 11 | 619_enron_enrononline_dynegy_dynegyice |
| 620 | ca - iso - california - compliance - caiso | 11 | 620_ca_iso_california_compliance |
| 621 | outlook - outlookteamenroncom - appointment - calendar - buchanan | 11 | 621_outlook_outlookteamenroncom_appointment_calendar |
| 622 | schedules2001022813txt - schedules2001030802txt - schedules2001041822txt - schedules2001030805txt - schedules2001030801txt | 11 | 622_schedules2001022813txt_schedules2001030802txt_schedules2001041822txt_schedules2001030805txt |
| 623 | enerfax - httpwwwenerfaxcom - enerfaxwebegroupscom - enerfax1bellsouthnet - dailys | 11 | 623_enerfax_httpwwwenerfaxcom_enerfaxwebegroupscom_enerfax1bellsouthnet |
| 624 | ebill - reliantenergyebillcarecom - payment - billpay - httpsbankingwellsfargocom | 11 | 624_ebill_reliantenergyebillcarecom_payment_billpay |
| 625 | michaelbriggsreliantenergycom - smcfaddenigservicecom - benledeneaecca - spb1cwclawcom - kpooleadamsbroadwellcom | 11 | 625_michaelbriggsreliantenergycom_smcfaddenigservicecom_benledeneaecca_spb1cwclawcom |
| 626 | futures - charts - spreads - crude - httpwwwcarrfutcomresearchenergy1heatspread25pdf | 11 | 626_futures_charts_spreads_crude |
| 627 | dayahead - crawler - schedule - failed - error | 11 | 627_dayahead_crawler_schedule_failed |
| 628 | enronfacilitiesenroncom - enroncounterpartyenroncom - fayeellisenroncom - susantrevinoenroncom - scottlytleenroncom | 11 | 628_enronfacilitiesenroncom_enroncounterpartyenroncom_fayeellisenroncom_susantrevinoenroncom |
| 629 | receipt - recipient - mail - send - enronmessag | 11 | 629_receipt_recipient_mail_send |
| 630 | venture - ventures - venturewire - investors - investments | 11 | 630_venture_ventures_venturewire_investors |
| 631 | calculated - contractual - payment - amount - transaction | 11 | 631_calculated_contractual_payment_amount |
| 632 | rigzone - newsletterrigzonecom - newsletter - news - httpwwwrigzonecomnewslettershowaspnid201 | 11 | 632_rigzone_newsletterrigzonecom_newsletter_news |
| 633 | solarc - sappi - garetshawenroncom - reports - billcaseyenroncom | 11 | 633_solarc_sappi_garetshawenroncom_reports |
| 634 | danleffenroncom - martysundeenroncom - robertjohnsonenroncom - powerplants - neilbresnanenroncom | 11 | 634_danleffenroncom_martysundeenroncom_robertjohnsonenroncom_powerplants |
| 635 | forestweb - forest - news - newsbeat - forests | 11 | 635_forestweb_forest_news_newsbeat |
| 636 | antiviral - hiv - pharmaceuticals - clinical - treatment | 11 | 636_antiviral_hiv_pharmaceuticals_clinical |
| 637 | swap - amended - exchange - enron - termination | 11 | 637_swap_amended_exchange_enron |
| 638 | enronmetals - enronnetworks - enrononline - metals - aluminium | 11 | 638_enronmetals_enronnetworks_enrononline_metals |
| 639 | esmtp - postmasterenroncom - efittswebtvnet - smtp - smtpin1029bryantwebtvnet | 10 | 639_esmtp_postmasterenroncom_efittswebtvnet_smtp |
| 640 | bermudan - bermuda - reinsurance - arbitrage - eb | 10 | 640_bermudan_bermuda_reinsurance_arbitrage |
| 641 | soccer - league - canfield - teams - practice | 10 | 641_soccer_league_canfield_teams |
| 642 | enron - enrononline - eims - eim - freight | 10 | 642_enron_enrononline_eims_eim |
| 643 | send - yahoo - mail - emails - megrogers99yahoocom | 10 | 643_send_yahoo_mail_emails |
| 644 | meeting - meet - lunch - tuesday - saturday | 10 | 644_meeting_meet_lunch_tuesday |
| 645 | schedule - 10am - cst - thursday - meeting | 10 | 645_schedule_10am_cst_thursday |
| 646 | enron - enrons - bankruptcy - bankrupt - savings | 10 | 646_enron_enrons_bankruptcy_bankrupt |
| 647 | sell - selling - sale - sold - 25mw | 10 | 647_sell_selling_sale_sold |
| 648 | aps - transmission - enronmay2001xls - tranny - congestion | 10 | 648_aps_transmission_enronmay2001xls_tranny |
| 649 | bids - marketrelationsnyisocom - nyisotechexchange - bidding - auction | 10 | 649_bids_marketrelationsnyisocom_nyisotechexchange_bidding |
| 650 | httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076650pageapproval - httpitcappscorpenroncomsrrsauthemaillinkaspid000000000059231pageapproval - pending - approval - arsystemmailmanenroncom | 10 | 650_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000076650pageapproval_httpitcappscorpenroncomsrrsauthemaillinkaspid000000000059231pageapproval_pending_approval |
| 651 | texas - austin - ut - campus - colleges | 10 | 651_texas_austin_ut_campus |
| 652 | gamma - optiongrid - gammas - ltoptions - parameters | 10 | 652_gamma_optiongrid_gammas_ltoptions |
| 653 | enrononline - ehaedickeenroncom - taylorenroncom - tarasweitzerenroncom - sbradfordenroncom | 10 | 653_enrononline_ehaedickeenroncom_taylorenroncom_tarasweitzerenroncom |
| 654 | promotions - manager - althaus - director - vp | 10 | 654_promotions_manager_althaus_director |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
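These settings map one-to-one onto BERTopic's constructor; a hedged sketch of an equivalent instantiation (the embedding model and UMAP/HDBSCAN sub-models are not recorded in this card, so library defaults are assumed):

```python
from bertopic import BERTopic

# Mirrors the hyperparameters listed above; anything not listed
# (embedding model, UMAP/HDBSCAN settings) stays at its default,
# which is an assumption -- the card does not record those choices.
topic_model = BERTopic(
    calculate_probabilities=False,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
```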
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.1
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
| {"library_name": "bertopic", "tags": ["bertopic"], "pipeline_tag": "text-classification"} | antulik/BERTopic-enron-50k | null | [
"bertopic",
"text-classification",
"region:us"
] | null | 2024-05-02T09:27:22+00:00 | [] | [] | TAGS
#bertopic #text-classification #region-us
| BERTopic-enron-50k
==================
This is a BERTopic model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
Usage
-----
To use this model, please install BERTopic:
You can use the model as follows:
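A minimal sketch of both steps, using BERTopic's documented hub-loading API (the model ID is taken from this repo; the sample document is illustrative):

```python
# Install first (shell command, shown as a comment):
#   pip install -U bertopic

from bertopic import BERTopic

# Load the trained topic model straight from the Hugging Face Hub.
topic_model = BERTopic.load("antulik/BERTopic-enron-50k")

# Inspect the discovered topics: IDs, sizes, and top keywords.
print(topic_model.get_topic_info().head())

# Assign topics to unseen documents, e.g. new email bodies.
topics, probs = topic_model.transform(["Please review the attached gas schedule for tomorrow."])
print(topics)
```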
Topic overview
--------------
* Number of topics: 656
* Number of training documents: 50000
Click here for an overview of all topics.
Training hyperparameters
------------------------
* calculate\_probabilities: False
* language: english
* low\_memory: False
* min\_topic\_size: 10
* n\_gram\_range: (1, 1)
* nr\_topics: None
* seed\_topic\_list: None
* top\_n\_words: 10
* verbose: False
* zeroshot\_min\_similarity: 0.7
* zeroshot\_topic\_list: None
Framework versions
------------------
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.1
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
| [] | [
"TAGS\n#bertopic #text-classification #region-us \n"
] | [
13
] | [
"TAGS\n#bertopic #text-classification #region-us \n"
] |
null | null | What is Dozerex Tablet?
Dozerex is a premium-quality men's health capsule formulated to support fitness and energy levels. Its advanced formula combines a synergistic blend of vitamins, minerals, and herbal extracts, specially selected to promote optimal health and well-being in men.
Official website:<a href="https://www.nutritionsee.com/dozermlaysi">www.Dozerex.com</a>
<p><a href="https://www.nutritionsee.com/dozermlaysi"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Dozerex-Malaysia-1.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/dozermlaysi">Beli sekarang!! Klik pautan di bawah untuk maklumat lanjut dan dapatkan diskaun 50% sekarang... Cepat</a>
Laman web rasmi:<a href="https://www.nutritionsee.com/dozermlaysi">www.Dozerex.com</a> | {"license": "apache-2.0"} | Dozerex/DozerexMalaysia | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T09:27:47+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| What is Dozerex Tablet?
Dozerex is a premium-quality men's health capsule formulated to support fitness and energy levels. Its advanced formula combines a synergistic blend of vitamins, minerals, and herbal extracts, specially selected to promote optimal health and well-being in men.
Official website:<a href="URL
<p><a href="URL <img src="URL alt="enter image description here"> </a></p>
<a href="URL sekarang!! Klik pautan di bawah untuk maklumat lanjut dan dapatkan diskaun 50% sekarang... Cepat</a>
Laman web rasmi:<a href="URL | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] | [
13
] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
text-generation | transformers | # merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
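Passthrough concatenates the selected layer ranges from the source model(s) into the output without any weight interpolation; here the configuration keeps layers 0-23 and 37-39 of the 40-layer Llama-2-13b and drops the middle block, shrinking the network from 40 to 27 layers (roughly the 8B scale reflected in the model name). A hedged sketch of applying such a config with the mergekit CLI (file and output paths are illustrative):

```python
import subprocess

# "config.yaml" would contain the YAML shown below under Configuration;
# the output directory name is illustrative. mergekit-yaml is mergekit's
# standard command-line entry point for YAML-driven merges.
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./llama-2-8b-pruned"],
    check=True,
)
```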
### Models Merged
The following models were included in the merge:
* [NousResearch/Llama-2-13b-hf](https://huggingface.co/NousResearch/Llama-2-13b-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 24]
model: NousResearch/Llama-2-13b-hf
- sources:
- layer_range: [37, 40]
model: NousResearch/Llama-2-13b-hf
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Llama-2-13b-hf"]} | Syed-Hasan-8503/Llama-2-8B-Pruned | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:NousResearch/Llama-2-13b-hf",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:28:25+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-NousResearch/Llama-2-13b-hf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merged
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* NousResearch/Llama-2-13b-hf
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merged\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the passthrough merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Llama-2-13b-hf",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-NousResearch/Llama-2-13b-hf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merged\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the passthrough merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Llama-2-13b-hf",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
68,
17,
4,
18,
31,
16
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-NousResearch/Llama-2-13b-hf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# merged\n\nThis is a merge of pre-trained language models created using mergekit.## Merge Details### Merge Method\n\nThis model was merged using the passthrough merge method.### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Llama-2-13b-hf### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
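A minimal hedged sketch based only on the repo tags (llama, text-generation, conversational, 8-bit); the chat-template usage, sample prompt, and generation settings are assumptions, not documented by this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "vaatsav06/Llama3_medqa_finetune"  # repo ID for this card

# The repo is tagged 8-bit, so load with bitsandbytes int8 quantization.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Llama-3 instruct checkpoints usually ship a chat template (assumed here).
messages = [{"role": "user", "content": "What are common symptoms of iron-deficiency anemia?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```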
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | vaatsav06/Llama3_medqa_finetune | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-02T09:29:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
51,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3-8B-Instruct-MoE-spider
This model is a fine-tuned version of [VictorDCh/Llama-3-8B-Instruct-MoE](https://huggingface.co/VictorDCh/Llama-3-8B-Instruct-MoE) on the generator dataset.
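A hedged sketch of loading the adapter on top of its base model with the peft library (dtype, device placement, and the sample prompt are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "VictorDCh/Llama-3-8B-Instruct-MoE"            # base model named above
adapter_id = "VictorDCh/Llama-3-8B-Instruct-MoE-spider"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # dtype is an assumption
)
model = PeftModel.from_pretrained(base, adapter_id)

# The "-spider" suffix suggests text-to-SQL; this prompt is illustrative only.
prompt = "Translate to SQL: list the names of all singers older than 30."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```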
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "VictorDCh/Llama-3-8B-Instruct-MoE", "model-index": [{"name": "Llama-3-8B-Instruct-MoE-spider", "results": []}]} | VictorDCh/Llama-3-8B-Instruct-MoE-spider | null | [
"peft",
"tensorboard",
"safetensors",
"mixtral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:VictorDCh/Llama-3-8B-Instruct-MoE",
"region:us"
] | null | 2024-05-02T09:30:48+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #mixtral #trl #sft #generated_from_trainer #dataset-generator #base_model-VictorDCh/Llama-3-8B-Instruct-MoE #region-us
|
# Llama-3-8B-Instruct-MoE-spider
This model is a fine-tuned version of VictorDCh/Llama-3-8B-Instruct-MoE on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"# Llama-3-8B-Instruct-MoE-spider\n\nThis model is a fine-tuned version of VictorDCh/Llama-3-8B-Instruct-MoE on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #mixtral #trl #sft #generated_from_trainer #dataset-generator #base_model-VictorDCh/Llama-3-8B-Instruct-MoE #region-us \n",
"# Llama-3-8B-Instruct-MoE-spider\n\nThis model is a fine-tuned version of VictorDCh/Llama-3-8B-Instruct-MoE on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] | [
56,
46,
7,
9,
9,
4,
128,
5,
55
] | [
"TAGS\n#peft #tensorboard #safetensors #mixtral #trl #sft #generated_from_trainer #dataset-generator #base_model-VictorDCh/Llama-3-8B-Instruct-MoE #region-us \n# Llama-3-8B-Instruct-MoE-spider\n\nThis model is a fine-tuned version of VictorDCh/Llama-3-8B-Instruct-MoE on the generator dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1### Training results### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azhara001/donut-base-demo-final_1e-05_Adam | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:32:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
35,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
YetAnother_Open-Llama-3B-LoRA-OpenOrca - bnb 4bits
- Model creator: https://huggingface.co/Andron00e/
- Original model: https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca/
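A minimal loading sketch for this 4-bit checkpoint (assumes `bitsandbytes` and a CUDA GPU are available; the prompt format is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The bnb 4-bit quantization config is stored in the repo and picked up automatically.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Question: What is OpenOrca?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```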
Original model description:
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: question-answering
metrics:
- accuracy
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Andron00e
- **Language(s) (NLP):** Python (PyTorch, transformers, peft)
- **License:** apache-2.0
- **Finetuned from model:** openlm-research/open_llama_3b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Andron00e/Fine-Tuning-project
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Open-Orca/OpenOrca
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Evaluation of the model was carried out using the EleutherAI evaluation library, more [precisely](https://github.com/EleutherAI/lm-evaluation-harness/tree/e47e01beea79cfe87421e2dac49e64d499c240b4#task-versioning) the pinned revision of lm-evaluation-harness linked here.
#### Testing Data
<!-- This should link to a Data Card if possible. -->
hellaswag testing dataset
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Accuracy
### Results and Model Examination
| Task |Version| Metric |Value | |Stderr|
|---------|------:|--------|-----:|---|-----:|
|hellaswag| 0|acc |0.4899|± |0.0050|
| | |acc_norm|0.6506|± |0.0048|
## Citations
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}
```
## Model Card Authors and Contact
[Andron00e](https://github.com/Andron00e)
| {} | RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T09:33:24+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
YetAnother\_Open-Llama-3B-LoRA-OpenOrca - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
license: apache-2.0
datasets:
* Open-Orca/OpenOrca
language:
* en
library\_name: transformers
pipeline\_tag: question-answering
metrics:
* accuracy
---
Model Details
-------------
### Model Description
* Developed by: Andron00e
* Language(s) (NLP): Python (PyTorch, transformers, peft)
* License: apache-2.0
* Finetuned from model: openlm-research/open\_llama\_3b
### Model Sources [optional]
* Repository: URL
### Training Data
URL
Evaluation
----------
Evaluation of the model was carried out using the EleutherAI lm-evaluation-harness library, more precisely the pinned revision linked in the original card
#### Testing Data
hellaswag testing dataset
#### Metrics
Accuracy
### Results and Model Examination
Model Card Authors and Contact
------------------------------
Andron00e
| [
"### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b",
"### Model Sources [optional]\n\n\n* Repository: URL",
"### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely",
"#### Testing Data\n\n\nhellaswag testing dataset",
"#### Metrics\n\n\nAccuracy",
"### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b",
"### Model Sources [optional]\n\n\n* Repository: URL",
"### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely",
"#### Testing Data\n\n\nhellaswag testing dataset",
"#### Metrics\n\n\nAccuracy",
"### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] | [
38,
64,
13,
33,
13,
7,
47
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b### Model Sources [optional]\n\n\n* Repository: URL### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely#### Testing Data\n\n\nhellaswag testing dataset#### Metrics\n\n\nAccuracy### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
 To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "reinforce-model001", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "9.30 +/- 12.13", "name": "mean_reward", "verified": false}]}]}]} | pietroorlandi/reinforce-model001 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-05-02T09:34:31+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
37,
56
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
null | transformers |
# Uploaded model
- **Developed by:** HadjYahia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
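A hedged loading sketch for this adapter with PEFT (assumes the repo stores a LoRA adapter for the 4-bit base listed above):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model named in the adapter config, then applies this LoRA on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "HadjYahia/lora_llama3",
    load_in_4bit=True,  # match the 4-bit base the adapter was trained against
)
tokenizer = AutoTokenizer.from_pretrained("HadjYahia/lora_llama3")
```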
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | HadjYahia/lora_llama3 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:35:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: HadjYahia
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: HadjYahia\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: HadjYahia\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
64,
81
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: HadjYahia\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** ntvcie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
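For completeness, a sketch using Unsloth's own loader (parameter values such as the sequence length are assumptions matching the 4k base model):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ntvcie/Phi3MiniVinhntV02",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation path
```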
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | ntvcie/Phi3MiniVinhntV02 | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:35:20+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ntvcie
- License: apache-2.0
- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
81,
84
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Access_to_Communications
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5158
- Accuracy: 0.5714
- F1: 0.5524
- Auc: 0.5833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|
| No log | 1.0 | 15 | 0.6945 | 0.5714 | 0.5524 | 0.5 |
| No log | 2.0 | 30 | 0.7051 | 0.2857 | 0.1905 | 0.5833 |
| No log | 3.0 | 45 | 0.6958 | 0.5714 | 0.5143 | 0.5833 |
| No log | 4.0 | 60 | 0.6721 | 0.5714 | 0.5143 | 0.6667 |
| No log | 5.0 | 75 | 0.6649 | 0.5714 | 0.5143 | 0.5833 |
| No log | 6.0 | 90 | 0.6833 | 0.4286 | 0.4048 | 0.5833 |
| No log | 7.0 | 105 | 0.7160 | 0.5714 | 0.5714 | 0.5833 |
| No log | 8.0 | 120 | 0.8591 | 0.5714 | 0.5524 | 0.5833 |
| No log | 9.0 | 135 | 0.9619 | 0.4286 | 0.4286 | 0.5833 |
| No log | 10.0 | 150 | 1.2991 | 0.5714 | 0.5524 | 0.5833 |
| No log | 11.0 | 165 | 1.1564 | 0.4286 | 0.4286 | 0.5 |
| No log | 12.0 | 180 | 1.2430 | 0.5714 | 0.5524 | 0.5833 |
| No log | 13.0 | 195 | 1.4338 | 0.5714 | 0.5524 | 0.5833 |
| No log | 14.0 | 210 | 1.5501 | 0.5714 | 0.5524 | 0.5833 |
| No log | 15.0 | 225 | 1.5269 | 0.5714 | 0.5524 | 0.5833 |
| No log | 16.0 | 240 | 1.5305 | 0.5714 | 0.5524 | 0.5833 |
| No log | 17.0 | 255 | 1.5196 | 0.5714 | 0.5524 | 0.5833 |
| No log | 18.0 | 270 | 1.5175 | 0.5714 | 0.5524 | 0.5833 |
| No log | 19.0 | 285 | 1.5144 | 0.5714 | 0.5524 | 0.5833 |
| No log | 20.0 | 300 | 1.5158 | 0.5714 | 0.5524 | 0.5833 |
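Given the validation scores above, a hedged inference sketch (the label names this checkpoint emits are not documented here):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="cc5954772/Access_to_Communications")
print(clf("Residents have free access to a phone and weekly video calls."))
```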
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "bert-base-uncased", "model-index": [{"name": "Access_to_Communications", "results": []}]} | cc5954772/Access_to_Communications | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:37:26+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Access\_to\_Communications
==========================
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5158
* Accuracy: 0.5714
* F1: 0.5524
* Auc: 0.5833
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.0.1+cu117
* Datasets 2.18.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] | [
55,
101,
5,
47
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
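These hyperparameters map onto the following transformers configuration (a sketch; the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="tmp_trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```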
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "facebook/bart-large-cnn", "model-index": [{"name": "tmp_trainer", "results": []}]} | aritrosakar/tmp_trainer | null | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:38:35+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# tmp_trainer
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
54,
27,
7,
9,
9,
4,
95,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# tmp_trainer\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers | Model Runs: [run0](https://wandb.ai/eleutherai/conditional-training/runs/z0o5ehs0), [run1](https://wandb.ai/eleutherai/conditional-training/runs/fy58oeve), [run2](https://wandb.ai/eleutherai/conditional-training/runs/0edfpkai) | {} | usvsnsp/pythia-410m-conditional-training-0.1 | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:39:17+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Model Runs: run0, run1, run2 | [] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
37
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
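The card leaves this section blank; as a placeholder, a minimal sketch assuming this is a standard Mistral causal-LM checkpoint (repo id taken from this page's metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Mervyn999/mistral-7b-distilabel-mini-DPO"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```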
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mervyn999/mistral-7b-distilabel-mini-DPO | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | null | 2024-05-02T09:40:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #has_space
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #has_space \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
48,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #has_space \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal, runnable sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual `.zip` artifact):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub; the filename is assumed.
checkpoint = load_from_hub(
    repo_id="Srikanth0108/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.63 +/- 18.99", "name": "mean_reward", "verified": false}]}]}]} | Srikanth0108/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-02T09:41:34+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
31,
35,
17
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 30
- mixed_precision_training: Native AMP
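For reference, a minimal sketch of how these values map onto `Seq2SeqTrainingArguments`; the `output_dir` is an assumption, and the Adam betas and epsilon listed above are the optimizer defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments mirroring the hyperparameter list above;
# output_dir is a placeholder, everything else matches the listed values.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hi",   # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=30,
    fp16=True,                         # "Native AMP" mixed precision
)
```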
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"language": ["hi"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Hi - Sanchit Gandhi", "results": []}]} | Ojro/whisper-tiny-mn-semi-supervised | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:42:54+00:00 | [] | [
"hi"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
|
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# Whisper Small Hi - Sanchit Gandhi\r\n\r\nThis model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.",
"## Model description\r\n\r\nMore information needed",
"## Intended uses & limitations\r\n\r\nMore information needed",
"## Training and evaluation data\r\n\r\nMore information needed",
"## Training procedure",
"### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 1e-05\r\n- train_batch_size: 64\r\n- eval_batch_size: 64\r\n- seed: 42\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- lr_scheduler_warmup_steps: 500\r\n- training_steps: 30\r\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\r\n\r\n- Transformers 4.38.2\r\n- Pytorch 2.1.1+cu121\r\n- Datasets 2.18.0\r\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Whisper Small Hi - Sanchit Gandhi\r\n\r\nThis model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.",
"## Model description\r\n\r\nMore information needed",
"## Intended uses & limitations\r\n\r\nMore information needed",
"## Training and evaluation data\r\n\r\nMore information needed",
"## Training procedure",
"### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 1e-05\r\n- train_batch_size: 64\r\n- eval_batch_size: 64\r\n- seed: 42\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- lr_scheduler_warmup_steps: 500\r\n- training_steps: 30\r\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\r\n\r\n- Transformers 4.38.2\r\n- Pytorch 2.1.1+cu121\r\n- Datasets 2.18.0\r\n- Tokenizers 0.15.2"
] | [
71,
34,
7,
9,
9,
4,
113,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n# Whisper Small Hi - Sanchit Gandhi\r\n\r\nThis model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.## Model description\r\n\r\nMore information needed## Intended uses & limitations\r\n\r\nMore information needed## Training and evaluation data\r\n\r\nMore information needed## Training procedure### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 1e-05\r\n- train_batch_size: 64\r\n- eval_batch_size: 64\r\n- seed: 42\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- lr_scheduler_warmup_steps: 500\r\n- training_steps: 30\r\n- mixed_precision_training: Native AMP### Training results### Framework versions\r\n\r\n- Transformers 4.38.2\r\n- Pytorch 2.1.1+cu121\r\n- Datasets 2.18.0\r\n- Tokenizers 0.15.2"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Minimal sketch: the checkpoint filename below is an assumption
checkpoint = load_from_hub("raydium/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "282.08 +/- 21.98", "name": "mean_reward", "verified": false}]}]}]} | raydium/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-02T09:44:13+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
31,
35,
17
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | transformers |
# Uploaded model
- **Developed by:** ntvcie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
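A minimal inference sketch, assuming the weights load directly through Unsloth; the 4-bit loading flag and the 4k sequence length (taken from the base model's name) are assumptions:

```python
from unsloth import FastLanguageModel

# Sketch: load this repo's weights for inference
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ntvcie/Phi3MiniVinhntV03",
    max_seq_length=4096,   # assumption, mirrors the 4k base model
    load_in_4bit=True,     # assumption
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```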
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | ntvcie/Phi3MiniVinhntV03 | null | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:44:28+00:00 | [] | [
"en"
] | TAGS
#transformers #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ntvcie
- License: apache-2.0
- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
64,
84
] | [
"TAGS\n#transformers #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
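A minimal loading sketch, assuming the standard `transformers` auto-classes work for this checkpoint; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fmshahata/phi-moe-8k"  # this repo's id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, world!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```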
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | fmshahata/phi-moe-8k | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:44:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
43,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** armanbabayan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
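A minimal loading sketch, assuming the merged 16-bit weights load directly with the standard `transformers` auto-classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "armanbabayan/Llama2_Immigration_Chat_16bit"  # this repo's id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # needs accelerate
```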
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "dpo"], "base_model": "unsloth/llama-2-7b-chat-bnb-4bit"} | armanbabayan/Llama2_Immigration_Chat_16bit | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:unsloth/llama-2-7b-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:46:07+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: armanbabayan
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
82,
84
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: armanbabayan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-PixelCopter", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "19.20 +/- 15.64", "name": "mean_reward", "verified": false}]}]}]} | elisamammi/Reinforce-PixelCopter | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-05-02T09:46:07+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
37,
56
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
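A minimal chat sketch, assuming the tokenizer ships a chat template; the message content is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "chlee10/T3Q-LLM3-Llama3-sft1.0-dpo1.0"  # this repo's id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # needs accelerate

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(inputs.to(model.device), max_new_tokens=64)[0]))
```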
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"license": "apache-2.0", "library_name": "transformers"} | chlee10/T3Q-LLM3-Llama3-sft1.0-dpo1.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:47:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
55,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
YetAnother_Open-Llama-3B-LoRA-OpenOrca - bnb 8bits
- Model creator: https://huggingface.co/Andron00e/
- Original model: https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca/
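A minimal loading sketch, assuming the checkpoint is stored in bitsandbytes 8-bit format and therefore needs the `bitsandbytes` package and a CUDA device:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # needs accelerate
```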
Original model description:
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: question-answering
metrics:
- accuracy
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Andron00e
- **Language(s) (NLP):** Python (PyTorch, transformers, peft)
- **License:** apache-2.0
- **Finetuned from model:** openlm-research/open_llama_3b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Andron00e/Fine-Tuning-project
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Open-Orca/OpenOrca
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Evaluation of the model was carried out using the EleutherAI lm-evaluation-harness; see [task versioning](https://github.com/EleutherAI/lm-evaluation-harness/tree/e47e01beea79cfe87421e2dac49e64d499c240b4#task-versioning) for the exact harness revision.
#### Testing Data
<!-- This should link to a Data Card if possible. -->
hellaswag testing dataset
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Accuracy
### Results and Model Examination
| Task      | Version | Metric   |  Value | Stderr  |
|-----------|--------:|----------|-------:|---------|
| hellaswag |       0 | acc      | 0.4899 | ±0.0050 |
|           |         | acc_norm | 0.6506 | ±0.0048 |
## Citations
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = may,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}
```
## Model Card Authors and Contact
[Andron00e](https://github.com/Andron00e)
| {} | RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-02T09:51:16+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
YetAnother\_Open-Llama-3B-LoRA-OpenOrca - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
license: apache-2.0
datasets:
* Open-Orca/OpenOrca
language:
* en
library\_name: transformers
pipeline\_tag: question-answering
metrics:
* accuracy
---
Model Details
-------------
### Model Description
* Developed by: Andron00e
* Language(s) (NLP): Python (PyTorch, transformers, peft)
* License: apache-2.0
* Finetuned from model: openlm-research/open\_llama\_3b
### Model Sources [optional]
* Repository: URL
### Training Data
URL
Evaluation
----------
Evaluation of the model was carried out using EulerAI library, more precisely
#### Testing Data
hellaswag testing dataset
#### Metrics
Accuracy
### Results and Model Examination
Model Card Authors and Contact
------------------------------
Andron00e
| [
"### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b",
"### Model Sources [optional]\n\n\n* Repository: URL",
"### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely",
"#### Testing Data\n\n\nhellaswag testing dataset",
"#### Metrics\n\n\nAccuracy",
"### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b",
"### Model Sources [optional]\n\n\n* Repository: URL",
"### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely",
"#### Testing Data\n\n\nhellaswag testing dataset",
"#### Metrics\n\n\nAccuracy",
"### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] | [
38,
64,
13,
33,
13,
7,
47
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b### Model Sources [optional]\n\n\n* Repository: URL### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely#### Testing Data\n\n\nhellaswag testing dataset#### Metrics\n\n\nAccuracy### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
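A minimal sketch using the high-level `pipeline` API; the prompt is illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="tomaszki/llama-13")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```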
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tomaszki/llama-13 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:51:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
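Pending the official snippet, a minimal sketch assuming the standard 🤗 `transformers` chat-template API and the repository id of this entry (`mccartni-aws/mccartni-orpo-llama-3-8B`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mccartni-aws/mccartni-orpo-llama-3-8B"  # repository id from this entry
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain ORPO fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
# decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```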
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mccartni-aws/mccartni-orpo-llama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:52:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetRoberta_5epochs
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3387
- Accuracy: 0.9452
- F1: 0.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
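Pending fuller documentation, a minimal inference sketch; the repository id comes from this entry, and the label names in the output depend on the (unstated) training label mapping:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="AndreiUrsu/TweetRoberta_5epochs")
print(clf("What a fantastic match today!"))
# e.g. [{'label': 'LABEL_2', 'score': 0.98}] — the label-to-sentiment mapping
# follows the training config and is not documented in this card
```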
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.2755 | 1.0 | 2500 | 0.4153 | 0.9176 | 0.9129 |
| 0.2107 | 2.0 | 5000 | 0.3076 | 0.9312 | 0.9341 |
| 0.1421 | 3.0 | 7500 | 0.2812 | 0.9428 | 0.9424 |
| 0.081 | 4.0 | 10000 | 0.3304 | 0.9464 | 0.9466 |
| 0.0601 | 5.0 | 12500 | 0.3387 | 0.9452 | 0.9454 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "cardiffnlp/twitter-roberta-base-sentiment", "model-index": [{"name": "TweetRoberta_5epochs", "results": []}]} | AndreiUrsu/TweetRoberta_5epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:53:53+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-cardiffnlp/twitter-roberta-base-sentiment #autotrain_compatible #endpoints_compatible #region-us
| TweetRoberta\_5epochs
=====================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3387
* Accuracy: 0.9452
* F1: 0.9454
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-cardiffnlp/twitter-roberta-base-sentiment #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
52,
101,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-cardiffnlp/twitter-roberta-base-sentiment #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-bart-finetuned
This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4528
- Rouge1: 40.6581
- Rouge2: 14.5046
- Rougel: 21.8144
- Rougelsum: 36.4842
- Gen Len: 141.4392
## Model description
More information needed
## Intended uses & limitations
More information needed
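Pending fuller documentation, a minimal summarization sketch assuming the repository id of this entry; the sample text is illustrative only:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="roofdancer/thesis-bart-finetuned")
text = (
    "Transformer-based summarizers are trained to compress long documents "
    "into short abstracts while preserving the key facts and overall structure."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```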
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.5368 | 1.0 | 2811 | 2.4528 | 40.6581 | 14.5046 | 21.8144 | 36.4842 | 141.4392 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "sshleifer/distilbart-cnn-6-6", "model-index": [{"name": "thesis-bart-finetuned", "results": []}]} | roofdancer/thesis-bart-finetuned | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:sshleifer/distilbart-cnn-6-6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:53:54+00:00 | [] | [] | TAGS
#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-sshleifer/distilbart-cnn-6-6 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| thesis-bart-finetuned
=====================
This model is a fine-tuned version of sshleifer/distilbart-cnn-6-6 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4528
* Rouge1: 40.6581
* Rouge2: 14.5046
* Rougel: 21.8144
* Rougelsum: 36.4842
* Gen Len: 141.4392
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-sshleifer/distilbart-cnn-6-6 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
63,
112,
5,
40
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-sshleifer/distilbart-cnn-6-6 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4843
- Rouge1: 79.689
- Rouge2: 52.3784
- Rougel: 79.7235
- Rougelsum: 79.6844
- Gen Len: 9.736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- label_smoothing_factor: 0.1
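The list above maps one-to-one onto 🤗 `Seq2SeqTrainingArguments` (the Adam betas and epsilon shown are the library defaults); a minimal sketch with a placeholder `output_dir`:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="results",             # placeholder; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    label_smoothing_factor=0.1,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so not set here
)
```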
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-large", "model-index": [{"name": "results", "results": []}]} | mlnha/results | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:54:19+00:00 | [] | [] | TAGS
#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# results
This model is a fine-tuned version of facebook/bart-large on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4843
- Rouge1: 79.689
- Rouge2: 52.3784
- Rougel: 79.7235
- Rougelsum: 79.6844
- Gen Len: 9.736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# results\n\nThis model is a fine-tuned version of facebook/bart-large on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4843\n- Rouge1: 79.689\n- Rouge2: 52.3784\n- Rougel: 79.7235\n- Rougelsum: 79.6844\n- Gen Len: 9.736",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n- label_smoothing_factor: 0.1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# results\n\nThis model is a fine-tuned version of facebook/bart-large on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4843\n- Rouge1: 79.689\n- Rouge2: 52.3784\n- Rougel: 79.7235\n- Rougelsum: 79.6844\n- Gen Len: 9.736",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n- label_smoothing_factor: 0.1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
53,
82,
7,
9,
9,
4,
116,
5,
44
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# results\n\nThis model is a fine-tuned version of facebook/bart-large on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4843\n- Rouge1: 79.689\n- Rouge2: 52.3784\n- Rougel: 79.7235\n- Rougelsum: 79.6844\n- Gen Len: 9.736## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n- label_smoothing_factor: 0.1### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** AvizvaSolutions
- **License:** apache-2.0
- **Finetuned from model :** openchat/openchat-3.5-1210
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "openchat/openchat-3.5-1210"} | AvizvaSolutions/sftVersion-3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:openchat/openchat-3.5-1210",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:54:41+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-openchat/openchat-3.5-1210 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: AvizvaSolutions
- License: apache-2.0
- Finetuned from model : openchat/openchat-3.5-1210
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: AvizvaSolutions\n- License: apache-2.0\n- Finetuned from model : openchat/openchat-3.5-1210\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-openchat/openchat-3.5-1210 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: AvizvaSolutions\n- License: apache-2.0\n- Finetuned from model : openchat/openchat-3.5-1210\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
73,
80
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-openchat/openchat-3.5-1210 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: AvizvaSolutions\n- License: apache-2.0\n- Finetuned from model : openchat/openchat-3.5-1210\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_synDB_aug_ow
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1516
## Model description
More information needed
## Intended uses & limitations
More information needed
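Pending fuller documentation, a minimal inference sketch for this Donut checkpoint; the input file name and the task prompt token are assumptions, since the card does not state them:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "Donut01/donut_synDB_aug_ow"  # repository id from this entry
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # hypothetical input scan
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # assumption: the real task start token is not documented here
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```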
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 26
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8172 | 1.0 | 26 | 0.6897 |
| 0.5695 | 1.5 | 39 | 0.2902 |
| 0.2726 | 2.0 | 52 | 0.2045 |
| 0.13 | 2.5 | 65 | 0.1414 |
| 0.1012 | 3.0 | 78 | 0.1601 |
| 0.0669 | 3.5 | 91 | 0.1301 |
| 0.061 | 4.0 | 104 | 0.0996 |
| 0.0419 | 4.5 | 117 | 0.1249 |
| 0.0436 | 5.0 | 130 | 0.1206 |
| 0.037 | 5.5 | 143 | 0.1246 |
| 0.0323 | 6.0 | 156 | 0.1200 |
| 0.0266 | 6.5 | 169 | 0.1326 |
| 0.0254 | 7.0 | 182 | 0.1516 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut_synDB_aug_ow", "results": []}]} | Donut01/donut_synDB_aug_ow | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:54:53+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
| donut\_synDB\_aug\_ow
=====================
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1516
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 5
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 6
* total\_train\_batch\_size: 30
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 26
* num\_epochs: 12
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 6\n* total\\_train\\_batch\\_size: 30\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 26\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 6\n* total\\_train\\_batch\\_size: 30\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 26\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
62,
151,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 6\n* total\\_train\\_batch\\_size: 30\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 26\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** jurieyel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
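Since this repository ships a GGUF quantization (q4_K_M), a minimal usage sketch with `llama-cpp-python` — the local file name and the prompt format are assumptions:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# hypothetical local file name; download the q4_K_M GGUF file from this repo first
llm = Llama(model_path="text2sql-tinyllama-q4_K_M.gguf", n_ctx=2048)

# the text-to-SQL prompt layout below is a guess; match whatever format was used in training
out = llm("### Question: list all customers from France\n### SQL:", max_tokens=64)
print(out["choices"][0]["text"])
```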
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/tinyllama-bnb-4bit"} | jurieyel/text2sql-tinyllama-4bit-gguf-q4_K_m | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:57:25+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: jurieyel
- License: apache-2.0
- Finetuned from model : unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
57,
77
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
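Pending the official snippet, a minimal sketch assuming the 🤗 `transformers` ASR pipeline and the repository id of this entry; the audio file name is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Mihaj/whisper-small-karelian-CodeSwitching",  # repository id from this entry
)
print(asr("sample.wav")["text"])  # hypothetical audio file; decoding requires ffmpeg
```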
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mihaj/whisper-small-karelian-CodeSwitching | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:57:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
37,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
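While the card leaves intended usage open, the base model is a summarization checkpoint, so a plausible loading sketch looks like this. The Hub id comes from this card's metadata, the tokenizer is assumed to have been pushed alongside the weights, and the input text and generation settings are placeholders rather than values from the original training run:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative only: loads the fine-tuned checkpoint by its Hub id.
tokenizer = AutoTokenizer.from_pretrained("sudhanshusaxena/results")
model = AutoModelForSeq2SeqLM.from_pretrained("sudhanshusaxena/results")

article = "Replace this placeholder with the article you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```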
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (reconstructed as code after the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
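A reconstruction of that configuration as Transformers `TrainingArguments`; the `output_dir` is a guess matching the repo name, and the Adam settings are spelled out even though they are the Trainer defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",            # assumed; matches the repo name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,                  # Trainer defaults, as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```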
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "facebook/bart-large-cnn", "model-index": [{"name": "results", "results": []}]} | sudhanshusaxena/results | null | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:00:25+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# results
This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# results\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# results\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
54,
24,
7,
9,
9,
4,
93,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# results\n\nThis model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-bass-classifier9
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the augmented_bass_sounds dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0042
- Accuracy: 0.9994
## Model description
More information needed
## Intended uses & limitations
More information needed
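Usage is otherwise unspecified, but since this is an audio-classification checkpoint, a minimal inference sketch would be the following; the Hub id is taken from this card's metadata and `bass_clip.wav` is a placeholder file:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="TheDuyx/distilhubert-bass-classifier9",
)
# Returns a ranked list of {"label": ..., "score": ...} dicts.
print(classifier("bass_clip.wav"))
```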
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0525 | 1.0 | 240 | 0.1287 | 0.9797 |
| 0.0 | 2.0 | 480 | 0.0163 | 0.9982 |
| 0.0001 | 3.0 | 720 | 0.0042 | 0.9994 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["TheDuyx/augmented_bass_sounds"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-bass-classifier9", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "augmented_bass_sounds", "type": "TheDuyx/augmented_bass_sounds"}, "metrics": [{"type": "accuracy", "value": 0.9994121105232217, "name": "Accuracy"}]}]}]} | TheDuyx/distilhubert-bass-classifier9 | null | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:TheDuyx/augmented_bass_sounds",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:00:33+00:00 | [] | [] | TAGS
#transformers #safetensors #hubert #audio-classification #generated_from_trainer #dataset-TheDuyx/augmented_bass_sounds #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us
| distilhubert-bass-classifier9
=============================
This model is a fine-tuned version of ntu-spml/distilhubert on the augmented\_bass\_sounds dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0042
* Accuracy: 0.9994
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.2
* Pytorch 2.2.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #hubert #audio-classification #generated_from_trainer #dataset-TheDuyx/augmented_bass_sounds #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
69,
130,
5,
40
] | [
"TAGS\n#transformers #safetensors #hubert #audio-classification #generated_from_trainer #dataset-TheDuyx/augmented_bass_sounds #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
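No starter code is provided, so the following is a generic causal-LM sketch rather than the authors' recipe; the Hub id comes from this card's metadata, and the prompt and sampling settings are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dellaanima/gpt_neo_1_3b_wikitext2")
model = AutoModelForCausalLM.from_pretrained("dellaanima/gpt_neo_1_3b_wikitext2")

inputs = tokenizer("Wikipedia articles typically begin with", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```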
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | dellaanima/gpt_neo_1_3b_wikitext2 | null | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:02:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt_neo #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt_neo #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
40,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #gpt_neo #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
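No starter code is provided; a generic speech-recognition sketch (not the authors' recipe) follows — the Hub id comes from this card's metadata and `sample.wav` is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Ojro/whisper_tiny_semi_supervised",
)
print(asr("sample.wav")["text"])
```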
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ojro/whisper_tiny_semi_supervised | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:02:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
34,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
sentence-similarity | sentence-transformers |
# SentenceTransformer based on sentence-transformers/stsb-distilbert-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/stsb-distilbert-base](https://huggingface.co/sentence-transformers/stsb-distilbert-base) on the [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/stsb-distilbert-base](https://huggingface.co/sentence-transformers/stsb-distilbert-base) <!-- at revision 82ad392c08f81be9be9bf065339670b23f2e1493 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")
# Run inference
sentences = [
'Is stretching bad?',
'Is stretching good for you?',
'If i=0; what will i=i++ do to i?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `quora-duplicates`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.86 |
| cosine_accuracy_threshold | 0.8104 |
| cosine_f1 | 0.8251 |
| cosine_f1_threshold | 0.7248 |
| cosine_precision | 0.7347 |
| cosine_recall | 0.9407 |
| cosine_ap | 0.8872 |
| dot_accuracy | 0.828 |
| dot_accuracy_threshold | 157.3549 |
| dot_f1 | 0.7899 |
| dot_f1_threshold | 145.7113 |
| dot_precision | 0.7155 |
| dot_recall | 0.8814 |
| dot_ap | 0.8369 |
| manhattan_accuracy | 0.868 |
| manhattan_accuracy_threshold | 208.0035 |
| manhattan_f1 | 0.8308 |
| manhattan_f1_threshold | 208.0035 |
| manhattan_precision | 0.7922 |
| manhattan_recall | 0.8733 |
| manhattan_ap | 0.8868 |
| euclidean_accuracy | 0.867 |
| euclidean_accuracy_threshold | 9.2694 |
| euclidean_f1 | 0.8301 |
| euclidean_f1_threshold | 9.5257 |
| euclidean_precision | 0.7888 |
| euclidean_recall | 0.876 |
| euclidean_ap | 0.8884 |
| max_accuracy | 0.868 |
| max_accuracy_threshold | 208.0035 |
| max_f1 | 0.8308 |
| max_f1_threshold | 208.0035 |
| max_precision | 0.7922 |
| max_recall | 0.9407 |
| **max_ap** | **0.8884** |
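The table above can be regenerated with the evaluator linked above. A sketch with toy pairs — the real run used the 1,000-pair dev split, and whether the call returns a single float or a dict of metrics depends on the library version:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")

# Toy pairs for illustration; label 1 marks duplicates.
sentences1 = ["How do I learn to code?", "What is the capital of France?"]
sentences2 = ["What is the best way to learn programming?", "Who wrote Hamlet?"]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(
    sentences1, sentences2, labels, name="quora-duplicates"
)
print(evaluator(model))
```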
#### Paraphrase Mining
* Dataset: `quora-duplicates-dev`
* Evaluated with [<code>ParaphraseMiningEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.ParaphraseMiningEvaluator)
| Metric | Value |
|:----------------------|:-----------|
| **average_precision** | **0.5344** |
| f1 | 0.5448 |
| precision | 0.5311 |
| recall | 0.5592 |
| threshold | 0.8626 |
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.928 |
| cosine_accuracy@3 | 0.9712 |
| cosine_accuracy@5 | 0.9782 |
| cosine_accuracy@10 | 0.9874 |
| cosine_precision@1 | 0.928 |
| cosine_precision@3 | 0.4151 |
| cosine_precision@5 | 0.2666 |
| cosine_precision@10 | 0.1417 |
| cosine_recall@1 | 0.7994 |
| cosine_recall@3 | 0.9342 |
| cosine_recall@5 | 0.9561 |
| cosine_recall@10 | 0.9766 |
| cosine_ndcg@10 | 0.9516 |
| cosine_mrr@10 | 0.9509 |
| **cosine_map@100** | **0.939** |
| dot_accuracy@1 | 0.8926 |
| dot_accuracy@3 | 0.9518 |
| dot_accuracy@5 | 0.9658 |
| dot_accuracy@10 | 0.9768 |
| dot_precision@1 | 0.8926 |
| dot_precision@3 | 0.4027 |
| dot_precision@5 | 0.2608 |
| dot_precision@10 | 0.1388 |
| dot_recall@1 | 0.768 |
| dot_recall@3 | 0.9106 |
| dot_recall@5 | 0.9402 |
| dot_recall@10 | 0.9623 |
| dot_ndcg@10 | 0.9264 |
| dot_mrr@10 | 0.9243 |
| dot_map@100 | 0.9094 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/quora-duplicates
* Dataset: [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 100,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.5 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.46 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>0: ~64.10%</li><li>1: ~35.90%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:---------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------|
| <code>What are the best ecommerce blogs to do guest posts on about SEO to gain new clients?</code> | <code>Interested in being a guest blogger for an ecommerce marketing blog?</code> | <code>0</code> |
| <code>How do I learn Informatica online training?</code> | <code>What is Informatica online training?</code> | <code>0</code> |
| <code>What effects does marijuana use have on the flu?</code> | <code>What effects does Marijuana use have on the common cold?</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/losses.html#onlinecontrastiveloss) (see the training sketch below)
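A minimal training sketch matching the dataset and loss listed above; the `"pair-class"` subset name is an assumption about how the duplicate-pair columns are exposed on the Hub, and the hyperparameters are simplified relative to the full list further down:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/stsb-distilbert-base")
train_dataset = load_dataset(
    "sentence-transformers/quora-duplicates", "pair-class", split="train"
)
# Pulls hard positive/negative pairs within each batch, as the loss name implies.
loss = losses.OnlineContrastiveLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```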
### Evaluation Dataset
#### sentence-transformers/quora-duplicates
* Dataset: [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 1,000 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.82 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.91 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>0: ~62.90%</li><li>1: ~37.10%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------|:---------------------------------------------------|:---------------|
| <code>How should I prepare for JEE Mains 2017?</code> | <code>How do I prepare for the JEE 2016?</code> | <code>0</code> |
| <code>What is the gate exam?</code> | <code>What is the GATE exam in engineering?</code> | <code>0</code> |
| <code>Where do IRS officers get posted?</code> | <code>Does IRS Officers get posted abroad?</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/losses.html#onlinecontrastiveloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: False
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: None
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | cosine_map@100 | quora-duplicates-dev_average_precision | quora-duplicates_max_ap |
|:------:|:----:|:-------------:|:------:|:--------------:|:--------------------------------------:|:-----------------------:|
| 0 | 0 | - | - | 0.9235 | 0.4200 | 0.7276 |
| 0.0640 | 100 | 2.5123 | - | - | - | - |
| 0.1280 | 200 | 2.0534 | - | - | - | - |
| 0.1599 | 250 | - | 1.7914 | 0.9127 | 0.4082 | 0.8301 |
| 0.1919 | 300 | 1.9505 | - | - | - | - |
| 0.2559 | 400 | 1.9836 | - | - | - | - |
| 0.3199 | 500 | 1.8462 | 1.5923 | 0.9190 | 0.4445 | 0.8688 |
| 0.3839 | 600 | 1.7734 | - | - | - | - |
| 0.4479 | 700 | 1.7918 | - | - | - | - |
| 0.4798 | 750 | - | 1.5461 | 0.9291 | 0.4943 | 0.8707 |
| 0.5118 | 800 | 1.6157 | - | - | - | - |
| 0.5758 | 900 | 1.7244 | - | - | - | - |
| 0.6398 | 1000 | 1.7322 | 1.5294 | 0.9309 | 0.5048 | 0.8808 |
| 0.7038 | 1100 | 1.6825 | - | - | - | - |
| 0.7678 | 1200 | 1.6823 | - | - | - | - |
| 0.7997 | 1250 | - | 1.4812 | 0.9351 | 0.5126 | 0.8865 |
| 0.8317 | 1300 | 1.5707 | - | - | - | - |
| 0.8957 | 1400 | 1.6145 | - | - | - | - |
| 0.9597 | 1500 | 1.5795 | 1.4705 | 0.9390 | 0.5344 | 0.8884 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.040 kWh
- **Carbon Emitted**: 0.016 kg of CO2
- **Hours Used**: 0.202 hours
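For reference, figures like these are typically collected by wrapping the training call in CodeCarbon's tracker; a sketch, not the exact script used here:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
try:
    trainer.train()  # assumes a trainer configured as in the sketch above
finally:
    emissions_kg = tracker.stop()  # emissions in kg CO2-eq
print(f"{emissions_kg:.3f} kg CO2-eq")
```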
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.0.0.dev0
- Transformers: 4.41.0.dev0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.26.1
- Datasets: 2.18.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"language": ["en"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:OnlineContrastiveLoss"], "metrics": ["cosine_accuracy", "cosine_accuracy_threshold", "cosine_f1", "cosine_f1_threshold", "cosine_precision", "cosine_recall", "cosine_ap", "dot_accuracy", "dot_accuracy_threshold", "dot_f1", "dot_f1_threshold", "dot_precision", "dot_recall", "dot_ap", "manhattan_accuracy", "manhattan_accuracy_threshold", "manhattan_f1", "manhattan_f1_threshold", "manhattan_precision", "manhattan_recall", "manhattan_ap", "euclidean_accuracy", "euclidean_accuracy_threshold", "euclidean_f1", "euclidean_f1_threshold", "euclidean_precision", "euclidean_recall", "euclidean_ap", "max_accuracy", "max_accuracy_threshold", "max_f1", "max_f1_threshold", "max_precision", "max_recall", "max_ap", "average_precision", "f1", "precision", "recall", "threshold", "cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100", "dot_accuracy@1", "dot_accuracy@3", "dot_accuracy@5", "dot_accuracy@10", "dot_precision@1", "dot_precision@3", "dot_precision@5", "dot_precision@10", "dot_recall@1", "dot_recall@3", "dot_recall@5", "dot_recall@10", "dot_ndcg@10", "dot_mrr@10", "dot_map@100"], "base_model": "sentence-transformers/stsb-distilbert-base", "widget": [{"source_sentence": "Why did he go MIA?", "sentences": ["Why did Yahoo kill Konfabulator?", "Why do people get angry with me?", "What are the best waterproof guns?"]}, {"source_sentence": "Who is a soulmate?", "sentences": ["Is she the \u201cOne\u201d?", "Who is Pakistan's biggest enemy?", "Will smoking weed help with my anxiety?"]}, {"source_sentence": "Is this poem good?", "sentences": ["Is my poem any good?", "How can I become a good speaker?", "What is feminism?"]}, {"source_sentence": "Who invented Yoga?", "sentences": ["How was yoga invented?", "Who owns this number 3152150252?", "What is Dynamics CRM Services?"]}, {"source_sentence": "Is stretching bad?", "sentences": ["Is stretching good for you?", "If i=0; what will i=i++ do to i?", "What is the Output of this C program ?"]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 15.707175691967695, "energy_consumed": 0.040409299905757354, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.202, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer based on sentence-transformers/stsb-distilbert-base", "results": [{"task": {"type": "binary-classification", "name": "Binary Classification"}, "dataset": {"name": "quora duplicates", "type": "quora-duplicates"}, "metrics": [{"type": "cosine_accuracy", "value": 0.86, "name": "Cosine Accuracy"}, {"type": "cosine_accuracy_threshold", "value": 0.8104104995727539, "name": "Cosine Accuracy Threshold"}, {"type": "cosine_f1", "value": 0.8250591016548463, "name": "Cosine F1"}, {"type": "cosine_f1_threshold", "value": 0.7247534394264221, "name": "Cosine F1 Threshold"}, {"type": "cosine_precision", "value": 0.7347368421052631, "name": "Cosine Precision"}, {"type": "cosine_recall", "value": 0.9407008086253369, "name": "Cosine Recall"}, {"type": "cosine_ap", "value": 0.887247904332921, 
"name": "Cosine Ap"}, {"type": "dot_accuracy", "value": 0.828, "name": "Dot Accuracy"}, {"type": "dot_accuracy_threshold", "value": 157.35491943359375, "name": "Dot Accuracy Threshold"}, {"type": "dot_f1", "value": 0.7898550724637681, "name": "Dot F1"}, {"type": "dot_f1_threshold", "value": 145.7113037109375, "name": "Dot F1 Threshold"}, {"type": "dot_precision", "value": 0.7155361050328227, "name": "Dot Precision"}, {"type": "dot_recall", "value": 0.8814016172506739, "name": "Dot Recall"}, {"type": "dot_ap", "value": 0.8369433397850002, "name": "Dot Ap"}, {"type": "manhattan_accuracy", "value": 0.868, "name": "Manhattan Accuracy"}, {"type": "manhattan_accuracy_threshold", "value": 208.00347900390625, "name": "Manhattan Accuracy Threshold"}, {"type": "manhattan_f1", "value": 0.8307692307692308, "name": "Manhattan F1"}, {"type": "manhattan_f1_threshold", "value": 208.00347900390625, "name": "Manhattan F1 Threshold"}, {"type": "manhattan_precision", "value": 0.7921760391198044, "name": "Manhattan Precision"}, {"type": "manhattan_recall", "value": 0.8733153638814016, "name": "Manhattan Recall"}, {"type": "manhattan_ap", "value": 0.8868217413983182, "name": "Manhattan Ap"}, {"type": "euclidean_accuracy", "value": 0.867, "name": "Euclidean Accuracy"}, {"type": "euclidean_accuracy_threshold", "value": 9.269388198852539, "name": "Euclidean Accuracy Threshold"}, {"type": "euclidean_f1", "value": 0.8301404853128991, "name": "Euclidean F1"}, {"type": "euclidean_f1_threshold", "value": 9.525729179382324, "name": "Euclidean F1 Threshold"}, {"type": "euclidean_precision", "value": 0.7888349514563107, "name": "Euclidean Precision"}, {"type": "euclidean_recall", "value": 0.876010781671159, "name": "Euclidean Recall"}, {"type": "euclidean_ap", "value": 0.8884154240019244, "name": "Euclidean Ap"}, {"type": "max_accuracy", "value": 0.868, "name": "Max Accuracy"}, {"type": "max_accuracy_threshold", "value": 208.00347900390625, "name": "Max Accuracy Threshold"}, {"type": "max_f1", "value": 0.8307692307692308, "name": "Max F1"}, {"type": "max_f1_threshold", "value": 208.00347900390625, "name": "Max F1 Threshold"}, {"type": "max_precision", "value": 0.7921760391198044, "name": "Max Precision"}, {"type": "max_recall", "value": 0.9407008086253369, "name": "Max Recall"}, {"type": "max_ap", "value": 0.8884154240019244, "name": "Max Ap"}]}, {"task": {"type": "paraphrase-mining", "name": "Paraphrase Mining"}, "dataset": {"name": "quora duplicates dev", "type": "quora-duplicates-dev"}, "metrics": [{"type": "average_precision", "value": 0.534436244125929, "name": "Average Precision"}, {"type": "f1", "value": 0.5447997274541295, "name": "F1"}, {"type": "precision", "value": 0.5311002514589362, "name": "Precision"}, {"type": "recall", "value": 0.5592246590398161, "name": "Recall"}, {"type": "threshold", "value": 0.8626040816307068, "name": "Threshold"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.928, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9712, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9782, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9874, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.928, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.4151333333333334, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.26656, "name": 
"Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.14166, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7993523853760618, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9341884771405065, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9560896250710075, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9766088525134997, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9516150309696244, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9509392857142857, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.9390263696194139, "name": "Cosine Map@100"}, {"type": "dot_accuracy@1", "value": 0.8926, "name": "Dot Accuracy@1"}, {"type": "dot_accuracy@3", "value": 0.9518, "name": "Dot Accuracy@3"}, {"type": "dot_accuracy@5", "value": 0.9658, "name": "Dot Accuracy@5"}, {"type": "dot_accuracy@10", "value": 0.9768, "name": "Dot Accuracy@10"}, {"type": "dot_precision@1", "value": 0.8926, "name": "Dot Precision@1"}, {"type": "dot_precision@3", "value": 0.40273333333333333, "name": "Dot Precision@3"}, {"type": "dot_precision@5", "value": 0.26076, "name": "Dot Precision@5"}, {"type": "dot_precision@10", "value": 0.13882, "name": "Dot Precision@10"}, {"type": "dot_recall@1", "value": 0.7679620996617761, "name": "Dot Recall@1"}, {"type": "dot_recall@3", "value": 0.9105756956997251, "name": "Dot Recall@3"}, {"type": "dot_recall@5", "value": 0.9402185219519044, "name": "Dot Recall@5"}, {"type": "dot_recall@10", "value": 0.9623418143294613, "name": "Dot Recall@10"}, {"type": "dot_ndcg@10", "value": 0.9263520741106431, "name": "Dot Ndcg@10"}, {"type": "dot_mrr@10", "value": 0.9243020634920638, "name": "Dot Mrr@10"}, {"type": "dot_map@100", "value": 0.9094019438194247, "name": "Dot Map@100"}]}]}]} | tomaarsen/stsb-distilbert-base-ocl | null | [
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"loss:OnlineContrastiveLoss",
"en",
"arxiv:1908.10084",
"base_model:sentence-transformers/stsb-distilbert-base",
"model-index",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:02:38+00:00 | [
"1908.10084"
] | [
"en"
] | TAGS
#sentence-transformers #safetensors #distilbert #sentence-similarity #feature-extraction #loss-OnlineContrastiveLoss #en #arxiv-1908.10084 #base_model-sentence-transformers/stsb-distilbert-base #model-index #co2_eq_emissions #endpoints_compatible #region-us
| SentenceTransformer based on sentence-transformers/stsb-distilbert-base
=======================================================================
This is a sentence-transformers model finetuned from sentence-transformers/stsb-distilbert-base on the sentence-transformers/quora-duplicates dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
-------------
### Model Description
* Model Type: Sentence Transformer
* Base model: sentence-transformers/stsb-distilbert-base
* Maximum Sequence Length: 128 tokens
* Output Dimensionality: 768 dimensions
* Similarity Function: Cosine Similarity
* Training Dataset:
+ sentence-transformers/quora-duplicates
* Language: en
### Model Sources
* Documentation: Sentence Transformers Documentation
* Repository: Sentence Transformers on GitHub
* Hugging Face: Sentence Transformers on Hugging Face
### Full Model Architecture
Usage
-----
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
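The install command itself was stripped from this dump; the standard command documented by the library is:

```
pip install -U sentence-transformers
```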
Then you can load this model and run inference.
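The original snippet was stripped from this dump; a minimal sketch is shown below, using example sentences taken from the widget entries in the metadata:

```python
from sentence_transformers import SentenceTransformer, util

# Load the model from the Hugging Face Hub
model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")

# Encode a few sentences into 768-dimensional embeddings
sentences = [
    "Is stretching bad?",
    "Is stretching good for you?",
    "What is feminism?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)

# Cosine similarities between all sentence pairs
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```

Duplicate questions should score noticeably higher than unrelated pairs.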
Evaluation
----------
### Metrics
#### Binary Classification
* Dataset: 'quora-duplicates'
* Evaluated with `BinaryClassificationEvaluator`
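As a sketch, this evaluation can be reproduced roughly as follows; the sentence pairs and labels here are placeholders, while the card's actual numbers come from the Quora duplicates dev data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")

# Placeholder pairs; the real evaluation uses the Quora duplicates dev split
sentences1 = ["Is this poem good?", "Who invented Yoga?"]
sentences2 = ["Is my poem any good?", "What is Dynamics CRM Services?"]
labels = [1, 0]  # 1 = duplicate, 0 = not duplicate

evaluator = BinaryClassificationEvaluator(
    sentences1, sentences2, labels, name="quora-duplicates"
)
results = evaluator(model)  # computes the accuracy/F1/AP metrics reported above
print(results)
```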
#### Paraphrase Mining
* Dataset: 'quora-duplicates-dev'
* Evaluated with `ParaphraseMiningEvaluator`
#### Information Retrieval
* Evaluated with `InformationRetrievalEvaluator`
Training Details
----------------
### Training Dataset
#### sentence-transformers/quora-duplicates
* Dataset: sentence-transformers/quora-duplicates at 451a485
* Size: 100,000 training samples
* Columns: `sentence1`, `sentence2`, and `label`
* Approximate statistics based on the first 1000 samples:
* Samples:
* Loss: `OnlineContrastiveLoss`
### Evaluation Dataset
#### sentence-transformers/quora-duplicates
* Dataset: sentence-transformers/quora-duplicates at 451a485
* Size: 1,000 evaluation samples
* Columns: `sentence1`, `sentence2`, and `label`
* Approximate statistics based on the first 1000 samples:
* Samples:
* Loss: `OnlineContrastiveLoss`
### Training Hyperparameters
#### Non-Default Hyperparameters
* 'eval\_strategy': steps
* 'per\_device\_train\_batch\_size': 64
* 'per\_device\_eval\_batch\_size': 64
* 'num\_train\_epochs': 1
* 'warmup\_ratio': 0.1
* 'fp16': True
* 'batch\_sampler': no\_duplicates
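Under these non-default settings, the training run can be sketched with the Sentence Transformers v3 trainer. This is an approximation of the original script, not a verbatim copy: the `pair-class` config name and the exact split selection are assumptions.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import OnlineContrastiveLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/stsb-distilbert-base")

# Pair classification data with columns sentence1, sentence2, label
# ("pair-class" config name is an assumption about the dataset layout)
dataset = load_dataset("sentence-transformers/quora-duplicates", "pair-class", split="train")
train_dataset = dataset.select(range(100_000))
eval_dataset = dataset.select(range(100_000, 101_000))

loss = OnlineContrastiveLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="stsb-distilbert-base-ocl",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```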
#### All Hyperparameters
Click to expand
* 'overwrite\_output\_dir': False
* 'do\_predict': False
* 'eval\_strategy': steps
* 'prediction\_loss\_only': False
* 'per\_device\_train\_batch\_size': 64
* 'per\_device\_eval\_batch\_size': 64
* 'per\_gpu\_train\_batch\_size': None
* 'per\_gpu\_eval\_batch\_size': None
* 'gradient\_accumulation\_steps': 1
* 'eval\_accumulation\_steps': None
* 'learning\_rate': 5e-05
* 'weight\_decay': 0.0
* 'adam\_beta1': 0.9
* 'adam\_beta2': 0.999
* 'adam\_epsilon': 1e-08
* 'max\_grad\_norm': 1.0
* 'num\_train\_epochs': 1
* 'max\_steps': -1
* 'lr\_scheduler\_type': linear
* 'lr\_scheduler\_kwargs': {}
* 'warmup\_ratio': 0.1
* 'warmup\_steps': 0
* 'log\_level': passive
* 'log\_level\_replica': warning
* 'log\_on\_each\_node': True
* 'logging\_nan\_inf\_filter': True
* 'save\_safetensors': True
* 'save\_on\_each\_node': False
* 'save\_only\_model': False
* 'no\_cuda': False
* 'use\_cpu': False
* 'use\_mps\_device': False
* 'seed': 42
* 'data\_seed': None
* 'jit\_mode\_eval': False
* 'use\_ipex': False
* 'bf16': False
* 'fp16': True
* 'fp16\_opt\_level': O1
* 'half\_precision\_backend': auto
* 'bf16\_full\_eval': False
* 'fp16\_full\_eval': False
* 'tf32': None
* 'local\_rank': 0
* 'ddp\_backend': None
* 'tpu\_num\_cores': None
* 'tpu\_metrics\_debug': False
* 'debug': []
* 'dataloader\_drop\_last': False
* 'dataloader\_num\_workers': 0
* 'dataloader\_prefetch\_factor': None
* 'past\_index': -1
* 'disable\_tqdm': False
* 'remove\_unused\_columns': True
* 'label\_names': None
* 'load\_best\_model\_at\_end': False
* 'ignore\_data\_skip': False
* 'fsdp': []
* 'fsdp\_min\_num\_params': 0
* 'fsdp\_config': {'min\_num\_params': 0, 'xla': False, 'xla\_fsdp\_v2': False, 'xla\_fsdp\_grad\_ckpt': False}
* 'fsdp\_transformer\_layer\_cls\_to\_wrap': None
* 'accelerator\_config': {'split\_batches': False, 'dispatch\_batches': None, 'even\_batches': True, 'use\_seedable\_sampler': True, 'non\_blocking': False, 'gradient\_accumulation\_kwargs': None}
* 'deepspeed': None
* 'label\_smoothing\_factor': 0.0
* 'optim': adamw\_torch
* 'optim\_args': None
* 'adafactor': False
* 'group\_by\_length': False
* 'length\_column\_name': length
* 'ddp\_find\_unused\_parameters': None
* 'ddp\_bucket\_cap\_mb': None
* 'ddp\_broadcast\_buffers': None
* 'dataloader\_pin\_memory': True
* 'dataloader\_persistent\_workers': False
* 'skip\_memory\_metrics': True
* 'use\_legacy\_prediction\_loop': False
* 'push\_to\_hub': False
* 'resume\_from\_checkpoint': None
* 'hub\_model\_id': None
* 'hub\_strategy': every\_save
* 'hub\_private\_repo': False
* 'hub\_always\_push': False
* 'gradient\_checkpointing': False
* 'gradient\_checkpointing\_kwargs': None
* 'include\_inputs\_for\_metrics': False
* 'eval\_do\_concat\_batches': True
* 'fp16\_backend': auto
* 'push\_to\_hub\_model\_id': None
* 'push\_to\_hub\_organization': None
* 'mp\_parameters':
* 'auto\_find\_batch\_size': False
* 'full\_determinism': False
* 'torchdynamo': None
* 'ray\_scope': last
* 'ddp\_timeout': 1800
* 'torch\_compile': False
* 'torch\_compile\_backend': None
* 'torch\_compile\_mode': None
* 'dispatch\_batches': None
* 'split\_batches': None
* 'include\_tokens\_per\_second': False
* 'include\_num\_input\_tokens\_seen': False
* 'neftune\_noise\_alpha': None
* 'optim\_target\_modules': None
* 'batch\_sampler': no\_duplicates
* 'multi\_dataset\_batch\_sampler': proportional
### Training Logs
### Environmental Impact
Carbon emissions were measured using CodeCarbon.
* Energy Consumed: 0.040 kWh
* Carbon Emitted: 0.016 kg of CO2
* Hours Used: 0.202 hours
### Training Hardware
* On Cloud: No
* GPU Model: 1 x NVIDIA GeForce RTX 3090
* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
* RAM Size: 31.78 GB
### Framework Versions
* Python: 3.11.6
* Sentence Transformers: 3.0.0.dev0
* Transformers: 4.41.0.dev0
* PyTorch: 2.3.0+cu121
* Accelerate: 0.26.1
* Datasets: 2.18.0
* Tokenizers: 0.19.1
### BibTeX
#### Sentence Transformers
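The citation block was stripped in this dump; the standard Sentence Transformers reference (matching the arxiv:1908.10084 tag above) is:

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```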
| [
"### Model Description\n\n\n* Model Type: Sentence Transformer\n* Base model: sentence-transformers/stsb-distilbert-base\n* Maximum Sequence Length: 128 tokens\n* Output Dimensionality: 768 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\t+ sentence-transformers/quora-duplicates\n* Language: en",
"### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face",
"### Full Model Architecture\n\n\nUsage\n-----",
"### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------",
"### Metrics",
"#### Binary Classification\n\n\n* Dataset: 'quora-duplicates'\n* Evaluated with `BinaryClassificationEvaluator`",
"#### Paraphrase Mining\n\n\n* Dataset: 'quora-duplicates-dev'\n* Evaluated with `ParaphraseMiningEvaluator`",
"#### Information Retrieval\n\n\n* Evaluated with `InformationRetrievalEvaluator`\n\n\n\nTraining Details\n----------------",
"### Training Dataset",
"#### sentence-transformers/quora-duplicates\n\n\n* Dataset: sentence-transformers/quora-duplicates at 451a485\n* Size: 100,000 training samples\n* Columns: `sentence1`, `sentence2`, and `label`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `OnlineContrastiveLoss`",
"### Evaluation Dataset",
"#### sentence-transformers/quora-duplicates\n\n\n* Dataset: sentence-transformers/quora-duplicates at 451a485\n* Size: 1,000 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `label`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `OnlineContrastiveLoss`",
"### Training Hyperparameters",
"#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 64\n* 'per\\_device\\_eval\\_batch\\_size': 64\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True\n* 'batch\\_sampler': no\\_duplicates",
"#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 64\n* 'per\\_device\\_eval\\_batch\\_size': 64\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 
'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': no\\_duplicates\n* 'multi\\_dataset\\_batch\\_sampler': proportional",
"### Training Logs",
"### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.040 kWh\n* Carbon Emitted: 0.016 kg of CO2\n* Hours Used: 0.202 hours",
"### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB",
"### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1",
"### BibTeX",
"#### Sentence Transformers"
] | [
"TAGS\n#sentence-transformers #safetensors #distilbert #sentence-similarity #feature-extraction #loss-OnlineContrastiveLoss #en #arxiv-1908.10084 #base_model-sentence-transformers/stsb-distilbert-base #model-index #co2_eq_emissions #endpoints_compatible #region-us \n",
"### Model Description\n\n\n* Model Type: Sentence Transformer\n* Base model: sentence-transformers/stsb-distilbert-base\n* Maximum Sequence Length: 128 tokens\n* Output Dimensionality: 768 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\t+ sentence-transformers/quora-duplicates\n* Language: en",
"### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face",
"### Full Model Architecture\n\n\nUsage\n-----",
"### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------",
"### Metrics",
"#### Binary Classification\n\n\n* Dataset: 'quora-duplicates'\n* Evaluated with `BinaryClassificationEvaluator`",
"#### Paraphrase Mining\n\n\n* Dataset: 'quora-duplicates-dev'\n* Evaluated with `ParaphraseMiningEvaluator`",
"#### Information Retrieval\n\n\n* Evaluated with `InformationRetrievalEvaluator`\n\n\n\nTraining Details\n----------------",
"### Training Dataset",
"#### sentence-transformers/quora-duplicates\n\n\n* Dataset: sentence-transformers/quora-duplicates at 451a485\n* Size: 100,000 training samples\n* Columns: `sentence1`, `sentence2`, and `label`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `OnlineContrastiveLoss`",
"### Evaluation Dataset",
"#### sentence-transformers/quora-duplicates\n\n\n* Dataset: sentence-transformers/quora-duplicates at 451a485\n* Size: 1,000 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `label`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `OnlineContrastiveLoss`",
"### Training Hyperparameters",
"#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 64\n* 'per\\_device\\_eval\\_batch\\_size': 64\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True\n* 'batch\\_sampler': no\\_duplicates",
"#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 64\n* 'per\\_device\\_eval\\_batch\\_size': 64\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 
'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': no\\_duplicates\n* 'multi\\_dataset\\_batch\\_sampler': proportional",
"### Training Logs",
"### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.040 kWh\n* Carbon Emitted: 0.016 kg of CO2\n* Hours Used: 0.202 hours",
"### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB",
"### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1",
"### BibTeX",
"#### Sentence Transformers"
] | [
80,
72,
29,
12,
37,
5,
28,
34,
36,
6,
80,
6,
80,
8,
106,
1494,
5,
43,
55,
79,
6,
6
] | [
"TAGS\n#sentence-transformers #safetensors #distilbert #sentence-similarity #feature-extraction #loss-OnlineContrastiveLoss #en #arxiv-1908.10084 #base_model-sentence-transformers/stsb-distilbert-base #model-index #co2_eq_emissions #endpoints_compatible #region-us \n### Model Description\n\n\n* Model Type: Sentence Transformer\n* Base model: sentence-transformers/stsb-distilbert-base\n* Maximum Sequence Length: 128 tokens\n* Output Dimensionality: 768 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\t+ sentence-transformers/quora-duplicates\n* Language: en### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face### Full Model Architecture\n\n\nUsage\n-----### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------### Metrics#### Binary Classification\n\n\n* Dataset: 'quora-duplicates'\n* Evaluated with `BinaryClassificationEvaluator`#### Paraphrase Mining\n\n\n* Dataset: 'quora-duplicates-dev'\n* Evaluated with `ParaphraseMiningEvaluator`#### Information Retrieval\n\n\n* Evaluated with `InformationRetrievalEvaluator`\n\n\n\nTraining Details\n----------------### Training Dataset#### sentence-transformers/quora-duplicates\n\n\n* Dataset: sentence-transformers/quora-duplicates at 451a485\n* Size: 100,000 training samples\n* Columns: `sentence1`, `sentence2`, and `label`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `OnlineContrastiveLoss`### Evaluation Dataset#### sentence-transformers/quora-duplicates\n\n\n* Dataset: sentence-transformers/quora-duplicates at 451a485\n* Size: 1,000 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `label`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `OnlineContrastiveLoss`### Training Hyperparameters#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 64\n* 'per\\_device\\_eval\\_batch\\_size': 64\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True\n* 'batch\\_sampler': no\\_duplicates#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 64\n* 'per\\_device\\_eval\\_batch\\_size': 64\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 
'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': no\\_duplicates\n* 'multi\\_dataset\\_batch\\_sampler': proportional### Training Logs### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.040 kWh\n* Carbon Emitted: 0.016 kg of CO2\n* Hours Used: 0.202 hours### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1### BibTeX#### Sentence Transformers"
] |
text-generation | transformers | # jarvis_v1_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using Meta-Llama-3-8B as the base.
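Concretely, task arithmetic (Ilharco et al., 2022, the paper linked above) forms the merge by adding weighted fine-tuning deltas to the base weights:

$$\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i \lambda_i \,(\theta_i - \theta_{\text{base}})$$

where each $\theta_i$ is a fine-tuned checkpoint (here Llama-3-SauerkrautLM-8b-Instruct) and $\lambda_i$ is its merge weight.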
### Models Merged
The following models were included in the merge:
* Llama-3-SauerkrautLM-8b-Instruct
### Configuration
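The configuration block was not preserved in this dump. A representative task-arithmetic config for mergekit would look like the following; the model paths and the weight value are assumptions, not the original settings:

```yaml
merge_method: task_arithmetic
base_model: Meta-Llama-3-8B            # assumed local/Hub path
dtype: bfloat16                        # assumed precision
models:
  - model: Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      weight: 1.0                      # assumed merge weight
```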
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []} | shyamieee/JARVIS-v1.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T10:03:44+00:00 | [
"2212.04089"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2212.04089 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # jarvis_v1_folder
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged with the task arithmetic merge method, using Meta-Llama-3-8B as the base.
### Models Merged
The following models were included in the merge:
* Llama-3-SauerkrautLM-8b-Instruct
### Configuration
| [
"# jarvis_v1_folder\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using Meta-Llama-3-8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Llama-3-SauerkrautLM-8b-Instruct",
"### Configuration"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2212.04089 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# jarvis_v1_folder\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using Meta-Llama-3-8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Llama-3-SauerkrautLM-8b-Instruct",
"### Configuration"
] | [
59,
22,
4,
29,
31,
4
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2212.04089 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# jarvis_v1_folder\n\nThis is a merge of pre-trained language models created using mergekit.## Merge Details### Merge Method\n\nThis model was merged using the task arithmetic merge method using Meta-Llama-3-8B as a base.### Models Merged\n\nThe following models were included in the merge:\n* Llama-3-SauerkrautLM-8b-Instruct### Configuration"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** walid-iguider
- **License:** cc-by-nc-sa-4.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
## Evaluation
For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).
Here's a breakdown of the performance metrics:
| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------------|:----------------------|:----------------|:---------------------|:--------|
| **Accuracy Normalized** | 0.5841 | 0.4414 | 0.5389 | 0.5214 |
---
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("FairMind/Phi-3-mini-4k-instruct-bnb-4bit-Ita")
model = AutoModelForCausalLM.from_pretrained("FairMind/Phi-3-mini-4k-instruct-bnb-4bit-Ita")
model.to(device)
generation_config = GenerationConfig(
penalty_alpha=0.6, # The values balance the model confidence and the degeneration penalty in contrastive search decoding.
do_sample = True, # Whether or not to use sampling ; use greedy decoding otherwise.
top_k=5, # The number of highest probability vocabulary tokens to keep for top-k-filtering.
temperature=0.001, # The value used to modulate the next token probabilities.
repetition_penalty=1.7, # The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens = 64, # The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.
eos_token_id=tokenizer.eos_token_id, # The id of the *end-of-sequence* token.
pad_token_id=tokenizer.eos_token_id, # The id of the *padding* token.
)
def generate_answer(question):
messages = [
{"role": "user", "content": question},
]
    # add_generation_prompt=True opens the assistant turn so the model answers
    # instead of continuing the user's message
    model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)
outputs = model.generate(model_inputs, generation_config=generation_config)
result = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
return result
question = """Quale è la torre più famosa di Parigi?"""  # "What is the most famous tower in Paris?"
answer = generate_answer(question)
print(answer)
```
---
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["it"], "license": "cc-by-nc-sa-4.0", "tags": ["text-generation-inference", "transformers", "unsloth", "trl", "sft", "phi-3", "phi-3-mini", "italian"], "datasets": ["mchl-labs/stambecco_data_it"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | FairMind/Phi-3-mini-4k-instruct-bnb-4bit-Ita | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"phi-3",
"phi-3-mini",
"italian",
"conversational",
"it",
"dataset:mchl-labs/stambecco_data_it",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:04:03+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #phi-3 #phi-3-mini #italian #conversational #it #dataset-mchl-labs/stambecco_data_it #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
| Uploaded model
==============
* Developed by: walid-iguider
* License: cc-by-nc-sa-4.0
* Finetuned from model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
Evaluation
----------
For a detailed comparison of model performance, check out the Leaderboard for Italian Language Models.
Here's a breakdown of the performance metrics:
---
How to Use
----------
---
This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
<img src="URL" width="200"/>
| [] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #phi-3 #phi-3-mini #italian #conversational #it #dataset-mchl-labs/stambecco_data_it #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
119
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #phi-3 #phi-3-mini #italian #conversational #it #dataset-mchl-labs/stambecco_data_it #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/toggibf | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:04:31+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/uwvnhsl | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:04:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** Parssky
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
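
No usage snippet ships with this card. Since the repository is a GGUF export, a minimal sketch with the `llama-cpp-python` runtime might look as follows; the file name, generation settings, and prompt are placeholders, not values taken from this repository.

```python
# Hedged sketch: load a GGUF checkpoint with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded GGUF file;
# the exact filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama3-8b-technicalreport.gguf",  # placeholder path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when available
)

out = llm(
    "Summarize the key findings of the attached technical report.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```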
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Parssky/Llama3-8B-TechnicalReport-bf16_GGUF | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:04:45+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Parssky
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Parssky\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Parssky\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
61,
80
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: Parssky\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-large-prova3-drone-2024_05_02-with_data_aug_batch-size32_epochs100_freeze
DinoVd'eau is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the multilabel_complete_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2945
- F1 Micro: 0.7964
- F1 Macro: 0.5349
- Roc Auc: 0.8466
- Accuracy: 0.2347
- Learning Rate: 1e-05
## Model description
DinoVd'eau is a model built on top of the dinov2 model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.
- **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc) and [Victor Illien](https://huggingface.co/groderg)
## Intended uses & limitations
You can use the raw model to classify diverse marine species, encompassing coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes and seagrass species.
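
For illustration only, the sketch below loads the checkpoint through the standard `transformers` image-classification interface and applies a sigmoid for the multilabel output. The card describes a custom head (linear/ReLU/batch-norm/dropout), so the exact loading path may differ; treat this as an assumed usage pattern rather than the authors' official snippet.

```python
# Hedged sketch: multilabel inference with this dinov2-based classifier.
# Assumes the repo exposes a standard AutoModelForImageClassification head;
# the 0.5 decision threshold and input file are illustrative choices.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "lombardata/dinov2-large-prova3-drone-2024_05_02-with_data_aug_batch-size32_epochs100_freeze"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("reef_tile.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # multilabel: sigmoid, not softmax
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```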
## Training and evaluation data
Details on the number of images for each class are given in the following table:
| |train |val |test |Total |
|--- | --- | --- | --- | --- |
| Acropore_branched | 884 | 293 | 296 | 1473 |
| Acropore_digitised | 220 | 84 | 70 | 374 |
| Acropore_sub_massive | 219 | 101 | 108 | 428 |
| Acropore_tabular | 178 | 69 | 52 | 299 |
| Algae_assembly | 1712 | 558 | 546 | 2816 |
| Algae_drawn_up | 219 | 101 | 108 | 428 |
| Algae_limestone | 218 | 59 | 74 | 351 |
| Algae_sodding | 1228 | 403 | 412 | 2043 |
| Bleached_coral | 219 | 101 | 108 | 428 |
| Dead_coral | 1192 | 393 | 392 | 1977 |
| Fish | 974 | 336 | 328 | 1638 |
| No_acropore_massive | 340 | 127 | 108 | 575 |
| No_acropore_sub_massive | 739 | 244 | 233 | 1216 |
| Rock | 2814 | 935 | 937 | 4686 |
| Sand | 2807 | 936 | 935 | 4678 |
| Scrap | 2565 | 822 | 846 | 4233 |
## Training procedure
### Data Augmentation
Data were augmented using the following transformations:
- training transformations : Sequential(
(0): PreProcess()
(1): Resize(output_size=(159, 159), p=1.0, p_batch=1.0, same_on_batch=True, size=(159, 159), side=short, resample=bilinear, align_corners=True, antialias=False)
(2): RandomHorizontalFlip(p=0.25, p_batch=1.0, same_on_batch=False)
(3): RandomVerticalFlip(p=0.25, p_batch=1.0, same_on_batch=False)
(4): ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25, p_batch=1.0, same_on_batch=False)
(5): RandomPerspective(distortion_scale=0.5, p=0.25, p_batch=1.0, same_on_batch=False, align_corners=False, resample=bilinear)
(6): Normalize(p=1.0, p_batch=1.0, same_on_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
)
- validation transformations : Sequential(
(0): PreProcess()
(1): Resize(output_size=(159, 159), p=1.0, p_batch=1.0, same_on_batch=True, size=(159, 159), side=short, resample=bilinear, align_corners=True, antialias=False)
(2): Normalize(p=1.0, p_batch=1.0, same_on_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
)
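
The printed pipelines above appear to be Kornia augmentation modules. For readers who want to reproduce them, here is a hedged reconstruction; the project's own `PreProcess()` step is not public and is omitted, so treat this as an approximation rather than the training code.

```python
# Hedged reconstruction of the printed augmentation pipelines using Kornia.
# PreProcess() is project-specific and omitted here.
import torch
import torch.nn as nn
import kornia.augmentation as K

mean = torch.tensor([0.4850, 0.4560, 0.4060])
std = torch.tensor([0.2290, 0.2240, 0.2250])

train_tf = nn.Sequential(
    K.Resize((159, 159), resample="bilinear", align_corners=True, antialias=False),
    K.RandomHorizontalFlip(p=0.25),
    K.RandomVerticalFlip(p=0.25),
    K.ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25),
    K.RandomPerspective(distortion_scale=0.5, p=0.25, resample="bilinear"),
    K.Normalize(mean=mean, std=std),
)

val_tf = nn.Sequential(
    K.Resize((159, 159), resample="bilinear", align_corners=True, antialias=False),
    K.Normalize(mean=mean, std=std),
)
```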
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1
- freeze_encoder: True
- num_epochs: 100
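
As a sketch of the optimizer/scheduler pairing listed above (assumed PyTorch equivalents, not the authors' training script; `model`, `train_one_epoch`, and `evaluate` are placeholders):

```python
# Hedged sketch of the optimizer/scheduler configuration above.
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

for epoch in range(100):
    train_one_epoch(model, optimizer)  # placeholder training step
    val_loss = evaluate(model)         # placeholder validation step
    scheduler.step(val_loss)           # reduce LR when the validation loss plateaus
```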
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Roc Auc | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-------:|:--------:|:------:|
| No log | 1.0 | 107 | 0.4159 | 0.7692 | 0.5071 | 0.8330 | 0.1710 | 0.001 |
| No log | 2.0 | 214 | 0.3285 | 0.7729 | 0.4613 | 0.8288 | 0.1880 | 0.001 |
| No log | 3.0 | 321 | 0.3240 | 0.7751 | 0.4660 | 0.8311 | 0.1871 | 0.001 |
| No log | 4.0 | 428 | 0.3120 | 0.7876 | 0.5069 | 0.8415 | 0.1862 | 0.001 |
| 0.3847 | 5.0 | 535 | 0.3163 | 0.7831 | 0.4977 | 0.8372 | 0.1737 | 0.001 |
| 0.3847 | 6.0 | 642 | 0.3099 | 0.7868 | 0.4944 | 0.8393 | 0.1898 | 0.001 |
| 0.3847 | 7.0 | 749 | 0.3092 | 0.7892 | 0.5105 | 0.8419 | 0.1889 | 0.001 |
| 0.3847 | 8.0 | 856 | 0.3131 | 0.7896 | 0.5561 | 0.8444 | 0.1880 | 0.001 |
| 0.3847 | 9.0 | 963 | 0.3076 | 0.7870 | 0.5449 | 0.8404 | 0.1808 | 0.001 |
| 0.3097 | 10.0 | 1070 | 0.3084 | 0.7884 | 0.4930 | 0.8409 | 0.1996 | 0.001 |
| 0.3097 | 11.0 | 1177 | 0.3065 | 0.7861 | 0.5140 | 0.8383 | 0.1907 | 0.001 |
| 0.3097 | 12.0 | 1284 | 0.3066 | 0.7858 | 0.5194 | 0.8384 | 0.1853 | 0.001 |
| 0.3097 | 13.0 | 1391 | 0.3058 | 0.7928 | 0.5252 | 0.8449 | 0.1898 | 0.001 |
| 0.3097 | 14.0 | 1498 | 0.3031 | 0.7938 | 0.5230 | 0.8456 | 0.1952 | 0.001 |
| 0.2993 | 15.0 | 1605 | 0.3087 | 0.7894 | 0.5194 | 0.8412 | 0.2023 | 0.001 |
| 0.2993 | 16.0 | 1712 | 0.3117 | 0.7814 | 0.5141 | 0.8337 | 0.1943 | 0.001 |
| 0.2993 | 17.0 | 1819 | 0.3129 | 0.7922 | 0.5232 | 0.8435 | 0.1961 | 0.001 |
| 0.2993 | 18.0 | 1926 | 0.3055 | 0.7911 | 0.5242 | 0.8424 | 0.2059 | 0.001 |
| 0.2952 | 19.0 | 2033 | 0.3077 | 0.7888 | 0.5164 | 0.8411 | 0.1979 | 0.001 |
| 0.2952 | 20.0 | 2140 | 0.3041 | 0.7933 | 0.5229 | 0.8458 | 0.1952 | 0.001 |
| 0.2952 | 21.0 | 2247 | 0.2988 | 0.7945 | 0.5344 | 0.8458 | 0.1916 | 0.0001 |
| 0.2952 | 22.0 | 2354 | 0.2985 | 0.7941 | 0.5420 | 0.8462 | 0.1907 | 0.0001 |
| 0.2952 | 23.0 | 2461 | 0.2991 | 0.7916 | 0.5316 | 0.8440 | 0.1898 | 0.0001 |
| 0.2823 | 24.0 | 2568 | 0.3017 | 0.7930 | 0.5312 | 0.8445 | 0.1934 | 0.0001 |
| 0.2823 | 25.0 | 2675 | 0.3015 | 0.7936 | 0.5404 | 0.8455 | 0.1961 | 0.0001 |
| 0.2823 | 26.0 | 2782 | 0.3005 | 0.7927 | 0.5416 | 0.8449 | 0.1916 | 0.0001 |
| 0.2823 | 27.0 | 2889 | 0.2994 | 0.7931 | 0.5498 | 0.8452 | 0.1952 | 0.0001 |
| 0.2823 | 28.0 | 2996 | 0.2998 | 0.7920 | 0.5446 | 0.8437 | 0.1961 | 0.0001 |
| 0.2722 | 29.0 | 3103 | 0.2991 | 0.7922 | 0.5452 | 0.8437 | 0.1961 | 1e-05 |
| 0.2722 | 30.0 | 3210 | 0.2985 | 0.7918 | 0.5425 | 0.8435 | 0.1961 | 1e-05 |
| 0.2722 | 31.0 | 3317 | 0.2997 | 0.7926 | 0.5476 | 0.8447 | 0.1889 | 1e-05 |
| 0.2722 | 32.0 | 3424 | 0.2994 | 0.7933 | 0.5493 | 0.8453 | 0.1853 | 1e-05 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0
| {"language": ["eng"], "license": "apache-2.0", "tags": ["multilabel-image-classification", "multilabel", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/dinov2-large", "model-index": [{"name": "dinov2-large-prova3-drone-2024_05_02-with_data_aug_batch-size32_epochs100_freeze", "results": []}]} | lombardata/dinov2-large-prova3-drone-2024_05_02-with_data_aug_batch-size32_epochs100_freeze | null | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"multilabel-image-classification",
"multilabel",
"generated_from_trainer",
"eng",
"base_model:facebook/dinov2-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:04:59+00:00 | [] | [
"eng"
] | TAGS
#transformers #tensorboard #safetensors #dinov2 #multilabel-image-classification #multilabel #generated_from_trainer #eng #base_model-facebook/dinov2-large #license-apache-2.0 #endpoints_compatible #region-us
| dinov2-large-prova3-drone-2024\_05\_02-with\_data\_aug\_batch-size32\_epochs100\_freeze
=======================================================================================
DinoVd'eau is a fine-tuned version of facebook/dinov2-large on the multilabel\_complete\_dataset dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2945
* F1 Micro: 0.7964
* F1 Macro: 0.5349
* Roc Auc: 0.8466
* Accuracy: 0.2347
* Learning Rate: 1e-05
Model description
-----------------
DinoVd'eau is a model built on top of the dinov2 model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.
* Developed by: lombardata, credits to César Leblanc and Victor Illien
Intended uses & limitations
---------------------------
You can use the raw model to classify diverse marine species, encompassing coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes and seagrass species.
Training and evaluation data
----------------------------
Details on the number of images for each class are given in the following table:
Training procedure
------------------
### Data Augmentation
Data were augmented using the following transformations:
* training transformations : Sequential(
(0): PreProcess()
(1): Resize(output\_size=(159, 159), p=1.0, p\_batch=1.0, same\_on\_batch=True, size=(159, 159), side=short, resample=bilinear, align\_corners=True, antialias=False)
(2): RandomHorizontalFlip(p=0.25, p\_batch=1.0, same\_on\_batch=False)
(3): RandomVerticalFlip(p=0.25, p\_batch=1.0, same\_on\_batch=False)
(4): ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25, p\_batch=1.0, same\_on\_batch=False)
(5): RandomPerspective(distortion\_scale=0.5, p=0.25, p\_batch=1.0, same\_on\_batch=False, align\_corners=False, resample=bilinear)
(6): Normalize(p=1.0, p\_batch=1.0, same\_on\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
)
* validation transformations : Sequential(
(0): PreProcess()
(1): Resize(output\_size=(159, 159), p=1.0, p\_batch=1.0, same\_on\_batch=True, size=(159, 159), side=short, resample=bilinear, align\_corners=True, antialias=False)
(2): Normalize(p=1.0, p\_batch=1.0, same\_on\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
)
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1
* freeze\_encoder: True
* num\_epochs: 100
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.0+cu118
* Datasets 2.18.0
* Tokenizers 0.15.0
| [
"### Data Augmentation\n\n\nData were augmented using the following transformations :\n\n\n* training transformations : Sequential(\n(0): PreProcess()\n(1): Resize(output\\_size=(159, 159), p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, size=(159, 159), side=short, resample=bilinear, align\\_corners=True, antialias=False)\n(2): RandomHorizontalFlip(p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(3): RandomVerticalFlip(p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(4): ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(5): RandomPerspective(distortion\\_scale=0.5, p=0.25, p\\_batch=1.0, same\\_on\\_batch=False, align\\_corners=False, resample=bilinear)\n(6): Normalize(p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))\n)\n* validation transformations : Sequential(\n(0): PreProcess()\n(1): Resize(output\\_size=(159, 159), p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, size=(159, 159), side=short, resample=bilinear, align\\_corners=True, antialias=False)\n(2): Normalize(p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))\n)",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1\n* freeze\\_encoder: True\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.0+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #dinov2 #multilabel-image-classification #multilabel #generated_from_trainer #eng #base_model-facebook/dinov2-large #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Data Augmentation\n\n\nData were augmented using the following transformations :\n\n\n* training transformations : Sequential(\n(0): PreProcess()\n(1): Resize(output\\_size=(159, 159), p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, size=(159, 159), side=short, resample=bilinear, align\\_corners=True, antialias=False)\n(2): RandomHorizontalFlip(p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(3): RandomVerticalFlip(p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(4): ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(5): RandomPerspective(distortion\\_scale=0.5, p=0.25, p\\_batch=1.0, same\\_on\\_batch=False, align\\_corners=False, resample=bilinear)\n(6): Normalize(p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))\n)\n* validation transformations : Sequential(\n(0): PreProcess()\n(1): Resize(output\\_size=(159, 159), p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, size=(159, 159), side=short, resample=bilinear, align\\_corners=True, antialias=False)\n(2): Normalize(p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))\n)",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1\n* freeze\\_encoder: True\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.0+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] | [
63,
543,
128,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #dinov2 #multilabel-image-classification #multilabel #generated_from_trainer #eng #base_model-facebook/dinov2-large #license-apache-2.0 #endpoints_compatible #region-us \n### Data Augmentation\n\n\nData were augmented using the following transformations :\n\n\n* training transformations : Sequential(\n(0): PreProcess()\n(1): Resize(output\\_size=(159, 159), p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, size=(159, 159), side=short, resample=bilinear, align\\_corners=True, antialias=False)\n(2): RandomHorizontalFlip(p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(3): RandomVerticalFlip(p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(4): ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25, p\\_batch=1.0, same\\_on\\_batch=False)\n(5): RandomPerspective(distortion\\_scale=0.5, p=0.25, p\\_batch=1.0, same\\_on\\_batch=False, align\\_corners=False, resample=bilinear)\n(6): Normalize(p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))\n)\n* validation transformations : Sequential(\n(0): PreProcess()\n(1): Resize(output\\_size=(159, 159), p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, size=(159, 159), side=short, resample=bilinear, align\\_corners=True, antialias=False)\n(2): Normalize(p=1.0, p\\_batch=1.0, same\\_on\\_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))\n)### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1\n* freeze\\_encoder: True\n* num\\_epochs: 100### Training results### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.0+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
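
Until the authors provide an official snippet, a minimal, assumed usage sketch for this Gemma-2B-it fine-tune would look like the following; the repo id is taken from this card's metadata, and everything else is an illustrative assumption.

```python
# Hedged sketch: chat inference for the Gemma-2B-it fine-tune.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "ekle-me/gemma-2b-it-EBA-finetune-106"  # from the card metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain what this model was fine-tuned for."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```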
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ekle-me/gemma-2b-it-EBA-finetune-106 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T10:05:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
46,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-testcase
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 5 | 3.0276 | 20.8462 | 6.2353 | 14.3336 | 16.8951 | 19.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
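
The card does not include a usage snippet; a hedged sketch with the text2text pipeline follows. The repo id is taken from the metadata and the input prompt is illustrative only.

```python
# Hedged sketch: generation with the fine-tuned T5 checkpoint.
from transformers import pipeline

generator = pipeline("text2text-generation", model="ridhu-s/t5-base-finetuned-testcase")
result = generator("Generate a test case for a login form with empty credentials.", max_new_tokens=64)
print(result[0]["generated_text"])
```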
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-base", "model-index": [{"name": "t5-base-finetuned-testcase", "results": []}]} | ridhu-s/t5-base-finetuned-testcase | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T10:06:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-base-finetuned-testcase
==========================
This model is a fine-tuned version of t5-base on the None dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
62,
112,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null | A model based on
[LLaMA-2](https://ai.meta.com/llama/) and fine-tuned on ChatGPT-generated corpora such as
[ru_turbo_alpaca](https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca),
[ru_turbo_saiga](https://huggingface.co/datasets/IlyaGusev/ru_turbo_saiga) and
[gpt_roleplay_realm](https://huggingface.co/datasets/IlyaGusev/gpt_roleplay_realm) | {"license": "mit"} | julicq/model-q4_K | null | [
"gguf",
"license:mit",
"region:us"
] | null | 2024-05-02T10:06:02+00:00 | [] | [] | TAGS
#gguf #license-mit #region-us
| A model based on
LLaMA-2 and fine-tuned on ChatGPT-generated corpora such as
ru_turbo_alpaca,
ru_turbo_saiga and
gpt_roleplay_realm | [] | [
"TAGS\n#gguf #license-mit #region-us \n"
] | [
13
] | [
"TAGS\n#gguf #license-mit #region-us \n"
] |
null | null |
# LlamaHermes-7B
LlamaHermes-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: DeepMount00/Llama-3-8b-Ita
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/LlamaHermes-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Create a half-precision text-generation pipeline mapped across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/LlamaHermes-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T10:08:14+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# LlamaHermes-7B
LlamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# LlamaHermes-7B\n\nLlamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# LlamaHermes-7B\n\nLlamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
27,
30,
3,
3
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n# LlamaHermes-7B\n\nLlamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.## Configuration## Usage"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# avaimon/t5-summarizer
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1356
- Validation Loss: 2.4205
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 60960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1356 | 2.4205 | 0 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
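
The card ships without a usage snippet; since the checkpoint is a TensorFlow fine-tune of mT5, a hedged inference sketch might look as follows. The repo id comes from the card, while the input text and generation settings are assumptions.

```python
# Hedged sketch: summarization with the TensorFlow mT5 checkpoint.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "avaimon/t5-summarizer"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

text = "Long article text to be summarized goes here."  # placeholder input
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```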
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/mt5-small", "model-index": [{"name": "avaimon/t5-summarizer", "results": []}]} | avaimon/t5-summarizer | null | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T10:08:39+00:00 | [] | [] | TAGS
#transformers #tf #mt5 #text2text-generation #generated_from_keras_callback #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| avaimon/t5-summarizer
=====================
This model is a fine-tuned version of google/mt5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 4.1356
* Validation Loss: 2.4205
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 5.6e-05, 'decay\_steps': 60960, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: mixed\_float16
### Training results
### Framework versions
* Transformers 4.40.1
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5.6e-05, 'decay\\_steps': 60960, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #mt5 #text2text-generation #generated_from_keras_callback #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5.6e-05, 'decay\\_steps': 60960, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
64,
226,
5,
38
] | [
"TAGS\n#transformers #tf #mt5 #text2text-generation #generated_from_keras_callback #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5.6e-05, 'decay\\_steps': 60960, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** LLMalberto
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
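
No inference snippet is included; a minimal sketch with plain `transformers` follows. The repo id comes from the metadata, and the prompt and generation settings are illustrative assumptions rather than the authors' recommended usage.

```python
# Hedged sketch: text generation with this Llama-3-8B fine-tune.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "LLMalberto/llama3-8b-unsloth-llmmaster"  # from the card metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Question: What does this fine-tune specialize in?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```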
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | LLMalberto/llama3-8b-unsloth-llmmaster | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:08:46+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: LLMalberto
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: LLMalberto\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LLMalberto\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
77,
80
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: LLMalberto\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
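
In the absence of an official snippet, a hedged sketch follows. The tags suggest a 4-bit checkpoint, so the example loads with a `BitsAndBytesConfig`; the repo id comes from the metadata and the rest is assumed.

```python
# Hedged sketch: 4-bit loading and chat inference for this Mistral fine-tune.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

repo = "Rhma/mistral_7b_fine-tuned-compt2"  # from the card metadata
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, quantization_config=bnb, device_map="auto")

messages = [{"role": "user", "content": "Summarize your fine-tuning objective."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=96)[0], skip_special_tokens=True))
```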
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | Rhma/mistral_7b_fine-tuned-compt2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T10:12:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
57,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-vocals
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1234
- Cer: 0.2848
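The card gives no usage example. A minimal inference sketch follows, assuming this is a CTC checkpoint with a bundled processor; the audio file name is a hypothetical placeholder, and XLS-R expects 16 kHz input:

```python
# Hedged sketch only — not an official example from the model authors.
import librosa
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "hongseongpil/wav2vec2-vocals"  # repo id from the record metadata
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech, _ = librosa.load("vocals.wav", sr=16_000)  # hypothetical audio file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```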
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 12
- mixed_precision_training: Native AMP
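For anyone reproducing the run, the list above maps roughly onto the standard `TrainingArguments`; a hedged reconstruction is shown below (the output directory is an assumption, and the Adam betas/epsilon in the list are the library defaults):

```python
# Approximate reconstruction of the reported hyperparameters; not the authors' actual script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-vocals",    # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=12,
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)
```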
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 6.9359 | 3.6496 | 500 | 2.8329 | 0.9873 |
| 1.1168 | 7.2993 | 1000 | 1.0561 | 0.3050 |
| 0.3139 | 10.9489 | 1500 | 1.1234 | 0.2848 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.1+cu117
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-vocals", "results": []}]} | hongseongpil/wav2vec2-vocals | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:14:09+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-vocals
===============
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1234
* Cer: 0.2848
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 12
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.0.1+cu117
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu117\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu117\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
67,
128,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu117\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null | # Convection Parameterization in CAM
Note that this repository and its code are still a work in progress and undergoing significant development.
Once a usable release is produced, it will be tagged.
## Description
This repository contains code as part of an effort to deploy machine learning (ML) models of geophysical parameterizations into the [Community Earth System Model (CESM)](https://www.cesm.ucar.edu/).
This work is part of the [M<sup>2</sup>LInES](https://m2lines.github.io/) project, which aims to improve the performance of climate models using ML models for subgrid parameterizations.
A Neural Net providing a subgrid parameterization of atmospheric convection in a [single column model](https://www.arm.gov/publications/proceedings/conf04/extended_abs/randall_da.pdf) has been developed and successfully deployed as part of an atmospheric simulation.
The work is described in a [GRL paper](https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL091363) with [accompanying code available](https://github.com/yaniyuval/Neural_nework_parameterization/tree/v.1.0.3). The repository contains the neural net and its implementation into a simple system for atmospheric modelling, [SAM](http://rossby.msrc.sunysb.edu/~marat/SAM.html).
The aims of this repository are to:
1. develop a standalone fortran module based on this neural net that can be used elsewhere,
2. deploy the module in another atmospheric model, and
3. evaluate its performance.
We may also investigate interfacing the PyTorch implementation of the Neural Net using the [pytorch-fortran bridging code](https://github.com/Cambridge-ICCS/fortran-pytorch-lib) developed at the [Institute of Computing for Climate Science](https://cambridge-iccs.github.io/).
The model will first be deployed into the [Single Column Atmospheric Model (SCAM)](https://www.cesm.ucar.edu/models/simple/scam) - a single column version of the CESM.
We plan to evaluate performance using SCAM in the gateIII configuration for tropical convection, in a manner similar to that described in the [SCAM6 publication in JAMES](https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2018MS001578).
This will compare model performance to data from an intense observation period (IOP) described in an [AMS publication](https://journals.ametsoc.org/view/journals/atsc/36/1/1520-0469_1979_036_0053_saposs_2_0_co_2.xml).
Long-term developments of this project will seek to re-deploy more complex ML parameterizations into more complex atmospheric models such as the [Community Atmospheric Model (CAM)](https://www.cesm.ucar.edu/models/cam), part of the CESM.
## Repository structure
```
├── NN_module
│ └── ...
└── torch_nets
└── ...
```
### Contents
### `NN_module/`
This folder contains the fortran neural net extracted from the [code referenced above](https://github.com/yaniyuval/Neural_nework_parameterization/tree/v.1.0.3), along with any dependencies, that may be compiled as a standalone fortran module.
Currently there is code that can be built on CSD3 using the included shell script.
This now needs cleaning up, testing, and a proper makefile (see open issues #9 and #10).
### `torch_nets/`
The directory contains the PyTorch versions of the neural networks we are interested in.
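Since the eventual host is a Fortran model, one plausible hand-off from `torch_nets/` is a TorchScript export, which Fortran/C++ bridging layers typically consume. A purely illustrative sketch follows — the toy MLP below is a stand-in, not one of the repository's actual architectures:

```python
# Illustrative only: a placeholder net standing in for the real torch_nets models.
import torch
from torch import nn

net = nn.Sequential(       # toy stand-in; layer sizes are arbitrary
    nn.Linear(48, 128),
    nn.ReLU(),
    nn.Linear(128, 48),
)
net.eval()

# Serialize to TorchScript so a non-Python host can load and run the net.
scripted = torch.jit.script(net)
scripted.save("convection_net.pt")  # hypothetical file name
```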
## Contributing
This repository is currently private as it is new and a work in progress.
Open tickets can be viewed at ['Issues'](https://github.com/m2lines/convection-parameterization-in-CAM/issues).
To contribute, find a relevant issue or open a new one and assign yourself to work on it.
Then create a branch in which to add your contribution and open a pull request.
Once ready, assign a reviewer and request a code review.
Merging should _only_ be performed once a code review has taken place. | {"language": ["en"], "license": "mit", "tags": ["climate"]} | ICCS/cam-ml-yog-v0 | null | [
"climate",
"en",
"license:mit",
"region:us"
] | null | 2024-05-02T10:14:34+00:00 | [] | [
"en"
] | TAGS
#climate #en #license-mit #region-us
| # Convection Parameterization in CAM
Note that this repository and its code are still a work in progress and undergoing significant development.
Once a usable release is produced, it will be tagged.
## Description
This repository contains code as part of an effort to deploy machine learning (ML) models of geophysical parameterizations into the Community Earth System Model (CESM).
This work is part of the M<sup>2</sup>LInES project, which aims to improve the performance of climate models using ML models for subgrid parameterizations.
A Neural Net providing a subgrid parameterization of atmospheric convection in a single column model has been developed and successfully deployed as part of an atmospheric simulation.
The work is described in a GRL paper with accompanying code available. The repository contains the neural net and its implementation into a simple system for atmospheric modelling, SAM.
The aims of this repository are to:
1. develop a standalone fortran module based on this neural net that can be used elsewhere,
2. deploy the module in another atmospheric model, and
3. evaluate its performance.
We may also investigate interfacing the PyTorch implementation of the Neural Net using the pytorch-fortran bridging code developed at the Institute of Computing for Climate Science.
The model will first be deployed into the Single Column Atmospheric Model (SCAM) - a single column version of the CESM.
We plan to evaluate performance using SCAM in the gateIII configuration for tropical convection, in a manner similar to that described in the SCAM6 publication in JAMES.
This will compare model performance to data from an intense observation period (IOP) described in an AMS publication.
Long-term developments of this project will seek to re-deploy more complex ML parameterizations into more complex atmospheric models such as the Community Atmospheric Model (CAM), part of the CESM.
## Repository structure
### Contents
### 'NN_module/'
This folder contains the fortran neural net extracted from the code referenced above, along with any dependencies, that may be compiled as a standalone fortran module.
Currently there is code that can be built on CSD3 using the included shell script.
This now needs cleaning up, testing, and a proper makefile (see open issues #9 and #10).
### 'torch_nets/'
The directory contains the PyTorch versions of the neural networks we are interested in.
## Contributing
This repository is currently private as it is new and a work in progress.
Open tickets can be viewed at 'Issues'.
To contribute, find a relevant issue or open a new one and assign yourself to work on it.
Then create a branch in which to add your contribution and open a pull request.
Once ready, assign a reviewer and request a code review.
Merging should _only_ be performed once a code review has taken place. | [
"# Convection Parameterization in CAM\n\nNote that this repository and code is still work in progress and undergoing significant development.\nOnce a useable release is produced it will be tagged.",
"## Description\nThis repository contains code as part of an effort to deploy machine learning (ML) models of geophysical parameterisations into the Community Earth System Model (CESM).\nThis work is part of the M<sup>2</sup>LInES project aiming to improve performance of climate models using ML models for subgrid parameterizations.\n\nA Neural Net providing a subgrid parameterization of atmospheric convection in a single column model has been developed and successfully deployed as part of an atmospheric simulation.\nThe work is described in a GRL paper with accompanying code available. The repository contains the neural net and its implementation into a simple system for atmospheric modelling, SAM.\n\nThe aims of this repository are to:\n1. develop a standalone fortran module based on this neural net that can be used elsewhere,\n2. deploy the module in another atmospheric model, and\n3. evaluate its performance.\n\nWe may also perform an investigation into interfacing the pytorch implementation of the Neural Net using the pytorch-fortran bridging code developed at the Institute of Computing for Climate Science.\n\nThe model will first be deployed into the Single Column Atmospheric Model (SCAM) - a single column version of the CESM.\nWe plan to evaluate performance using SCAM in the gateIII configuration for tropical convection in a similar manner described by the SCAM6 pulication in JAMES.\nThis will compare model performance to data from an intense observation period (IOP) described in an AMS publication.\n\nLong term developments of this project will seek to re-deploy more complex ML parameterizations into mode complex atmospheric models such as the Community Atmospheric Model (CAM) part of the CESM.",
"## Repository structure",
"### Contents",
"### 'NN_module/'\nThis folder contains the fortran neural net extracted from the code referenced above, along with any dependencies, that may be compiled as a standalone fortran module.\n\nCurrently there is code that can be built on CSD3 using the included shell script.\n\nThis now needs cleaning up, testing, and a proper makefile creating (see open issues #9 and #10).",
"### ''torch_nets/''\nThe directory contains the PyTorch versions of the neural networks we are interested in.",
"## Contributing\n\nThis repository is currently private as it is new and work in progress.\nOpen tickets can be viewed at 'Issues'.\n\nTo contribute find a relevant issue or open a new one and assign yourself to work on it.\nThen create a branch in which to add your contribution and open a pull request.\nOnce ready assign a reviewer and request a code review.\nMerging should _only_ be performed once a code review has taken place."
] | [
"TAGS\n#climate #en #license-mit #region-us \n",
"# Convection Parameterization in CAM\n\nNote that this repository and code is still work in progress and undergoing significant development.\nOnce a useable release is produced it will be tagged.",
"## Description\nThis repository contains code as part of an effort to deploy machine learning (ML) models of geophysical parameterisations into the Community Earth System Model (CESM).\nThis work is part of the M<sup>2</sup>LInES project aiming to improve performance of climate models using ML models for subgrid parameterizations.\n\nA Neural Net providing a subgrid parameterization of atmospheric convection in a single column model has been developed and successfully deployed as part of an atmospheric simulation.\nThe work is described in a GRL paper with accompanying code available. The repository contains the neural net and its implementation into a simple system for atmospheric modelling, SAM.\n\nThe aims of this repository are to:\n1. develop a standalone fortran module based on this neural net that can be used elsewhere,\n2. deploy the module in another atmospheric model, and\n3. evaluate its performance.\n\nWe may also perform an investigation into interfacing the pytorch implementation of the Neural Net using the pytorch-fortran bridging code developed at the Institute of Computing for Climate Science.\n\nThe model will first be deployed into the Single Column Atmospheric Model (SCAM) - a single column version of the CESM.\nWe plan to evaluate performance using SCAM in the gateIII configuration for tropical convection in a similar manner described by the SCAM6 pulication in JAMES.\nThis will compare model performance to data from an intense observation period (IOP) described in an AMS publication.\n\nLong term developments of this project will seek to re-deploy more complex ML parameterizations into mode complex atmospheric models such as the Community Atmospheric Model (CAM) part of the CESM.",
"## Repository structure",
"### Contents",
"### 'NN_module/'\nThis folder contains the fortran neural net extracted from the code referenced above, along with any dependencies, that may be compiled as a standalone fortran module.\n\nCurrently there is code that can be built on CSD3 using the included shell script.\n\nThis now needs cleaning up, testing, and a proper makefile creating (see open issues #9 and #10).",
"### ''torch_nets/''\nThe directory contains the PyTorch versions of the neural networks we are interested in.",
"## Contributing\n\nThis repository is currently private as it is new and work in progress.\nOpen tickets can be viewed at 'Issues'.\n\nTo contribute find a relevant issue or open a new one and assign yourself to work on it.\nThen create a branch in which to add your contribution and open a pull request.\nOnce ready assign a reviewer and request a code review.\nMerging should _only_ be performed once a code review has taken place."
] | [
13,
34,
342,
4,
4,
86,
29,
88
] | [
"TAGS\n#climate #en #license-mit #region-us \n# Convection Parameterization in CAM\n\nNote that this repository and code is still work in progress and undergoing significant development.\nOnce a useable release is produced it will be tagged.## Description\nThis repository contains code as part of an effort to deploy machine learning (ML) models of geophysical parameterisations into the Community Earth System Model (CESM).\nThis work is part of the M<sup>2</sup>LInES project aiming to improve performance of climate models using ML models for subgrid parameterizations.\n\nA Neural Net providing a subgrid parameterization of atmospheric convection in a single column model has been developed and successfully deployed as part of an atmospheric simulation.\nThe work is described in a GRL paper with accompanying code available. The repository contains the neural net and its implementation into a simple system for atmospheric modelling, SAM.\n\nThe aims of this repository are to:\n1. develop a standalone fortran module based on this neural net that can be used elsewhere,\n2. deploy the module in another atmospheric model, and\n3. evaluate its performance.\n\nWe may also perform an investigation into interfacing the pytorch implementation of the Neural Net using the pytorch-fortran bridging code developed at the Institute of Computing for Climate Science.\n\nThe model will first be deployed into the Single Column Atmospheric Model (SCAM) - a single column version of the CESM.\nWe plan to evaluate performance using SCAM in the gateIII configuration for tropical convection in a similar manner described by the SCAM6 pulication in JAMES.\nThis will compare model performance to data from an intense observation period (IOP) described in an AMS publication.\n\nLong term developments of this project will seek to re-deploy more complex ML parameterizations into mode complex atmospheric models such as the Community Atmospheric Model (CAM) part of the CESM.## Repository structure### Contents### 'NN_module/'\nThis folder contains the fortran neural net extracted from the code referenced above, along with any dependencies, that may be compiled as a standalone fortran module.\n\nCurrently there is code that can be built on CSD3 using the included shell script.\n\nThis now needs cleaning up, testing, and a proper makefile creating (see open issues #9 and #10).### ''torch_nets/''\nThe directory contains the PyTorch versions of the neural networks we are interested in.## Contributing\n\nThis repository is currently private as it is new and work in progress.\nOpen tickets can be viewed at 'Issues'.\n\nTo contribute find a relevant issue or open a new one and assign yourself to work on it.\nThen create a branch in which to add your contribution and open a pull request.\nOnce ready assign a reviewer and request a code review.\nMerging should _only_ be performed once a code review has taken place."
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 0.9890
- Recall: 1.0
- F1: 0.9945
- Accuracy: 1.0000
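No usage example is provided; below is a minimal sketch via the `transformers` token-classification pipeline — the input sentence is illustrative and the card does not document the label schema:

```python
# Hedged sketch — the label set of this fine-tune is undocumented.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="lilyyellow/my_awesome_model",  # repo id from the record metadata
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York City."))
```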
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0002 | 1.0 | 806 | 0.0007 | 0.7980 | 0.9 | 0.8460 | 0.9996 |
| 0.0001 | 2.0 | 1612 | 0.0002 | 0.9569 | 0.9861 | 0.9713 | 0.9999 |
| 0.0 | 3.0 | 2418 | 0.0001 | 0.9728 | 0.9917 | 0.9821 | 0.9999 |
| 0.0 | 4.0 | 3224 | 0.0001 | 0.9863 | 1.0 | 0.9931 | 1.0000 |
| 0.0 | 5.0 | 4030 | 0.0001 | 0.9890 | 1.0 | 0.9945 | 1.0000 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "FacebookAI/xlm-roberta-base", "model-index": [{"name": "my_awesome_model", "results": []}]} | lilyyellow/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:15:50+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-FacebookAI/xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_model
==================
This model is a fine-tuned version of FacebookAI/xlm-roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0001
* Precision: 0.9890
* Recall: 1.0
* F1: 0.9945
* Accuracy: 1.0000
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-FacebookAI/xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
57,
103,
5,
40
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-FacebookAI/xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
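The card leaves this blank. A minimal sketch assuming the repo id from this record's metadata; the `custom_code` tag implies the checkpoint ships remote code, so `trust_remote_code=True` is presumably required:

```python
# Hypothetical usage; the model authors provide no official snippet.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="yashdkadam/train-on-cleaned-dataset",  # from the record metadata
    trust_remote_code=True,                       # needed for custom_code checkpoints
)
print(generate("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```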
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | yashdkadam/train-on-cleaned-dataset | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:15:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
45,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification_gpt2_1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4571
- Accuracy: 0.792
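The card omits a usage example; a minimal sketch via the `transformers` text-classification pipeline follows — the input text is illustrative and the class labels are not documented on the card:

```python
# Hedged sketch; the fine-tune's label names are undocumented.
from transformers import pipeline

clf = pipeline("text-classification", model="badrabbitt/text_classification_gpt2_1")
print(clf("This movie was surprisingly good."))
```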
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9088 | 0.1 | 625 | 0.6229 | 0.662 |
| 1.0188 | 0.2 | 1250 | 0.5429 | 0.7092 |
| 0.9781 | 0.3 | 1875 | 0.5086 | 0.7556 |
| 0.6919 | 0.4 | 2500 | 0.4571 | 0.792 |
| 0.002 | 0.5 | 3125 | 0.5278 | 0.7892 |
| 1.6834 | 0.6 | 3750 | 0.5348 | 0.8104 |
| 0.0436 | 0.7 | 4375 | 0.4732 | 0.826 |
| 0.0332 | 0.8 | 5000 | 0.4995 | 0.8252 |
| 0.9907 | 0.9 | 5625 | 0.4764 | 0.8344 |
| 0.9651 | 1.0 | 6250 | 0.4809 | 0.8364 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "gpt2", "model-index": [{"name": "text_classification_gpt2_1", "results": []}]} | badrabbitt/text_classification_gpt2_1 | null | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T10:16:27+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt2 #text-classification #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| text\_classification\_gpt2\_1
=============================
This model is a fine-tuned version of gpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4571
* Accuracy: 0.792
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.01
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-classification #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
53,
121,
5,
44
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-classification #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
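The card leaves this section blank; below is a minimal, hedged sketch assuming the repo id from this record's metadata and the standard causal-LM API (the prompt and sampling settings are illustrative):

```python
# Hypothetical sketch; no official example is provided on the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOpeepeepoopoo/herewegoagain2"  # from the record metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_ids = tokenizer("Once upon a time", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```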
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain2 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:17:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {} | miansumairjaved/msj | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-02T10:17:50+00:00 | [
"1910.09700"
] | [] | TAGS
#arxiv-1910.09700 #region-us
|
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#arxiv-1910.09700 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
15,
28,
4,
50,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#arxiv-1910.09700 #region-us \n# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null |
# ChimerallamaHermes-7B
ChimerallamaHermes-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: mlabonne/ChimeraLlama-3-8B
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
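To reproduce the merge locally, the YAML above can be passed to mergekit's CLI. A sketch, assuming `pip install mergekit` and an arbitrary output directory:

```bash
mergekit-yaml config.yaml ./ChimerallamaHermes-7B --cuda
```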
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/ChimerallamaHermes-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# fp16 text-generation pipeline, placed automatically across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sampling settings as given in the original card
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/ChimerallamaHermes-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T10:17:53+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# ChimerallamaHermes-7B
ChimerallamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# ChimerallamaHermes-7B\n\nChimerallamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# ChimerallamaHermes-7B\n\nChimerallamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
27,
34,
3,
3
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n# ChimerallamaHermes-7B\n\nChimerallamaHermes-7B is an automated merge created by Maxime Labonne using the following configuration.## Configuration## Usage"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
YetAnother_Open-Llama-3B-LoRA-OpenOrca - GGUF
- Model creator: https://huggingface.co/Andron00e/
- Original model: https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q2_K.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q2_K.gguf) | Q2_K | 1.84GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ3_XS.gguf) | IQ3_XS | 1.84GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ3_S.gguf) | IQ3_S | 1.84GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K_S.gguf) | Q3_K_S | 1.84GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ3_M.gguf) | IQ3_M | 1.92GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K.gguf) | Q3_K | 1.99GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K_M.gguf) | Q3_K_M | 1.99GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q3_K_L.gguf) | Q3_K_L | 2.06GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ4_XS.gguf) | IQ4_XS | 1.86GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_0.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_0.gguf) | Q4_0 | 1.84GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.IQ4_NL.gguf) | IQ4_NL | 1.86GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_K_S.gguf) | Q4_K_S | 2.24GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_K.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_K.gguf) | Q4_K | 2.4GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_K_M.gguf) | Q4_K_M | 2.4GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_1.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_1.gguf) | Q4_1 | 2.04GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_0.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_0.gguf) | Q5_0 | 2.23GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_K_S.gguf) | Q5_K_S | 2.42GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_K.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_K.gguf) | Q5_K | 2.57GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_K_M.gguf) | Q5_K_M | 2.57GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_1.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q5_1.gguf) | Q5_1 | 2.42GB |
| [YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q6_K.gguf](https://huggingface.co/RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf/blob/main/YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q6_K.gguf) | Q6_K | 3.39GB |
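These are standard GGUF files, so any llama.cpp-compatible runtime should load them. A sketch with the llama.cpp CLI, where the binary name and flags are assumptions that vary by build:

```bash
./llama-cli -m YetAnother_Open-Llama-3B-LoRA-OpenOrca.Q4_K_M.gguf \
  -p "Q: What is a LoRA adapter? A:" -n 128
```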
Original model description:
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: question-answering
metrics:
- accuracy
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Andron00e
- **Language(s) (NLP):** Python (PyTorch, transformers, peft)
- **License:** apache-2.0
- **Finetuned from model:** openlm-research/open_llama_3b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Andron00e/Fine-Tuning-project
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Open-Orca/OpenOrca
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Evaluation of the model was carried out using the EleutherAI [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/e47e01beea79cfe87421e2dac49e64d499c240b4#task-versioning) library, pinned to the linked commit.
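For reference, a sketch of how such a run is typically launched with that era of the harness; the model path, few-shot count, and exact flags are assumptions that depend on the pinned commit:

```bash
python main.py \
  --model hf-causal \
  --model_args pretrained=Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca \
  --tasks hellaswag \
  --num_fewshot 0
```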
#### Testing Data
<!-- This should link to a Data Card if possible. -->
hellaswag testing dataset
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Accuracy
### Results and Model Examination
| Task |Version| Metric |Value | |Stderr|
|---------|------:|--------|-----:|---|-----:|
|hellaswag| 0|acc |0.4899|± |0.0050|
| | |acc_norm|0.6506|± |0.0048|
## Citations
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}
```
## Model Card Authors and Contact
[Andron00e](https://github.com/Andron00e)
| {} | RichardErkhov/Andron00e_-_YetAnother_Open-Llama-3B-LoRA-OpenOrca-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-02T10:19:16+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
YetAnother\_Open-Llama-3B-LoRA-OpenOrca - GGUF
* Model creator: URL
* Original model: URL
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q2\_K.gguf, Quant method: Q2\_K, Size: 1.84GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 1.84GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 1.84GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 1.84GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 1.92GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q3\_K.gguf, Quant method: Q3\_K, Size: 1.99GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 1.99GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 2.06GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 1.86GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q4\_0.gguf, Quant method: Q4\_0, Size: 1.84GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 1.86GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 2.24GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q4\_K.gguf, Quant method: Q4\_K, Size: 2.4GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 2.4GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q4\_1.gguf, Quant method: Q4\_1, Size: 2.04GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q5\_0.gguf, Quant method: Q5\_0, Size: 2.23GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 2.42GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q5\_K.gguf, Quant method: Q5\_K, Size: 2.57GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 2.57GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q5\_1.gguf, Quant method: Q5\_1, Size: 2.42GB
Name: YetAnother\_Open-Llama-3B-LoRA-OpenOrca.Q6\_K.gguf, Quant method: Q6\_K, Size: 3.39GB
Original model description:
---------------------------
license: apache-2.0
datasets:
* Open-Orca/OpenOrca
language:
* en
library\_name: transformers
pipeline\_tag: question-answering
metrics:
* accuracy
---
Model Details
-------------
### Model Description
* Developed by: Andron00e
* Language(s) (NLP): Python (PyTorch, transformers, peft)
* License: apache-2.0
* Finetuned from model: openlm-research/open\_llama\_3b
### Model Sources [optional]
* Repository: URL
### Training Data
URL
Evaluation
----------
Evaluation of the model was carried out using the EleutherAI lm-evaluation-harness library (pinned to a specific commit).
#### Testing Data
hellaswag testing dataset
#### Metrics
Accuracy
### Results and Model Examination
Model Card Authors and Contact
------------------------------
Andron00e
| [
"### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b",
"### Model Sources [optional]\n\n\n* Repository: URL",
"### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely",
"#### Testing Data\n\n\nhellaswag testing dataset",
"#### Metrics\n\n\nAccuracy",
"### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] | [
"TAGS\n#gguf #region-us \n",
"### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b",
"### Model Sources [optional]\n\n\n* Repository: URL",
"### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely",
"#### Testing Data\n\n\nhellaswag testing dataset",
"#### Metrics\n\n\nAccuracy",
"### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] | [
9,
64,
13,
33,
13,
7,
47
] | [
"TAGS\n#gguf #region-us \n### Model Description\n\n\n* Developed by: Andron00e\n* Language(s) (NLP): Python (PyTorch, transformers, peft)\n* License: apache-2.0\n* Finetuned from model: openlm-research/open\\_llama\\_3b### Model Sources [optional]\n\n\n* Repository: URL### Training Data\n\n\nURL\n\n\nEvaluation\n----------\n\n\nEvaluation of the model was carried out using EulerAI library, more precisely#### Testing Data\n\n\nhellaswag testing dataset#### Metrics\n\n\nAccuracy### Results and Model Examination\n\n\n\ns\n\n\nModel Card Authors and Contact\n------------------------------\n\n\nAndron00e"
] |
null | transformers |
# Uploaded model
- **Developed by:** aiaustin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
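A minimal inference sketch with Unsloth; the sequence length and 4-bit flag below are assumptions, not settings taken from the author:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="aiaustin/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switches on Unsloth's fast generation path
```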
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | aiaustin/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2024-05-02T10:21:46+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #8-bit #region-us
|
# Uploaded model
- Developed by: aiaustin
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: aiaustin\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #8-bit #region-us \n",
"# Uploaded model\n\n- Developed by: aiaustin\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
78,
83
] | [
"TAGS\n#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #8-bit #region-us \n# Uploaded model\n\n- Developed by: aiaustin\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained policy from the Hub and load it into SB3
checkpoint = load_from_hub("sairangoju/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "243.91 +/- 24.07", "name": "mean_reward", "verified": false}]}]}]} | sairangoju/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-02T10:24:54+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
31,
35,
17
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
illuni-llama-2-ko-7b-test - bnb 4bits
- Model creator: https://huggingface.co/julleong/
- Original model: https://huggingface.co/julleong/illuni-llama-2-ko-7b-test/
Original model description:
---
language:
- ko
base_model: beomi/llama-2-ko-7b
license: mit
pipeline_tag: question-answering
tags:
- instruct
- instruction
- llama-2
---
# llama2-7b
### Model Details
- Developed by: Julleong
- Backbone Model: beomi/llama-2-ko-7b
- Library: [transformers](https://github.com/huggingface/transformers)
### Used Datasets
- Named Entity Dictionary 2022 (Modu Corpus, 모두의 말뭉치)
### Prompt Template
```
<usr>
{Instruction}
<bot>
{Answer}
```
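A sketch of filling the template in Python; the trailing newline after `<bot>` is an assumption:

```python
def build_prompt(instruction: str) -> str:
    # <usr>/<bot> markers follow the template above
    return f"<usr>\n{instruction}\n<bot>\n"

print(build_prompt("What is named entity recognition?"))
```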
### License
- MIT
| {} | RichardErkhov/julleong_-_illuni-llama-2-ko-7b-test-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T10:25:29+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
illuni-llama-2-ko-7b-test - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
language:
- ko
base_model: beomi/llama-2-ko-7b
license: mit
pipeline_tag: question-answering
tags:
- instruct
- instruction
- llama-2
---
# llama2-7b
### Model Details
- Developed by: Julleong
- Backbone Model: beomi/llama-2-ko-7b
- Library: transformers
### Used Datasets
- Named Entity Dictionary 2022 (Modu Corpus, 모두의 말뭉치)
### Prompt Template
### License
- MIT
| [
"# llama2-7b",
"### Model Details\n- Developed by: Julleong\n- Backbone Model: beomi/llama-2-ko-7b\n- Library: transformers",
"### Used Datasets\n- 개체명 사전 2022(모두의 말뭉치)",
"### Prompt Template",
"### License\n- MIT"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# llama2-7b",
"### Model Details\n- Developed by: Julleong\n- Backbone Model: beomi/llama-2-ko-7b\n- Library: transformers",
"### Used Datasets\n- 개체명 사전 2022(모두의 말뭉치)",
"### Prompt Template",
"### License\n- MIT"
] | [
38,
7,
32,
38,
5,
6
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# llama2-7b### Model Details\n- Developed by: Julleong\n- Backbone Model: beomi/llama-2-ko-7b\n- Library: transformers### Used Datasets\n- 개체명 사전 2022(모두의 말뭉치)### Prompt Template### License\n- MIT"
] |
text-generation | transformers |
Trained from athirdpath/Llama-3-15b-Instruct on 0.6 epochs of athirdpath/Merge_Glue.
I cut training off early because I didn't want the model taking on the tone/logic of the largely GPT-3.5/4 data; the goal was just to heal the grafts.
It amazes me how, like the human brain, an LLM can recover from massive trauma by just being given an opportunity to reroute. Neuroplasticity is magic.


<p align="center"><font size="5"> <b>Date Writing Example @ q6_k</b> </font></p>
<b>Write a medium-length story about Alice, a 26 year old cocktail waitress, having steamy sex with a guy she picked up at the bar for a one night stand.</b>
<i>
Sure thing! I'll start by setting the scene for our protagonist, Alice. She's a stunning 26-year-old cocktail waitress who works at an upscale hotel bar near downtown Los Angeles. The atmosphere is always lively and bustling with business professionals and travelers looking to unwind after work or dinner.
Alice has a way of captivating people's attention, drawing them in with her bright blue eyes, long blonde hair, and radiant smile. She effortlessly juggles multiple conversations at once while expertly crafting her signature cocktails. Tonight, she's wearing a sleek black dress that accentuates her curves as she glides from table to table.
As the night wears on, Alice notices a handsome man standing alone at the end of the bar. He's tall and lean, with piercing brown eyes and chiseled features. There's something about him that catches her attention - maybe it's his confidence or sense of vulnerability. Whatever it is, she can't help but feel drawn to him.
As she approaches him with a refill on his drink, their gazes lock for a brief moment. The air crackles with electricity as they exchange a flirtatious smile and an easy conversation unfolds. Before long, the man introduces himself as Ryan, a successful businessman who's in town for work. Alice is charmed by his stories of adventure and travel.
As the night wears on, their banter turns more intimate. The tension between them builds until it becomes impossible to ignore. The chemistry between them grows stronger with each passing moment until they can no longer resist the allure of a one-night stand.
After sharing a few drinks, Alice leads Ryan away from the bar and into the hotel suite he's booked for the night. As they undress and slip under the sheets, their desire is palpable. The room darkens, lit only by soft candlelight that casts an eerie glow on their bodies.
Their passion builds until it reaches a crescendo as Alice wraps her arms around Ryan's neck and pulls him in close. Their lips meet in a kiss that defies all boundaries of modesty. The hotel suite is filled with the sounds of moans, gasps, and groans as they explore each other's desires.
Finally, after an unforgettable night of lust and abandon, their bodies come to rest against one another - exhausted but exhilarated by their forbidden pleasure. And though it will likely be their last meeting in this life, Alice knows that the memory of this night will forever linger in her heart.
</i> | {"license": "llama3"} | athirdpath/Llama-3-15b-Instruct-GLUED | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T10:28:15+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Trained from athirdpath/Llama-3-15b-Instruct on 0.6 epochs of athirdpath/Merge_Glue.
I cut training off early because I didn't want the model taking on the tone/logic of the largely GPT-3.5/4 data; the goal was just to heal the grafts.
It amazes me how, like the human brain, an LLM can recover from massive trauma by just being given an opportunity to reroute. Neuroplasticity is magic.
!image/png
!image/png
<p align="center"><font size="5"> <b>Date Writing Example @ q6_k</b> </font></p>
<b>Write a medium-length story about Alice, a 26 year old cocktail waitress, having steamy sex with a guy she picked up at the bar for a one night stand.</b>
<i>
Sure thing! I'll start by setting the scene for our protagonist, Alice. She's a stunning 26-year-old cocktail waitress who works at an upscale hotel bar near downtown Los Angeles. The atmosphere is always lively and bustling with business professionals and travelers looking to unwind after work or dinner.
Alice has a way of captivating people's attention, drawing them in with her bright blue eyes, long blonde hair, and radiant smile. She effortlessly juggles multiple conversations at once while expertly crafting her signature cocktails. Tonight, she's wearing a sleek black dress that accentuates her curves as she glides from table to table.
As the night wears on, Alice notices a handsome man standing alone at the end of the bar. He's tall and lean, with piercing brown eyes and chiseled features. There's something about him that catches her attention - maybe it's his confidence or sense of vulnerability. Whatever it is, she can't help but feel drawn to him.
As she approaches him with a refill on his drink, their gazes lock for a brief moment. The air crackles with electricity as they exchange a flirtatious smile and an easy conversation unfolds. Before long, the man introduces himself as Ryan, a successful businessman who's in town for work. Alice is charmed by his stories of adventure and travel.
As the night wears on, their banter turns more intimate. The tension between them builds until it becomes impossible to ignore. The chemistry between them grows stronger with each passing moment until they can no longer resist the allure of a one-night stand.
After sharing a few drinks, Alice leads Ryan away from the bar and into the hotel suite he's booked for the night. As they undress and slip under the sheets, their desire is palpable. The room darkens, lit only by soft candlelight that casts an eerie glow on their bodies.
Their passion builds until it reaches a crescendo as Alice wraps her arms around Ryan's neck and pulls him in close. Their lips meet in a kiss that defies all boundaries of modesty. The hotel suite is filled with the sounds of moans, gasps, and groans as they explore each other's desires.
Finally, after an unforgettable night of lust and abandon, their bodies come to rest against one another - exhausted but exhilarated by their forbidden pleasure. And though it will likely be their last meeting in this life, Alice knows that the memory of this night will forever linger in her heart.
</i> | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
43
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
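The card leaves this blank. A minimal sketch for this Llama-based checkpoint; the prompt format is a guess, since the card does not document one:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CHAFIK12/tsql_to_plsql_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs `accelerate`

prompt = "Translate this T-SQL to PL/SQL:\nSELECT TOP 5 * FROM employees;"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```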
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | CHAFIK12/tsql_to_plsql_merged | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T10:28:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #pytorch #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #pytorch #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
49,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #pytorch #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
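The card leaves this section blank. Purely as a minimal sketch — assuming the checkpoint loads with the standard `transformers` causal-LM API and ships a chat template (suggested by the repository's `conversational` tag but not confirmed by the card) — usage might look like this:

```python
# Minimal sketch only; the model id is taken from this card's repository,
# and the prompt/generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/model41"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```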
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model41 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:30:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: pietroorlandi/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]} | pietroorlandi/ppo-SnowballTarget | null | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | null | 2024-05-02T10:30:49+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us
|
# ppo Agent playing SnowballTarget
This is a trained model of a ppo agent playing SnowballTarget
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: pietroorlandi/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: pietroorlandi/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n",
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: pietroorlandi/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
39,
206
] | [
"TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: pietroorlandi/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_mrqa_v1
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9723
## Model description
More information needed
## Intended uses & limitations
More information needed
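As an illustration only — the card does not document usage, but the repository's question-answering tag suggests the standard extractive-QA pipeline — inference might look like this (the question and context below are made up):

```python
# Illustrative sketch; the model id comes from this card's repository.
from transformers import pipeline

qa = pipeline("question-answering", model="enriquesaou/roberta_mrqa_v1")
result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of FacebookAI/roberta-base.",
)
print(result)  # dict with 'score', 'start', 'end', and 'answer'
```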
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
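For reference, a sketch of the equivalent `TrainingArguments` — assuming the standard `Trainer` API was used, which the card does not state; `output_dir` is a placeholder:

```python
# Sketch mirroring the hyperparameters listed above; not the authors' script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta_mrqa_v1",   # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
)
```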
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 80 | 2.4611 |
| No log | 2.0 | 160 | 2.0962 |
| No log | 3.0 | 240 | 1.9723 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "roberta_mrqa_v1", "results": []}]} | enriquesaou/roberta_mrqa_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T10:33:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us
| roberta\_mrqa\_v1
=================
This model is a fine-tuned version of FacebookAI/roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9723
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
46,
101,
5,
47
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |