Dataset schema (columns of each row below):

| Column | Type | Lengths / values |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | — |
| tags | sequencelengths | 1–1.84k |
| sha | null | — |
| created_at | stringlengths | 25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
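The card leaves this section as a placeholder; below is a minimal loading sketch, not the author's documented usage. The repo id comes from this card's metadata, but the task head is undocumented, so the bare `AutoModel` is used.

```python
from transformers import AutoModel, AutoTokenizer

# Sketch only: "scshin88/ManfromKorea_class" is the repo id from this card's
# metadata; the intended task head is undocumented, so we load the bare encoder.
repo_id = "scshin88/ManfromKorea_class"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```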
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | scshin88/ManfromKorea_class | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:33:46+00:00 | [
"1910.09700"
] | []
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
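The card leaves this section as a placeholder; as a rough illustration based on this repo's `text-generation` tag, a minimal sketch (generation settings are illustrative, not the author's recommendation):

```python
from transformers import pipeline

# Sketch only: repo id from this card's metadata; settings are illustrative.
generator = pipeline("text-generation", model="OwOOwO/dumbo-llamalfg11")
print(generator("Hello! Who are you?", max_new_tokens=64)[0]["generated_text"])
```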
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-llamalfg11 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T06:34:05+00:00 | [
"1910.09700"
] | []
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ketki0203/xlm-roberta-base-finetuned-wikitext2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1287
- Validation Loss: 0.0546
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
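As a rough illustration of the intended fill-mask use (an assumption based on this repo's `fill-mask` and `tf` tags; the checkpoint is assumed to keep XLM-RoBERTa's `<mask>` token):

```python
from transformers import pipeline

# Sketch only: repo id from this card; framework="tf" matches the TensorFlow
# weights this Keras card was generated for.
unmasker = pipeline(
    "fill-mask",
    model="Ketki0203/xlm-roberta-base-finetuned-wikitext2",
    framework="tf",
)
for prediction in unmasker("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```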
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1287 | 0.0546 | 0 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "xlm-roberta-base", "model-index": [{"name": "Ketki0203/xlm-roberta-base-finetuned-wikitext2", "results": []}]} | Ketki0203/xlm-roberta-base-finetuned-wikitext2 | null | [
"transformers",
"tf",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:34:09+00:00 | [] | []
null | null | ## WiNGPT2
[WiNGPT](https://github.com/winninghealth/WiNGPT2) is a GPT-based large language model for the medical vertical domain. It aims to integrate professional medical knowledge, healthcare information, and data, providing the healthcare industry with intelligent services such as medical Q&A, diagnostic support, and medical knowledge, to improve the efficiency of diagnosis and treatment and the quality of medical services.
## Changelog
[2024/04/24] Released the WiNGPT2-Llama-3-8B-Chat-AWQ and WiNGPT2-Llama-3-8B-Chat-GGUF quantized models
[2024/04/23] Released the WiNGPT2-Llama-3-8B-Base and WiNGPT2-Llama-3-8B-Chat models (Chinese-enhanced/multilingual) with evaluation results
[2024/04/01] Updated the WiNEval evaluation results
[2024/03/05] Open-sourced the 7B/14B-Chat-4bit model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat-AWQ)WiNGPT2-7B-Chat-4bit and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat-AWQ)WiNGPT2-14B-Chat-4bit.
[2023/12/20] Added a user WeChat group QR code, valid until December 27; scan to join.
[2023/12/18] Published results for WiNEval-MCKQuiz, Winning Health's medical model evaluation scheme.
[2023/12/12] Open-sourced the WiNGPT2 14B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Base)WiNGPT2-14B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat)WiNGPT2-14B-Chat.
[2023/11/02] [34B model platform test](https://wingpt.winning.com.cn/) and [join our WeChat discussion group](https://github.com/winninghealth/WiNGPT2/blob/main/assets/WiNGPT_GROUP.JPG)
[2023/10/13] Added a simple [Chatbot example](#部署) that supports basic multi-turn conversation.
[2023/09/26] Open-sourced WiNGPT2 with the 7B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Base)WiNGPT2-7B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat)WiNGPT2-7B-Chat.
## How to Use
### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "WiNGPT-Llama-3-8B-Chat"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)
model = model.eval()

# Build the prompt in WiNGPT's custom User:/Assistant: format (see "Prompting" below).
text = 'User:WiNGPT, 你好<|end_of_text|>\n Assistant:'
inputs = tokenizer.encode(text, return_tensors="pt").to(device)
outputs = model.generate(inputs, repetition_penalty=1.1, max_new_tokens=1024)
response = tokenizer.decode(outputs[0])
print(response)

## Output: 你好!今天我能为你做些什么?<|end_of_text|>  ("Hello! What can I do for you today?")
```
### Prompting
WiNGPT-Llama-3-8B-Chat uses a custom prompt format:
Roles: System/User/Assistant
chat_template:
```jinja2
"{% for message in messages %}{% if message['role'] == 'system' %}System:{% endif %}{% if message['role'] == 'user' %}User:{% endif %}{% if message['role'] == 'assistant' %}Assistant:{% endif %}{{ message['content'] }}<|end_of_text|>\n {% endfor %}Assistant:"
```
**Instruction prompt** example:
```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:
```
**Multi-turn dialogue** example:
```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:你好!今天我能为你做些什么?<|end_of_text|>\n User:你是谁?<|end_of_text|>\n Assistant:
```
**Translation** example:
```
System:作为医疗领域的智能助手,WiNGPT将提供中英翻译服务。用户输入的中文或英文内容将由WiNGPT进行准确的翻译,以满足用户的语言需求。<|end_of_text|>\n User:Life is short, you know, and time is so swift; Rivers are wide, so wide, and ships sail far.<|end_of_text|>\n Assistant:
```
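Since the template above is a standard Jinja2 chat template, it can be applied with `tokenizer.apply_chat_template`; a minimal sketch (assuming the template ships in the tokenizer config):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WiNGPT-Llama-3-8B-Chat")

messages = [
    {"role": "user", "content": "WiNGPT, 你好"},
]
# Renders the System:/User:/Assistant: format shown above; note the template
# itself always appends the trailing "Assistant:" generation prompt.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
# User:WiNGPT, 你好<|end_of_text|>
#  Assistant:
```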
## Model Card
#### Training Configuration and Parameters
| Name | Training strategy | Length | Precision | Learning rate | Weight_decay | Epochs | GPUs |
| ----------------------- | ------------------------------------ | ---- | ---- | ------ | ------------ | ------ | ------ |
| WiNGPT2-Llama-3-8B-Base | Continued pre-training (20 GB) | 8192 | bf16 | 5e-5 | 0.05 | 2 | A100*8 |
| WiNGPT2-Llama-3-8B-Chat | Fine-tuning/alignment (500k samples) | 8192 | bf16 | 5e-6 | 0.01 | 4 | A100*8 |
| Name | Quantization method | Bits | Size | Use case |
| ----------------------------------- | -------- | ---- | ------- | -------------------------------------- |
| WiNGPT2-Llama-3-8B-Chat-AWQ | AWQ | 4 | 5.33 GB | medium, balanced quality - recommended |
| WiNGPT2-Llama-3-8B-Chat-Q4_K_M.gguf | Q4_K_M | 4 | 4.58 GB | medium, balanced quality |
| WiNGPT2-Llama-3-8B-Chat-Q8_0.gguf | Q8_0 | 8 | 7.95 GB | minimal quality loss |
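For the GGUF files above, local inference is possible with, e.g., llama-cpp-python; a minimal sketch (the path assumes the Q4_K_M file from the table has been downloaded locally):

```python
from llama_cpp import Llama

# Sketch only: model_path assumes a local copy of the Q4_K_M file listed above.
llm = Llama(model_path="WiNGPT2-Llama-3-8B-Chat-Q4_K_M.gguf", n_ctx=8192)

# Prompt in WiNGPT's custom User:/Assistant: format (see "Prompting" above).
prompt = "User:WiNGPT, 你好<|end_of_text|>\n Assistant:"
out = llm(prompt, max_tokens=256, stop=["<|end_of_text|>"])
print(out["choices"][0]["text"])
```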
#### Training Data
About 20 GB of pre-training data and about 500k instruction fine-tuning/alignment samples; [details](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE).
## Chinese Medical Evaluation - WiNEval
Updated: 2024-04-23
| | Type | MCKQuiz | MSceQA |
| ----------------------------- | ---------------------- | ------- | ------ |
| **WiNGPT-Llama-3-8B-Base** | Continued Pre-training | 66.3 | / |
| Meta-Llama-3-8B | Pre-training | 37 | / |
| | | | |
| **WiNGPT-Llama-3-8B-Chat** | Finetuning/Alignment | 65.2 | 79.8 |
| Meta-Llama-3-8B-Instruct | Finetuning/Alignment | 49.8 | 76.3 |
| Meta-Llama-3-70B-Instruct-AWQ | Finetuning/Alignment | 73.5 | 78.6 |
| | | | |
*MCKQuiz (objective questions): 13,060 multiple-choice questions across 17 subject categories; the model is given the question and options and must output an answer, which is judged against the reference answer to compute accuracy.*
*MSceQA (subjective questions): scenario questions from specialized sub-domains covering eight major business scenarios, with 17 first-level and 32 second-level categories. Human/model raters assess answers for accuracy, relevance, consistency, completeness, and authoritativeness, scoring the generated answers against reference answers.*
[Other WiNEval results](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#2-%E5%8D%AB%E5%AE%81%E5%81%A5%E5%BA%B7%E5%8C%BB%E7%96%97%E6%A8%A1%E5%9E%8B%E6%B5%8B%E8%AF%84%E6%96%B9%E6%A1%88-winevalzero-shot)
### Enterprise Services
[Apply for an API key via the WiNGPT test platform, or get in touch with us](https://wingpt.winning.com.cn/)
## Limitations and Disclaimer
(a) WiNGPT2 is a large language model for the professional medical domain. It offers general users human-like AI doctor consultation and Q&A, as well as knowledge Q&A for general medicine. For medical professionals, WiNGPT2's answers concerning patient diagnosis, medication, and health advice are suggestions for reference only.
(b) You should understand that WiNGPT2 only provides information and suggestions and cannot replace the opinions, diagnoses, or treatment advice of medical professionals. Before acting on information from WiNGPT2, seek the advice of a doctor or other medical professional and evaluate the provided information independently.
(c) Information from WiNGPT2 may contain errors or inaccuracies. Winning Health makes no express or implied warranty as to the accuracy, reliability, completeness, quality, safety, timeliness, performance, or suitability of WiNGPT2. The results of and decisions based on your use of WiNGPT2 are your own responsibility, and Winning Health accepts no liability for damages caused to you by third-party causes.
## License
1. This project is licensed under the Apache License 2.0. Use of the model weights must also comply with the agreements of the base model [Llama-3-8B](https://github.com/meta-llama/llama3) and its [license](https://llama.meta.com/llama3/license); see its website for details.
2. Please cite this project when using it, including the model weights: https://github.com/winninghealth/WiNGPT2
## Contact Us
Website: https://www.winning.com.cn
Email: [email protected] | {"language": ["en", "zh"], "license": "apache-2.0", "tags": ["medical"]} | winninghealth/WiNGPT2-Llama-3-8B-Chat-GGUF | null | [
"gguf",
"medical",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T06:35:28+00:00 | [] | [
"en",
"zh"
]
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
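The card leaves this section as a placeholder; as a rough illustration matching this repo's `feature-extraction` tag, a minimal sketch (the mean-pooling choice is an assumption; the card does not specify how embeddings should be pooled):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep31"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer(["A sample sentence."], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state             # (batch, seq, dim)
mask = inputs["attention_mask"].unsqueeze(-1)              # (batch, seq, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling
print(embeddings.shape)
```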
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep31 | null | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:35:30+00:00 | [
"1910.09700"
] | []
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-4
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
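For reference, the settings listed above map onto Hugging Face `TrainingArguments` roughly as in the sketch below (not the project's actual training script; `output_dir` is hypothetical, and the Adam betas/epsilon listed above are the optimizer defaults):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-4",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=4,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```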
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-4", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T06:36:20+00:00 | [] | []
null | transformers |
# Uploaded model
- **Developed by:** Akirami
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
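A minimal loading sketch (assuming the uploaded weights load directly via Unsloth's `FastLanguageModel`; `max_seq_length` and the 4-bit flag are illustrative assumptions):

```python
from unsloth import FastLanguageModel

# Sketch only: repo id from this card; sequence length and 4-bit flag are assumptions.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Akirami/llama3-8b-orpo-truthy",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the fast inference path

inputs = tokenizer("Hello!", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```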
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Akirami/llama3-8b-orpo-truthy | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:38:05+00:00 | [] | [
"en"
]
automatic-speech-recognition | peft |
# whisper-medium-5k
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
--Original sentence:
集団内のすべての個体が特定の表現形質に関して同一である場合 それらは単形性と呼ばれます。
When all individuals in a population are identical with respect to a particular phenotypic trait, they are called monomorphic.
--sin2piusc/whisper-medium-5ksteps:
集団内のすべての個体が特定の表現形質に関して同一である場合 それらは単形性と呼ばれます
When all individuals in a population are identical with respect to a particular phenotypic trait, they are called monomorphic.
--openai/whisper-medium:
集団内のすべての個体が特定の表現形式に関して同一である場合、それらは単形性と呼ばれます。
If all individuals in a population are identical with respect to a particular form of expression, they are called monomorphic.
---------
--Original sentence:
I can be quite unsightly when I'm drunk, so I abstained from alcohol and mainly drank water.
--sin2piusc/whisper-medium-5ksteps:
When I drink alcohol, I can become quite unsightly, so I ordered a glass of water and stopped drinking.
--openai/whisper-medium:
I don't like drinking alcohol, so I asked for water and avoided it.
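Comparisons like the above can be reproduced by stacking the LoRA adapter on the base model; a minimal sketch (assuming this repo holds a PEFT adapter for openai/whisper-medium, per the card's base_model metadata; the audio file name is hypothetical):

```python
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "sin2piusc/whisper-medium-5k-jp")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

audio, sr = librosa.load("sample.wav", sr=16000)  # hypothetical input file
features = processor(audio, sampling_rate=sr, return_tensors="pt").input_features
ids = model.generate(features, language="ja", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```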
## Training procedure
On a laptop running Windows.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3706 | 0.3697 | 200 | 1.1454 |
| 0.7963 | 0.7394 | 400 | 0.5219 |
| 0.2503 | 1.1091 | 600 | 0.2178 |
| 0.2062 | 1.4787 | 800 | 0.2005 |
| 0.1867 | 1.8484 | 1000 | 0.1869 |
| 0.1644 | 2.2181 | 1200 | 0.1738 |
| 0.1501 | 2.5878 | 1400 | 0.1630 |
| 0.1386 | 2.9575 | 1600 | 0.1524 |
| 0.1186 | 3.3272 | 1800 | 0.1458 |
| 0.1086 | 3.6969 | 2000 | 0.1424 |
| 0.1019 | 4.0665 | 2200 | 0.1364 |
| 0.0871 | 4.4362 | 2400 | 0.1347 |
| 0.085 | 4.8059 | 2600 | 0.1326 |
| 0.0746 | 5.1756 | 2800 | 0.1336 |
| 0.0729 | 5.5453 | 3000 | 0.1312 |
| 0.0688 | 5.9150 | 3200 | 0.1316 |
| 0.0598 | 6.2847 | 3400 | 0.1328 |
| 0.0574 | 6.6543 | 3600 | 0.1340 |
| 0.0598 | 7.0240 | 3800 | 0.1336 |
| 0.0481 | 7.3937 | 4000 | 0.1356 |
| 0.0514 | 7.7634 | 4200 | 0.1366 |
| 0.0465 | 8.1331 | 4400 | 0.1382 |
| 0.0428 | 8.5028 | 4600 | 0.1378 |
| 0.043 | 8.8725 | 4800 | 0.1384 |
| 0.0425 | 9.2421 | 5000 | 0.1389 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["ja"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer", "ctranslate2"], "datasets": ["google/fleurs", "mozilla-foundation/common_voice_16_1", "joujiboi/japanese-anime-speech"], "metrics": ["wer"], "base_model": "openai/whisper-medium", "pipeline_tag": "automatic-speech-recognition", "model-index": [{"name": "whisper-medium-5k", "results": []}]} | sin2piusc/whisper-medium-5k-jp | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ctranslate2",
"automatic-speech-recognition",
"ja",
"dataset:google/fleurs",
"dataset:mozilla-foundation/common_voice_16_1",
"dataset:joujiboi/japanese-anime-speech",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T06:39:26+00:00 | [] | [
"ja"
] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #ctranslate2 #automatic-speech-recognition #ja #dataset-google/fleurs #dataset-mozilla-foundation/common_voice_16_1 #dataset-joujiboi/japanese-anime-speech #base_model-openai/whisper-medium #license-apache-2.0 #region-us
| whisper-medium-5k
=================
This model is a fine-tuned version of openai/whisper-medium on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1389
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
--Original sentence:
集団内のすべての個体が特定の表現形質に関して同一である場合 それらは単形性と呼ばれます。
When all individuals in a population are identical with respect to a particular phenotypic trait, they are called monomorphic.
--sin2piusc/whisper-medium-5ksteps:
集団内のすべての個体が特定の表現形質に関して同一である場合 それらは単形性と呼ばれます
When all individuals in a population are identical with respect to a particular phenotypic trait, they are called monomorphic.
--openai/whisper-medium:
集団内のすべての個体が特定の表現形式に関して同一である場合、それらは単形性と呼ばれます。
If all individuals in a population are identical with respect to a particular form of expression, they are called monomorphic.
---
--sin2piusc/whisper-medium-5ksteps:
When I drink alcohol, I can become quite unsightly, so I ordered a glass of water and stopped drinking.
--openai/whisper-medium:
I don't like drinking alcohol, so I asked for water and avoided it.
--Original sentence:
I can be quite unsightly when I'm drunk, so I abstained from alcohol and mainly drank water.
Training procedure
------------------
On a laptop running Windows.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 8
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.3
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.41.0.dev0
* Pytorch 2.2.2+cu118
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.3\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #ctranslate2 #automatic-speech-recognition #ja #dataset-google/fleurs #dataset-mozilla-foundation/common_voice_16_1 #dataset-joujiboi/japanese-anime-speech #base_model-openai/whisper-medium #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.3\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | setfit |
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves the two steps below (see the code sketch after the list):
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
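As a rough illustration of these two steps with setfit 1.0.x (the example texts and labels here are made up, not the training data used for this model):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Made-up few-shot examples; the real training data were call transcripts.
train_ds = Dataset.from_dict({
    "text": ["a transcript labeled positive", "a transcript labeled negative"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=4, num_epochs=1),
    train_dataset=train_ds,
)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fits the LogisticRegression head
```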
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 1 | <ul><li>'[0:00:00] null: \n\n[0:00:03] spk_0: hello हाँ very good afternoon sir विशाल महाले जी से बात हो रही हैं हाँ बोल रहा। विशाल sir धीरज बोल रहा हूँ i d f c फस्ट bank sir you account relationship manager हाँ बोलिये हाँ! जैसे कि sir आपका saving account हैं sir dfc फस्ट bank में तो आपके account के लिए मुझे relationship manager assign किया हैं sir okay so आपको कुछ भी account related help या कुछ भी assistance required हैं sir bank की तरफ से तो आपको जिस number पर direct ही print कर सकते हो okay ये number save कर लेते धीरज name से okay ये number को add i prefer bank के app में भी हो जाएगा। वहाँ pay मेरा name official number और official mail id mentioned हैं तो वहां से भी आप मुझे verify कर सकते हो okay okay okay और जैसे की आप saving account hold करो sir कुछ update देने से आपके account related तो अभी आइडीएफसी फस्ट bank में saving account and fixed deposit के जो interest rate हैं वो increase की हैं। sir okay so अभी आप आपको saving account pay up to seven percent annual interest rate मिल जाता हैं sir जो आपके account में month wise जो आपके account में credit हो जाता हैं। okay monthly basis pay credit हो जाता हैं जो अधर bank के साथ अगर compare करोगे अधर bank आपको तीन महीने में एक बार interest provide करती हैं। ऐ dfc फस्ट तक आपको month wise interest provide करती हैं। sir okay तो compounding की वजह से आपको यहाँ pay maximum interest rate मिल जाता secondly for fix deposit जो maximum interest चल रहा हैं वो seven point five percent interest rate चल रहा हैं further tenure one year one day two five fifty days okay तो अगर आप fix deposit भी करते हो तो आपको maximum interest rate मिलेगा। secondly family में कोई senior citizen होंगे तो senior citizen को point five percent extra मिल जाता हैं तो उनको eight percent annual interest rate मिल जाएगा okay okay और जैसे कि आपका account हैं sir family members का भी account हैं हमारे साथ नहीं नहीं open करने हैं विशाल sir नहीं बताता हूँ मैं आपको करना okay क्योंकि जैसे की आपके parents की भी अगर account होंगे other banks में okay तो जैसे की अधर bank से ज्यादा आपको idea first bank आपको maximum interest rate provide कर रहे हैं okay तो यहाँ pay आप account open कर सकते हो। ठीक हैं and yeah primary bank यूज़ करो sir idea is called an order को use कर रहे sir नहीं हैं ना बहुत सारे phone। okay तो मतलब अगर कोई other banks में sir funds maintain कर रहे होंगे। okay तो आप इस bank में जो deep sea bank के सिमी account में आप fund transfer कर सकते हो तो आपका ये sir maximum interest मिल जाएगा। आपका ही फायदा हो जाएगा इसमें okay ठीक हैं और visual suggest request कर रहा number save कर लीजिए कभी भी help लगती हैं तो आप मुझे banking working days nine thirty two six thirty आप इस number pay आप मुझे contact करते लेते हैं। okay चलेगा। ठीक हैं okay thank you which। also thank you for banking thirty first and have great day sir thank you।\n\n'</li><li>'[0:00:00] null: \n\n[0:00:04] spk_0: hello good morning every getting is mr kamlesh हाँ जी या company जी मैं chetan बात कर रहा हूं half फस्ट bank से तो ये जो मेरा call तो आपको inform करने के लिए था कि आपका जो f फस्ट bank का जो account हैं उसके लिए मुझे dedicated service manager appoint किया गया हैं। okay तो आज के बाद आपको accountant कोई भी information चाहिए होगी या फिर आपको transaction related कोई भी information चाहिए होगी तो आप मुझे contact कर सकते हैं। मैं आपको help कर दूंगा। company जी, आप हमारे जो mobile banking यहाँ pay वो इस्तेमाल कर रहे हैं। हम्म। okay तो जैसे ही mobile banking app pay आप login करेंगे ना तो first page pay left hand side top corner pay आपको तीन छोटे red lines दिखेंगे आपको वहां click करना हैं जैसे आप वहां pay click करेंगे आपको 
option आएगा मै relationship team जैसे मै relation team के ऊपर click करेंगे आपको मेरा name चेतन सिंह मेरा number और मेरा email id आपको दिख जाएगा sir okay तो वहाँ से आप मुझे contact कर सकते हैं। इसके अलावा sir आपका जो saving account हैं उसके ऊपर में आपको update देना चाह अगर अभी जो rate of interest हैं वो revise हो चुके हैं। हम आपको offer कर रहे हैं three percent rate of interest per annum अगर आप balance maintain रखते हैं एक लाख तक अगर आप एक लाख से लेके पांच लाख के बीच में balance maintain करते हैं तो हम आपको four percent rate of interest per annum provide कर रहे हैं और अगर आप balance रख रहे हैं between five lakhs two twenty five crores तो हम आपको offer कर रहे हैं seven percent rate of interest per annum अच्छा और sir fix deposit के ऊपर भी हम आपको seven point five percent rate of interest per annum provide कर रहे हैं जिसका tenure होना चाहिए। एक साल एक दिन से लेकर दो साल के लिए okay तो complete आप जो dfc फस्ट bank का account use कर रहे हैं। ये आपका primary account हैं या फिर आपका primary account कोई और हैं जिससे आप ज्यादा से ज्यादा transaction करते हैं। daily basis पर वो primary आपका i d f c is really great sir ये मैंने इसके लिए पूछा था अगर आप आपके bank का statement देखेंगे तो हर महीने हमारी bank की तरफ से आपको interest rate का payout आता हैं sir okay तो क्या होता हैं ना की अगर आप इसमें ज्यादा funds रखेंगे इस account में तो आपका जो interest rate का payout हैं sir वो भी आपका बढ़ जाता हैं sir okay इसके अलावा sir हम आपको twenty eight free banking services provide कर रहे हैं। जैसे की अगर आप dfc फस्ट bank का card किसी भी और bank के atm machine में use करेंगे और अगर आपने multiple transaction भी कर लिया ना sir तो भी आपको कोई charges नहीं लगेंगे। अच्छा okay इसके अलावा आपको एसएमएस alert, stop cheque, issue of demand draft, cheque bounce ईसीएस return debit, card issue and ऐसी इत्यादि सेवाएं आपको free of cost में provide की जाती हैं। इसके लिए कोई charges नहीं हैं। अगर आप दूसरे bank में देखेंगे तो इन सारी services के लिए charges हैं but हमारी bank आपको ये सब charges नहीं देती। sir आप हमारा जो debit card हैं वो इस्तेमाल करते हैं। sir okay तो मैं आपको बता दूँ sir अभी फिलहाल तो debit card pay बहुत से offer चल रहे हैं। जैसे अगर आप हमारा debit card insta mart pay यूज़ करते हैं तो आपको flat hundred rupees off मिलता ऑन minimum purchase of six ninety nine अगर आप d mart में यूज़ करते हैं तो आपको five percent cashback मिलता हैं। up to rupees two fifty और अगर आप यात्रा dot com से flight book करना चाहते हैं या फिर trip से book करना चाहते हैं तो card की मदद से आपको twelve percent instant discount मिल जाएगा। sir upto rupees one thousand two hundred जिसका minimum purchase होना चाहिए। five thousand sir food and dining pay भी बहुत से offers हैं जैसे zomato pay हैं dominos पर हैं dominos pay flat one fifty off हैं अगर आप seven hundred के ऊपर order करते हैं फिर wow momos हैं फिर fresh menu barbeque nation ऐसे बहुत से इसपे offer चल रहे हैं sir और अगर आप shopping का देख रहे हैं तो first cry dot com pay भी ना मतलब thirty five percent off चल रहा हैं sir और फिर बाद में tata click you फिर इसके बाद मतलब पूमा pay mucci metro इन सारी चीज़ों pay sir आपको discount मिल रहा तो मैं एक काम करता हूँ ये जो offers हैं आपको email कर देता हूँ तो आपके free time में ना sir एक बार उसको देख लीजिये ठीक हैं okay आपका email id एक बार confirm करेंगे। kamlesh at gmail dot that great <s>। और आपको बताना चाहूँगा अभी आपके mobile app pay आपका communication address email i d contact number nominee details और signature ये update कर सकते हैं manage profile option में app में जाकर sir ठीक okay इसके अलावा और कुछ सहायता कर सकता हूँ मैं आपकी okay sir sir sir। अगली 
बार अगर मुझे कभी आपको call करके आपके account कोई information देनी हो तो आप से सही समय कब होता हैं? बात करने का कोई भी करो okay done sir। okay thank you आपका\n\n'</li><li>"(SPEAKER_00 3.5 - 8.899) Good evening calling from IDFC। Am I speaking to Mr। तरूण कुमार?\n(SPEAKER_01 9.63 - 10.439) Yes you are।\n(SPEAKER_00 11.069 - 18.86) So I am your relationship manager for this account। thirty AM to six PM\n(SPEAKER_01 19.59 - 20.69) Sorry\n(SPEAKER_00 21.54 - 27.379) I will be available from nine thirty AM to six PM on all bank working days as a single point of your contact।\n(SPEAKER_01 28.54 - 30.42) Okay, what is your direct number there?\n(SPEAKER_00 31.28 - 78.389) I'll just share my number with you which you will receive on your registered mobile number। Just hold on please just be on call। See I have shared it। So, apart from that, I would also like to let you know that you have maintained healthy balance in your account, which is great because you must be aware that you will receive interest over it। We offer monthly credit of interest on your saving account and interest is usually paid on quarterly basis by other bank saving account but we in case of our bank you will receive interest or interest। So and not only this, we charge zero fee on twenty-eight most commonly use saving account services including IMPS, NEFT RTGS that I know overall twenty-eight services on which we have absolutely no charges, absolutely free।\n(SPEAKER_01 78.989 - 80.269) Right, right, right I am aware about\n(SPEAKER_00 80.269 - 87.599) that। So, may I know which bank do you prefer the most when it comes to banking apart from IDFC First Bank?\n(SPEAKER_01 88.47 - 89.55) Axis।\n(SPEAKER_00 90.379 - 94.86) Axis bank, May I know what is the interest that you earn from Axis bank or other bank?\n(SPEAKER_01 95.139 - 115.779) I am not getting quite good interest but my transaction are going into the Axis bank ah like like the holding balance is not quite good in the Axis that is in my Kotak bank, okay? So, if you are asking about the holding the balance which account I am holding the balances more। So\n(SPEAKER_00 117.12 - 118.26) bank interest interest।\n(SPEAKER_01 118.43 - 119.62) yes\n(SPEAKER_00 120.269 - 126.669) yes because you can visit any of our website of IDFC first bank sorry our website of IDFC first\n(SPEAKER_01 126.669 - 127.019) bank\n(SPEAKER_00 127.19 - 144.74) where we have in saving account session we have ah simulator which is nothing but a calculator where in you could actually calculate the interest that you earn from other bank where is the interest that you are earning from our bank। And I am definitely sure that the interest that you are earning from our bank will be higher and better as compared to other bank।\n(SPEAKER_01 145.059 - 157.289) Okay, I will keep my balances into AFC bank account from the next time from the next month। However I am also looking for one more thing like we have the corporate account with the Kotak bank as well।\n(SPEAKER_00 157.699 - 158.27) Okay।\n(SPEAKER_01 158.5 - 161.46) Current account I mean okay my company bank\n(SPEAKER_00 161.46 - 162.729) account। ah\n(SPEAKER_01 162.899 - 187.47) but we are not quite satisfied with the services of the Kotak bank some most of the time they servers are down okay and our balances are stuck। ah However, we also have the ICICI bank account as well as that we are using as in the current account we also but we also need one more account, okay। ah So, are you also taking care of the current accounts? 
or not current\n(SPEAKER_00 187.47 - 193.52) account yes if you want to open any current account, you can let me know। I will guide you regarding the same।\n(SPEAKER_01 194.029 - 196.339) So, I would like to go ahead with that।\n(SPEAKER_00 196.899 - 200.589) So, I'll just forward your request for current account opening।\n(SPEAKER_01 201.119 - 201.88) hm\n(SPEAKER_00 201.88 - 206.139) तो branch which will be convenient for you will be New Friends colony branch, right?\n(SPEAKER_01 206.949 - 223.169) No, that was the very first branch I believe the NFC branch। ah then the first time we opened the IDFC bank account I remember ah very quite close bank branch is the sector sixty ah to Noida।\n(SPEAKER_00 224.1 - 226.49) sector sixty sixty just hold on।\n(SPEAKER_01 227.139 - 235.339) Sector sixty two and even the sector sixty three as block you guys have a branch that will be more ah near to me।\n(SPEAKER_00 236.059 - 240.6) okay just hold on I just need to check it। sector sixty-two and sixty three ना नोएडा\n(SPEAKER_01 241.19 - 246.41) sector sixty-three would be great। I know that sector sixty three as block you guys have the branch there।\n(SPEAKER_00 249.24 - 249.639) sector\n(SPEAKER_01 249.639 - 252.13) के। हम्म।\n(SPEAKER_00 253.419 - 261.92) six three हम्म। Yes, I am trying to search it, so I will forward this request of yours for current account।\n(SPEAKER_01 262.549 - 262.97) apart from\n(SPEAKER_00 262.97 - 291.649) that I also need to update you about several things just hold on। Sector sixty-three। Okay। So, apart from that, I would just like to let you know that our saving account rate has been revised। So for amount ranging from five lakh to fifty crore it is seven percent per annum for amount ranging from one lakh to five lakh it is four percent per annum and less than one lakh it is three percent per annum।\n(SPEAKER_01 292.29 - 293.609) What is that? what are you talking\n(SPEAKER_00 293.609 - 295.609) about? saving account, saving account rate\n(SPEAKER_01 296.609 - 297.25) saving amount\n(SPEAKER_00 297.25 - 318.209) rate। okay not only this our FD rate has also been revised so the best FD rate you can receive is for eighteen months one day to two years which is seven point seven five percent per annum। and senior citizen we have zero point five zero percent extra which brings to eight point two five percent per annum।\n(SPEAKER_01 320.649 - 321.22) हम्म।\n(SPEAKER_00 321.76 - 336.609) and you must be aware that we offer monthly credit of interest on your saving account which are zero fee on twenty most commonly saving account तो other bank usually charges service charges fees on services but we don't have any charges applicable on\n(SPEAKER_01 337.359 - 337.959) the amount is\n(SPEAKER_00 337.959 - 339.88) not applicable on\n(SPEAKER_01 339.88 - 340.16) that\n(SPEAKER_00 341.88 - 342.32) so\n(SPEAKER_01 342.32 - 342.959) the amount is\n(SPEAKER_00 342.959 - 378.829) not applicable for any services you can keep that amount in your account itself so that even on that amount you can earn interest। It will be beneficial for you। Am I audible? Am I audible? Mr। तरूण कुमार, am I audible? Do any of your family members have account? to any of your family members account with us। Mr। तरूण कुमार।"</li></ul> |
| 0 | <ul><li>'[0:00:00] null: \n\n[0:00:08] spk_0: good afternoon sir this one is from idea आपसे फस्ट bank speaking with मैं screen कराना। hello madam\n\n[0:00:16] spk_1: बताइये hello हाँ जी बोलिए।\n\n[0:00:22] spk_0: sir ये madam का account हैं ना account के लिए मैं\n\n[0:00:25] spk_1: relationship\n\n[0:00:27] spk_0: manager मैंने अपने आप को introduce करने के लिए और updates देने के लिए call किया था। sir bank के related thank you sir actually sir ये number तो save कर लीजियेगा अगर जाकर अगर आपको account related queries concern तो financial requirement होगा ना bank से directly मुझे reached कर सकते हैं sir और update ये हैं की sir अभी fixed deposit और recurring deposit के ऊपर rate of interest हुआ हैं\n\n[0:00:58] spk_1: ok\n\n[0:00:59] spk_0: so\n\n[0:01:00] spk_1: one\n\n[0:01:00] spk_0: lakh। august में july first july thirty first\n\n[0:01:07] spk_1: क्या हैं बताना एक बार\n\n[0:01:09] spk_0: तो अगर आप ten lakhs से ज्यादा maintain करते हैं ना sir upto five year तक तो seven percent rate of interest दिया जा रहा हैं। sir interest every month credit होगा sir जैसे कि आप खुद check कर सकते हैं और आपके statement हैं\n\n[0:01:23] spk_1: statement seven।\n\n[0:01:26] spk_0: हां savings account के ऊपर हैं sir अगर minimum amount से लेकर up to ten lakhs तक अगर maintain करेंगे तो आपको four percent interest मिलेगा। sir\n\n[0:01:36] spk_1: ठीक हैं।\n\n[0:01:38] spk_0: तो अगर आप आप अभी अच्छा ये balance maintain कर रहे हो तो अगर आपके पास ideal funds हैं तो आपके account में transfer कर के रखिये sir ताकि every month जो hay interest मिल सके। इसके अलावा अगर आप चाहे तो f d और भी book कर सकते हैं। sir fixed deposit और recurring deposit वहां पर भी आपको seven point five percent interest मिलेगा sir अगर एक साल एक दिन से लेकर आप five fifty से days तक book करेंगे तो\n\n[0:02:04] spk_1: okay okay point ठीक\n\n[0:02:07] spk_0: हैं sir और send\n\n'</li><li>'[0:00:00] null: \n\n[0:00:03] spk_0: hello हां मुझे राजेश good afternoon मैं धारा पर एक बात कर रही हूँ bsc फस्ट से कैसे हैं आप? 
बस बढ़िया राजेश जी आज की आपका जो account हैं यहाँ pay जिनका name हैै जा किशन राजेश मई so इस account की मैं voucher relationship manager assign होती हूँ तो अभी से आपके account में आपको कभी भी कोई भी assistance की requirement होती हैं। ये आपको banking services के regarding आपको help चाहिए होगी तो आप friend या landline number तो मुझे phone कर सकता हैं। i am so sorry sir मेरी आवाज के लिए अभी के लिए। so sorry sir मुझे आपको ये बोलना हैं करके कि मैं जैसे भी आपका account go through कर देती हूँ तो आप काफी अच्छा balance maintain कर रहे हैं हमारे साथ correct एक average आपका sir one lakh के ऊपर show करता मेरे को मैं one one lakh twenty thousand just around for हैं तो यहाँ से मुझे आपको बताना करके की यही चीज़ जो आपकी funding आप bank के साथ रख रहे हो। आपको पता हैं कि आपका monthly interest मिल रहा इसी amount pay आपके average के ऊपर correct तो यहाँ pay sir मैं आपको एक just suggestion share करना चाहती हूँ sir कि जो month का and होता हैं ना। twenty fifth thirty first द month। तो उस time pay sir अगर आप अपना जो fund होगा ठीक हैं जो आपको लग रहा हैं कि चलो ये next month pay करना वो आप अपने account में रख सकते हैं। sir तो जैसे की आपका average increase हो जायेगा तो उसके through आपको interest अच्छा मिल जाएगा। because basically अगर हम बाकी banks के साथ compare करें तो। आपको पता हैं sir की वहाँ pay amount आपको जो interest मिलता हैं वो quarterly मिलता correct एक minute चल रहा हूँ समझ\n\n[0:02:00] spk_1: व्यक्ति ने call करी और छे limit अः मारो call hold पर रख या छे कृपा करी line पर रहो। आप जिस व्यक्ति से बात कर रहे हैं उन्होंने आपके call को hold पर रखा हैं। कृपया line पर बने रहे द person you are speaking with has put your call ऑन hold। please stay online तमिन जी व्यक्ति ने call करी और छे limit अः मारो call hold पर रख या छे कृपा करी, line पर रहो। आप जिस व्यक्ति से बात कर रहे हैं उन्होंने आपके call को hold पर रखा हैं। कृपया line पर बने रहे।\n\n'</li><li>"[0:00:00] null: \n\n[0:00:05] spk_0: Hello.\n\n[0:00:07] spk_1: Hello.\n\n[0:00:08] spk_0: Hi. My name is Suzy. Okay. I'm calling you from I D f c first bank. Am I speaking with Mister Mohammad? Babe, Russia?\n\n[0:00:18] spk_1: No, no, our our brother.\n\n[0:00:21] spk_0: Aha!\n\n[0:00:22] spk_1: Hello. Is\n\n[0:00:23] spk_0: he your? brand.\n\n[0:00:27] spk_1: he is my brother.\n\n[0:00:29] spk_0: Oh, okay. May I know who am I speaking to, sir?\n\n[0:00:34] spk_1: My name is Muhammad.\n\n[0:00:37] spk_0: okay?\n\n[0:00:42] spk_1: Hello.\n\n[0:00:43] spk_0: Yes, He's your brother. Okay. Uh, sir just wanted to Ah, ask you that. May I get his number or anything?\n\n[0:00:52] spk_1: what number\n\n[0:00:54] spk_0: customers number Mohammad. Mohammad says. Number can I get I can I get get his number to reach out to him? actually, I'm calling from I. D. F C First Bank. He's a relationship manager for NRI accounts.\n\n[0:01:09] spk_1: Okay, fine. number is not devil.\n\n[0:01:13] spk_0: you don't have\n\n[0:01:14] spk_1: WhatsApp. number only WhatsApp. your call.\n\n[0:01:20] spk_0: Okay? No problem. You can send. Give me the number. Is this the same number available on WhatsApp also?\n\n[0:01:26] spk_1: this is This is my number only.\n\n[0:01:29] spk_0: Oh, okay. So can you tell me his WhatsApp number so I can drop him on WhatsApp message? This is your number?\n\n[0:01:37] spk_1: No, just call them.\n\n[0:01:39] spk_0: I want Sorry, handy man.\n\n[0:01:42] spk_1: no English and Camille\n\n[0:01:44] spk_0: only Tamin. I want Mohammad side Braam Bouchar. Mobile number so I can call him or I can. What's up? Him? Do you have his number?\n\n[0:01:57] spk_1: Yeah, Yeah, avid Bro. 
This is this number I've given to my brother. Know you were calling this number?\n\n[0:02:03] spk_0: the same number is with your brother also.\n\n[0:02:07] spk_1: No, bro. You calling from phone number? Know something? Double two. Done. This number.\n\n[0:02:12] spk_0: No, sir. This number cannot be contacted. This is the landline number. Recording number recorded line. This\n\n[0:02:19] spk_1: is Okay, fine. There is any option number.\n\n[0:02:24] spk_0: Okay. Can you take down my number? My number? nine, Double zero.\n\n[0:02:31] spk_1: wait second.\n\n[0:02:32] spk_0: Yeah, yeah.\n\n[0:02:40] spk_1: tell me.\n\n[0:02:41] spk_0: Nine Double zero\n\n[0:02:43] spk_1: nine Double zero\n\n[0:02:44] spk_0: 431.\n\n[0:02:46] spk_1: 431\n\n[0:02:48] spk_0: 2929\n\n[0:02:51] spk_1: 2949 your name?\n\n[0:02:56] spk_0: Sue Yoga s u Y o G.\n\n[0:03:03] spk_1: Yes. We Y\n\n[0:03:04] spk_0: S u\n\n[0:03:07] spk_1: Y l\n\n[0:03:08] spk_0: Y O\n\n[0:03:11] spk_1: fine.\n\n[0:03:14] spk_0: Okay, Please provide this number to the customer and tell him to call me. I am the relationship manager and I am available on WhatsApp. Also on this number.\n\n[0:03:23] spk_1: Do you want to propose too,\n\n[0:03:25] spk_0: sir? Just wanted to check with the customer that how is his banking going on? Good. Does he require any sort of help from my end or in future? If he needs any help pertaining to the account? I am the one point of contact and relationship manager of his account.\n\n[0:03:42] spk_1: Okay, fine.\n\n[0:03:43] spk_0: Yeah, sir. Can you tell me again? Your name, sir? I am so sorry.\n\n[0:03:48] spk_1: Sorry.\n\n[0:03:50] spk_0: your name is sad. I am\n\n[0:03:52] spk_1: a right. Yes.\n\n[0:03:54] spk_0: Okay. Not a problem, sir. Yes, sir. Thank you so much for giving a valuable time. Please provide this number to the customer. Okay.\n\n[0:04:01] spk_1: Okay. Fine.\n\n[0:04:02] spk_0: Okay. Thank you so much.\n\n"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.4 |
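To reproduce an accuracy figure on a labeled split of your own (the texts and labels below are placeholders), something along these lines should work:

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("Megatron17/setfit-paraphrase-mpnet-base-v2-sst2")

# Placeholder evaluation split; substitute real transcripts and 0/1 labels.
eval_texts = ["first held-out transcript", "second held-out transcript"]
eval_labels = [1, 0]

preds = model.predict(eval_texts)
print(accuracy_score(eval_labels, [int(p) for p in preds]))
```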
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Megatron17/setfit-paraphrase-mpnet-base-v2-sst2")
# Run inference
preds = model("""[0:00:00] null:
[0:00:03] spk_0: good afternoon. Am I speaking to supra Priya, ma'am? ma'am, this is Hi, Krishna. Ma'am, Your A newly appointed as your relationship manager from I D S C Falls Bank.
[0:00:15] spk_1: okay?
[0:00:17] spk_0: Yeah, ma'am. In future. So whenever you have any queries or whenever you have any requirement Okay, you can contact me. So
[0:00:24] spk_1: in terms of
[0:00:25] spk_0: account related information or if you want to go for any kind of loans, even for investment related suggestions for a building. Yeah. Allow me banking and tell on the name.
[0:00:37] spk_1: Yeah, it's
[0:00:39] spk_0: Okay, so every month interest is credit out the number. so whatever the balance you maintaining. Okay. So you'll get interest, ma'am.
[0:00:52] spk_1: Yeah. Yeah, I think I'm getting it.
[0:00:54] spk_0: Okay. And how much rate of interest are you getting at all?
[0:00:59] spk_1: Ah, it's okay. I like. I'm not that much into it. My husband take care of all that.
[0:01:04] spk_0: Okay. Okay, ma'am. I'll give you the information and please inform to your husband as well. Because all banks are giving a quarterly basis. Interest, ma'am. Okay. So that too only maximum 3%. But only our bank is giving a monthly interest. Credit that the minimum 4% per annum.
[0:01:23] spk_1: so
[0:01:24] spk_0: war and about 10 lakhs. If you maintain up to Pike Row, we are providing 7% rate of interest, ma'am. again it monthly. Okay, so if you aware about this definitely will maintain a mobile because you'll get a more interest credit year. And even if you have any idle funds, please do fixed deposits. We are giving 7.5 rate of interest for just one year. One day, ma'am.
[0:01:49] spk_1: Okay.
[0:01:51] spk_0: So your husband also holding account address?
[0:01:53] spk_1: No, I don't think so. He is like I don't know about.
[0:02:01] spk_0: OK, ma'am. Yeah, but please save my number. Okay? You can able to see my details in net banking and mobile app. So there is an option called my relationship ing.
[0:02:10] spk_1: Okay.
[0:02:11] spk_0: Yeah. So just save my number, ma'am, okay? Yeah. Thank you, ma'am. Thanks for your time. Have a nice day.
")
```
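Under the hood, calling the model embeds the text with the Sentence Transformer body and classifies the embedding with the LogisticRegression head. A rough equivalent, using the attribute names from setfit 1.0.x, is:

```python
# Rough equivalent of model(...); not needed in normal use.
embeddings = model.model_body.encode(["<transcript text>"])
preds = model.model_head.predict(embeddings)
```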
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 137 | 709.3718 | 2510 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 37 |
| 1 | 41 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
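These fields correspond to setfit's `TrainingArguments`; a hedged reconstruction of the configuration (fields left at their defaults are omitted) would be:

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Reconstruction of the settings listed above; a sketch, not the original script.
args = TrainingArguments(
    batch_size=(4, 4),                  # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    seed=42,
)
```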
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.2239 | - |
| 0.0641 | 50 | 0.2657 | - |
| 0.1282 | 100 | 0.1753 | - |
| 0.1923 | 150 | 0.22 | - |
| 0.2564 | 200 | 0.2309 | - |
| 0.3205 | 250 | 0.1526 | - |
| 0.3846 | 300 | 0.3479 | - |
| 0.4487 | 350 | 0.0581 | - |
| 0.5128 | 400 | 0.0137 | - |
| 0.5769 | 450 | 0.0013 | - |
| 0.6410 | 500 | 0.005 | - |
| 0.7051 | 550 | 0.001 | - |
| 0.7692 | 600 | 0.0023 | - |
| 0.8333 | 650 | 0.0011 | - |
| 0.8974 | 700 | 0.0001 | - |
| 0.9615 | 750 | 0.0004 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "widget": [{"text": "[0:00:00] null: \n\n[0:00:14] spk_0: sir. This is manna calling from my dear. Thank you for the confirmation, sir. I've been appointed as your dedicated service manager for\n\n[0:00:22] spk_1: your account.\n\n[0:00:24] spk_0: and I request you to save my contact number, which is appearing on your\n\n[0:00:27] spk_1: phone.\n\n[0:00:28] spk_0: and I will be available in all bank working\n\n[0:00:30] spk_1: with from\n\n[0:00:31] spk_0: morning 930\n\n[0:00:32] spk_1: evening 6 p.m.\n\n[0:00:34] spk_0: You can also check my contact number and email I d. On your mobile banking.\n\n[0:00:39] spk_1: on\n\n[0:00:39] spk_0: the left top corner. That will be an option called my relationship\n\n[0:00:44] spk_1: Team. If\n\n[0:00:45] spk_0: you click on, that means you can able to see my\n\n[0:00:47] spk_1: details. and\n\n[0:00:49] spk_0: answer. I would like to inform you one more\n\n[0:00:51] spk_1: thing. Actually,\n\n[0:00:53] spk_0: our IDs see, First Bank has increased to the\n\n[0:00:55] spk_1: both\n\n[0:00:56] spk_0: savings account Interstate as well as fixed\n\n[0:00:58] spk_1: deposit interest rates from this. we\n\n[0:01:01] spk_0: are providing up to 7% per annum for the amount which you are maintaining\n\n[0:01:04] spk_1: in your savings account.\n\n[0:01:06] spk_0: that's also on slap basis.\n\n[0:01:07] spk_1: From on rupee to 10\n\n[0:01:09] spk_0: lakhs, you will get 4% Parana and the amount\n\n[0:01:11] spk_1: which are maintaining\n\n[0:01:12] spk_0: more than 10 lakhs up to five cross you will get 7% Parana. it will get calculated on daily basis and you will get the interest amount on\n\n[0:01:19] spk_1: monthly basis.\n\n[0:01:22] spk_0: and cancer. And for fixed deposit, we are providing 7.5% approve, 7.5% the tenure of one year, one\n\n[0:01:29] spk_1: day. highest\n\n[0:01:31] spk_0: across the\n\n[0:01:32] spk_1: industry.\n\n[0:01:35] spk_0: And then, sir, we are providing 28 services of Reaser like an email cashback dots at any at them.\n\n[0:01:41] spk_1: And that is\n\n[0:01:42] spk_0: S M salad charges, cash withdrawal and deposit charges at any branches\n\n[0:01:47] spk_1: with no charges\n\n[0:01:48] spk_0: and\n\n[0:01:48] spk_1: check book\n\n[0:01:49] spk_0: check pounds, that is no check bones\n\n[0:01:51] spk_1: charges and easiest return charges. Like\n\n[0:01:53] spk_0: Wes. We are providing 28 services for free,\n\n[0:01:55] spk_1: sir. Okay.\n\n[0:01:57] spk_0: Are you salad or sell\n\n[0:01:58] spk_1: from purchase? one.\n\n[0:02:01] spk_0: Saturday. so value or maintaining yours Have the account, sir.\n\n[0:02:08] spk_1: Okay,\n\n[0:02:09] spk_0: so you are maintaining. salary accounted HDFC So R I s C first bank is your savings\n\n[0:02:16] spk_1: account. Yeah. I hope\n\n[0:02:18] spk_0: this is your primary accounts are depressed, bank. UN answer. Ah\n\n[0:02:28] spk_1: in Are\n\n[0:02:30] spk_0: you holding any other loans or\n\n[0:02:32] spk_1: something? Any other bank recently? taken, Marcus.\n\n[0:02:39] spk_0: banks.\n\n[0:02:42] spk_1: Federal Bank. 
Sir,\n\n[0:02:43] spk_0: if you require any kind of thing\n\n[0:02:46] spk_1: means regarding your\n\n[0:02:47] spk_0: account in ideas\n\n[0:02:48] spk_1: First\n\n[0:02:48] spk_0: Bank, I am the one point of contact for all your\n\n[0:02:51] spk_1: banking means. Okay,\n\n[0:02:53] spk_0: sir. Save my official landline number, which is available in your\n\n[0:02:57] spk_1: mobile banking app.\n\n[0:02:59] spk_0: call back if any requirement.\n\n[0:03:03] spk_1: Okay, sir. Thank you. Yeah.\n\n[0:03:06] spk_0: Thanks for banking with IDF. See first Bank Sept. Have a great day, sir. Have a great day, sir.\n\n[0:03:11] spk_1: Bye.\n\n"}, {"text": "[0:00:00] null: \n\n[0:00:03] spk_0: good afternoon. Am I speaking to supra Priya, ma'am? ma'am, this is Hi, Krishna. Ma'am, Your A newly appointed as your relationship manager from I D S C Falls Bank.\n\n[0:00:15] spk_1: okay?\n\n[0:00:17] spk_0: Yeah, ma'am. In future. So whenever you have any queries or whenever you have any requirement Okay, you can contact me. So\n\n[0:00:24] spk_1: in terms of\n\n[0:00:25] spk_0: account related information or if you want to go for any kind of loans, even for investment related suggestions for a building. Yeah. Allow me banking and tell on the name.\n\n[0:00:37] spk_1: Yeah, it's\n\n[0:00:39] spk_0: Okay, so every month interest is credit out the number. so whatever the balance you maintaining. Okay. So you'll get interest, ma'am.\n\n[0:00:52] spk_1: Yeah. Yeah, I think I'm getting it.\n\n[0:00:54] spk_0: Okay. And how much rate of interest are you getting at all?\n\n[0:00:59] spk_1: Ah, it's okay. I like. I'm not that much into it. My husband take care of all that.\n\n[0:01:04] spk_0: Okay. Okay, ma'am. I'll give you the information and please inform to your husband as well. Because all banks are giving a quarterly basis. Interest, ma'am. Okay. So that too only maximum 3%. But only our bank is giving a monthly interest. Credit that the minimum 4% per annum.\n\n[0:01:23] spk_1: so\n\n[0:01:24] spk_0: war and about 10 lakhs. If you maintain up to Pike Row, we are providing 7% rate of interest, ma'am. again it monthly. Okay, so if you aware about this definitely will maintain a mobile because you'll get a more interest credit year. And even if you have any idle funds, please do fixed deposits. We are giving 7.5 rate of interest for just one year. One day, ma'am.\n\n[0:01:49] spk_1: Okay.\n\n[0:01:51] spk_0: So your husband also holding account address?\n\n[0:01:53] spk_1: No, I don't think so. He is like I don't know about.\n\n[0:02:01] spk_0: OK, ma'am. Yeah, but please save my number. Okay? You can able to see my details in net banking and mobile app. So there is an option called my relationship ing.\n\n[0:02:10] spk_1: Okay.\n\n[0:02:11] spk_0: Yeah. So just save my number, ma'am, okay? Yeah. Thank you, ma'am. Thanks for your time. Have a nice day.\n\n"}, {"text": "[0:00:00] null: \n\n[0:00:02] spk_0: Good afternoon. This is cashing will be calling from I. D. that's all. Hello? I d c forced banks. uh, it was just to share some updates with you the bank account, which you're having the person. So for this bank account, I have been assigned as your dedicated service manager. So if you need any kind of assistance related to your bank account, you can directly connect with me as last time. Also, we had a word, so I hope you have saved this number. sure? Yeah. Okay, great. 
So if you need any kind of assistance related to your bank account so you can directly connect with me, I'll be a single point of contact for you will be available from 9:30 a.m. to 6 p.m. On all bank working days. Yeah, so am I. And apart from this Na, I'll share my details with you. Why? I text, you will have my WhatsApp number. So requesting you to please save that number as well. So so in feature. If you need any kind of assistance or there would be any urgency so you can connect me through WhatsApp as well, so as it would be my official WhatsApp number. And actually, I have given you a call to inform you. Now our rate of interest has been revised. Previously, we were providing 7% annually rate of interest for the balance of more than 10. Lack up to five crores, sir. But now it has been revised. If you will keep a balance in your bank account, more than five lacs on that you'll be getting 7% annually rate of interest up to 25 clothes. and this interest amount will be getting credited in your bank account on a monthly basis. So and apart from this, if you do any fixed deposit, and if you selected 10 years from one year, one day to two years you will be getting 7.5% annually rate of interest for your fixed deposit as well. so Ah, like as you having a bank account as per your current like balance. Lorna, you are getting 4% and will rate of interest. So I would require Suggest you if you have any saving balances. Any other bank account, sir, that you can move over here so, like you will be able to enjoy the higher rate of interest, which we are providing you, sir, from 4% to 7% so which would be beneficial for you. Yeah, sure. And currently, do you need any kind of assistance from my Okay. Okay. No worries, sir. And do you have any, like, phones requirement? Because also, you have a option without any document. Okay. Okay. So no, no worry. I'll just share the benefits with you in future, whenever you need. So you can go for this. So you are having a eligibility of two Lock \u20b999,000. So which is preapproved without any documentation? Instantly will be credited in your bank account. Minimum. You can take it for two months and maximum for 48 months. And whenever you want, you can close the same. So there won't be any four closer charges at all. So? So in feature me, you can go for the same so Yeah. Thank you so much for giving your patience investing. Please feel free to reach out to me in case of any assistance you need. Thank you. Thank you so much. 
Please keep banking with 51st bank.\n\n"}, {"text": "(SPEAKER_00 1.309 - 6.929) \u091c\u0940 good morning sir \u0935\u0940\u0930\u0947\u0902\u0926\u094d\u0930 \u092a\u094d\u0930\u091c\u093e\u092a\u0924\u093f \u092c\u093e\u0924 \u0915\u0930 \u0930\u0939\u093e \u0939\u0942\u0901 IDFC first bank \u0938\u0947 \u0935\u093f\u0915\u093e\u0938 \u0915\u0941\u092e\u093e\u0930 \u091c\u0940 \u0938\u0947 \u092c\u093e\u0924 \u0939\u094b \u0930\u0939\u0940 \u0939\u0948 sir?\n(SPEAKER_01 7.96 - 7.98) \u0939\u093e\u0901\n(SPEAKER_00 8.47 - 17.09) \u091c\u0940 \u0935\u093f\u0915\u093e\u0938 \u091c\u0940 hope you remember sir recently \u0939\u092e\u093e\u0930\u0940 \u092c\u093e\u0924 \u0939\u0941\u0908 \u0925\u0940 regarding \u0906\u092a\u0915\u093e \u091c\u094b IDFC first bank \u092e\u0947\u0902 account open \u0939\u0941\u0906 \u0909\u0938\u0915\u093e \u0938\u0930\u094d\u0935ice manager sir\u0964\n(SPEAKER_01 17.899 - 18.579) \u0939\u093e\u0901 \u092c\u094b\u0932\u093f\u090f\n(SPEAKER_00 18.94 - 25.04) \u091c\u0940 \u091c\u0940 sir \u092e\u0948\u0902\u0928\u0947 \u0906\u092a\u0915\u094b \u090f\u0915 link send \u0915\u093f\u092f\u093e \u0925\u093e sir, account link \u0915\u0930\u0928\u0947 \u0915\u0947 \u0932\u093f\u090f sir\u0964 \u0915\u094d\u092f\u093e \u0906\u092a \u0909\u0938\u0947 link \u0915\u0930\u093e \u0926\u093f\u092f\u093e \u0939\u0948\u0964\n(SPEAKER_01 25.799 - 27.1) \u0928\u0939\u0940\u0902 \u0905\u092d\u0940 link \u0928\u0939\u0940\u0902 \u0939\u094b\u0924\u093e \u0905\u091a\u094d\u091b\u093e\n(SPEAKER_00 27.76 - 66.16) \u0905\u091a\u094d\u091b\u093e \u0915\u094b\u0908 \u092c\u093e\u0924 \u0928\u0939\u0940\u0902 sir \u0905\u092d\u0940 \u0906\u092a \u0905\u092a\u0928\u093e HDFC first bank \u0915\u093e mobile application login \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 \u0924\u094b \u092e\u0948\u0902 guide \u0915\u0930 \u0926\u0947\u0924\u093e \u0939\u0942\u0901 \u0935\u093f\u0915\u093e\u0938 \u091c\u0940\u0964 account link \u0915\u0930\u0928\u0947 \u0915\u0947 \u0932\u093f\u090f sir \u0905\u0917\u0930 \u0906\u092a \u0915\u094b\u0908 other bank \u0915\u093e account \u092d\u0940 use \u0915\u0930\u0924\u0947 \u0939\u0948 \u0928\u093e sir \u0924\u094b IDFC first bank \u0906\u092a\u0915\u094b \u092f\u0939\u093e\u0901 \u092a\u0947 \u090f\u0915 option \u0926\u0947\u0924\u093e \u0939\u0948\u0964 \u092f\u0939\u093e\u0901 \u092a\u0947 \u0906\u092a other bank accounts \u0915\u094b \u092d\u0940 \u0939\u092e\u093e\u0930\u0947 \u0938\u093e\u0925 link \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0964 \u0914\u0930 secondary bank account \u0915\u0940 details \u092d\u0940 \u0926\u0947\u0916 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 \u091c\u0948\u0938\u0947 \u0915\u0940 \u0907\u0938\u0915\u0947 statement, debit credit balances \u0938\u093e\u0930\u0940\u0964 \u0924\u094b \u0906\u092a\u0915\u094b \u0915\u093e\u092b\u0940 handy \u0939\u094b \u091c\u093e\u0924\u093e \u0939\u0948 \u0935\u094b \u091a\u0940\u091c sir \u0906\u092a single application \u0915\u0947 \u091c\u0930\u093f\u090f \u0906\u092a\u0915\u0947 \u0938\u093e\u0930\u0947 accounts manage \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 Hello\n(SPEAKER_01 67.129 - 67.66) \u0939\u093e\u0901\n(SPEAKER_00 67.889 - 83.19) \u091c\u0940 \u091c\u0940 \u091c\u0940 \u0915\u093e\u092b\u0940 useful process \u0939\u0948 \u0906\u092a \u0907\u0938\u0915\u094b link \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 \u092c\u093f\u0928\u093e \u0915\u094b\u0908 charges \u0915\u0947 sir \u0914\u0930 \u0905\u0917\u0930 
\u0906\u092a\u0915\u094b \u092f\u0947 useful \u0932\u0917\u0924\u093e \u0939\u0948 \u0924\u094b \u0906\u092a \u0909\u0938\u0915\u094b as it is continue \u0930\u0916 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 \u0907\u0928 future \u0905\u0917\u0930 \u0906\u092a\u0915\u094b \u0932\u0917 \u0930\u0939\u093e \u0939\u0948 \u0915\u0940 \u0906\u092a\u0915\u094b \u092f\u0947 useful \u0928\u0939\u0940\u0902 \u0932\u0917 \u0930\u0939\u093e \u0939\u0948 \u0924\u094b \u0906\u092a \u0909\u0938\u0915\u094b unlink \u092d\u0940 \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 totally up to you sir\u0964\n(SPEAKER_01 84.86 - 85.5) \u0920\u0940\u0915 \u0939\u0948 \u0939\u092e\n(SPEAKER_00 86.819 - 88.699) \u0924\u094b \u0905\u092d\u0940 login \u0915\u0930 \u092a\u093e\u090f\u0902\u0917\u0947 application\n(SPEAKER_01 88.699 - 91.569) \u0915\u094b \u0905\u091a\u094d\u091b\u093e\n(SPEAKER_00 91.58 - 98.01) \u0905\u091a\u094d\u091b\u093e \u0915\u094b\u0908 \u092c\u093e\u0924 \u0928\u0939\u0940\u0902 sir \u0924\u094b \u0915\u094b\u0908 suitable time \u092c\u0924\u093e \u092a\u093e\u090f\u0902\u0917\u0947 \u0924\u094b \u092e\u0948\u0902 \u0906\u092a\u0915\u094b \u0909\u0938\u0915\u0947 accordingly call \u0915\u0930 \u0932\u0947\u0924\u093e \u0939\u0942\u0901\u0964 \u0915\u094b\u0908 assistance requirement \u0915\u0930\u0947\n(SPEAKER_01 98.01 - 98.47) \u092e\u0948\u0902 \u0915\u0930 \u0932\u0942\u0901\u0917\u093e\u0964\n(SPEAKER_00 101.62 - 108.019) \u0905\u091a\u094d\u091b\u093e \u0905\u091a\u094d\u091b\u093e \u0920\u0940\u0915 \u0939\u0948 \u0905\u091a\u094d\u091b\u0940 \u092c\u093e\u0924 \u0939\u0948 sir \u0924\u094b \u092e\u0948\u0902 \u0935\u093e\u092a\u0938 \u0938\u0947 \u0906\u092a\u0915\u094b link \u092d\u0947\u091c \u0915\u0947 \u0930\u0916\u0924\u093e \u0939\u0942\u0901\u0964 \u0935\u094b link \u0915\u0947 through \u0906\u092a directly \u0909\u0938 main page \u092a\u0930 \u0906 \u091c\u093e\u090f\u0902\u0917\u0947\u0964\n(SPEAKER_01 108.459 - 109.58) okay\n(SPEAKER_00 110.15 - 114.089) \u092c\u093e\u0915\u0940 \u0905\u092d\u0940 \u0915\u094b\u0908 \u092c\u0948\u0902\u0915\u093f\u0902\u0917 need \u0939\u0948 sir \u091c\u093f\u0938\u092e\u0947\u0902 \u092e\u0948\u0902 \u0915\u094b\u0908 \u0938\u0939\u093e\u092f\u0924\u093e \u0915\u0930 \u0938\u0915\u0924\u093e \u0939\u0942\u0901 \u0906\u092a\u0915\u0940?\n(SPEAKER_01 114.36 - 115.959) \u0928\u0939\u0940\u0902 \u0905\u091a\u094d\u091b\u093e\n(SPEAKER_00 115.959 - 122.059) \u0915\u094b\u0908 \u092c\u093e\u0924 \u0928\u0939\u0940\u0902 sir \u0915\u094b\u0908 \u092d\u0940 \u092c\u0948\u0902ing need \u0930\u0939\u0947\u0917\u093e \u0924\u094b please call \u0915\u0940\u091c\u093f\u092f\u0947\u0917\u093e \u0907\u0938 same number \u092a\u0947 \u0906\u092a\u0915\u094b call connect \u0939\u094b \u091c\u093e\u092f\u0947\u0917\u093e sir \u0920\u0940\u0915 \u0939\u0948\n(SPEAKER_01 122.279 - 122.739) okay thank you\n(SPEAKER_00 123.44 - 124.29) thank you sir have a nice"}, {"text": "[0:00:00] null: \n\n[0:00:04] spk_0: hello good morning sir hello hello good morning\n\n[0:00:08] spk_1: \u0915\u093f\u0936\u094b\u0930 \u092a\u094d\u0930\u0927\u093e\u0928 \u092c\u093e\u0924 \u0915\u0930 \u0930\u0939\u093e \u0925\u093e i d f c post bank \u0938\u0947 \u092e\u0947\u0930\u0940 \u092c\u093e\u0924 mister \u0906\u0932\u092e \u0928\u0947\u0917\u0940 \u091c\u0940 \u0938\u0947 \u092c\u093e\u0924 \u0939\u094b \u0930\u0939\u0940\n\n[0:00:13] spk_0: \u0939\u0948\u0902\u0964 \u0939\u093e\u0901 \u091c\u0940 \u0939\u093e\u0901 \u091c\u0940 \u0939\u093e\u0901 good 
morning! \u0906\u092a\u0915\u0947\n\n[0:00:15] spk_1: account good morning sir \u092e\u0948\u0902 \u0906\u092a\u0915\u0947 account \u0915\u093e relationship manager \u092c\u093e\u0924 \u0915\u0930 \u0930\u0939\u093e \u0925\u093e\u0964\n\n[0:00:20] spk_0: \u0939\u093e\u0902 introduction call \u0925\u093e okay\n\n[0:00:23] spk_1: bank \u0928\u0947 officially \u092e\u0941\u091d\u0947 appoint \u0915\u093f\u092f\u093e \u0925\u093e sir \u0906\u092a\u0915\u0947 account \u0915\u0947 \u0932\u093f\u090f \u0938\u093e\u0930\u0947 product related services \u0915\u0947\n\n[0:00:28] spk_0: \u0932\u093f\u090f\u0964 \u0924\u094b\n\n[0:00:29] spk_1: \u0939\u094b\u0917\u093e \u0928\u093e\u0964\n\n[0:00:31] spk_0: \u0939\u093e\u0902 \u0906\u092a\u0915\u0947 mobile application\n\n[0:00:33] spk_1: pay \u092d\u0940 visible \u0930\u0939\u0947\u0917\u093e\u0964 in \u092e\u0948 pressure shifting\n\n[0:00:36] spk_0: \u0906\u092a\u0915\u094b official\n\n[0:00:37] spk_1: mail id with me official number and official whatsapp detail \u0915\u0947 \u0938\u093e\u0925 \u0906\u092a\u0915\u094b \u092e\u093f\u0932 \u091c\u093e\u090f\u0917\u093e\u0964 \u0920\u0940\u0915\n\n[0:00:43] spk_0: \u0939\u0948\u0902\n\n[0:00:45] spk_1: \u0938\u093e\u0925 \u0939\u0940 \u0938\u093e\u0925 \u091c\u0948\u0938\u0947 \u0939\u0940 \u0939\u092e\u093e\u0930\u093e conversation and \u0939\u094b\u0924\u093e \u0939\u0948\u0902 sir \u0906\u092a\u0915\u094b \u090f\u0915 memes \u0915\u0947 through \u092d\u0940 inform \u0915\u093f\u092f\u093e \u091c\u093e\u0924\u093e \u0939\u0948\u0902 \u0915\u093f \u0926\u091f \u092e\u0948\u0902 \u0906\u092a\u0915\u0947 \u0906 \u0930\u0939\u093e \u0939\u0942\u0902 \u0906\u092a \u092e\u0947\u0930\u093e \u091c\u094b detail \u0939\u0948\u0902 \u0905\u092a\u0928\u0947 phone book \u092e\u0947\u0902 save \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 clicking only\n\n[0:00:54] spk_0: message\u0964\n\n[0:00:56] spk_1: \u0906\u092a\u0915\u094b \u0906\u092a\u0915\u0947 product related account related services related\n\n[0:01:00] spk_0: \u091c\u094b \u0915\u0940\n\n[0:01:00] spk_1: \u0906\u092a\u0915\u094b requirement \u0930\u0939\u0924\u0940 \u0939\u0948\u0902 \u0924\u094b \u0906\u092a \u092e\u0941\u091d\u0947 call\n\n[0:01:02] spk_0: \u0915\u0930\n\n[0:01:03] spk_1: \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 connect \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\n\n[0:01:04] spk_0: \u092e\u0947\u0930\u093e \u091c\u094b working\n\n[0:01:05] spk_1: nine thirty eight two six \u092a\u0940\u090f\u092e \u0915\u093e \u0939\u0948\u0902 sir\n\n[0:01:07] spk_0: bank working days \u092e\u0947\u0902 sir okay \u0920\u0940\u0915 \u0939\u0948\u0902\n\n[0:01:11] spk_1: and \u0906\u092a\u0915\u093e saving account \u092d\u0940 \u0906\u092a \u092c\u0939\u0941\u0924 \u0938\u092e\u092f \u0938\u0947 \u0939\u092e\u093e\u0930\u0947 \u0938\u093e\u0925 \u091c\u0941\u095c\u0947 \u0939\u0941\u090f \u0939\u0948\u0902\u0964 sir \u0926\u094b \u0939\u095b\u093e\u0930 \u092c\u0940\u0938 \u0938\u0947 \u0906\u092a\u0915\u093e account \u0939\u0948\u0902 right\n\n[0:01:16] spk_0: hope\n\n[0:01:18] spk_1: \u092f\u0947 \u0906\u092a\u0915\u093e \u091c\u094b account \u0939\u0948\u0902 \u092f\u093e primary account \u0939\u0948\u0902 sir\n\n[0:01:22] spk_0: \u092f\u0947\n\n[0:01:22] spk_1: account \u0906\u092a\u0915\u093e primary account \u0939\u0948\u0902\n\n[0:01:23] spk_0: sir \u0939\u093e\u0901 \u091c\u0940! 
okay\n\n[0:01:26] spk_1: and \u092b\u093f\u0932\u0939\u093e\u0932 \u0905\u092d\u0940 \u0906\u092a job \u0915\u0930 \u0930\u0939\u0947\n\n[0:01:28] spk_0: \u0939\u0948\u0902\n\n[0:01:28] spk_1: business \u0935\u0917\u0948\u0930\u0939 run \u0915\u0930 \u0930\u0939\u0947\n\n[0:01:29] spk_0: \u0939\u0948\u0902 \u091c\u094b salary account \u0914\u0930 \u0906\u092a\u0915\u093e right corporate corporate account \u092e\u0947\u0902 salary\n\n[0:01:38] spk_1: account \u0915\u0941\u091b benefits \u092e\u0948\u0902 \u0906\u092a\u0915\u094b \u092c\u0924\u093e \u0926\u0947\u0928\u093e\n\n[0:01:40] spk_0: \u091a\u093e\u0939\u0942\u0901\u0917\u093e sir \u0907\u0938 account \u092e\u0947\u0902 \u0906\u092a\u0915\u094b rate of\n\n[0:01:42] spk_1: interest \u091c\u094b \u0939\u0948\u0902 \u0928\u093e sir annual \u092e\u0947\u0902 sir saving me more than ten lakh rupees \u0905\u0917\u0930 account \u092e\u0947\u0902 fund \u0905\u0917\u0930 \u0939\u094b\u0924\u093e \u0939\u0948\u0902 sir \u0906\u092a\u0915\u0947 \u0924\u094b\n\n[0:01:49] spk_0: \u0906\u092a\u0915\u094b annually\n\n[0:01:50] spk_1: seven seven percent \u0915\u093e rate of\n\n[0:01:51] spk_0: interest \u092e\u093f\u0932\u0924\u093e \u0939\u0948\u0902 \u0906\u092a\u0915\u0940 saving \u092f\u0947 \u091c\u094b\n\n[0:01:54] spk_1: amount \u0939\u094b\u0924\u093e \u0939\u0948\u0902 monthly basis pay\n\n[0:01:55] spk_0: \u0906\u092a\u0915\u094b\n\n[0:01:56] spk_1: credit \u0939\u094b\u0924\u093e\n\n[0:01:56] spk_0: \u0939\u0948\u0902 and ten\n\n[0:01:58] spk_1: lakh rupees \u0905\u0917\u0930 \u0906\u092a\u0915\u0947 account \u092e\u0947\u0902 fund available \u0939\u0948\u0902 \u0905\u0917\u0930 \u0906\u092a book \u0915\u0930\u0924\u0947 \u0939\u0948\u0902 \u0906\u092a\u0915\u0947 account primary account \u0938\u0947 \u0924\u094b four percent\n\n[0:02:04] spk_0: \u0906\u092a\u0915\u093e \u0906\u092a\u0915\u094b annually rate of\n\n[0:02:05] spk_1: interest monthly\n\n[0:02:07] spk_0: basis pay\n\n[0:02:07] spk_1: \u0906\u092a\n\n[0:02:08] spk_0: credit credit \u0915\u0930\u0915\u0947 \u092e\u093f\u0932\u0924\u093e\n\n[0:02:09] spk_1: \u0939\u0948\u0902 sir\n\n[0:02:10] spk_0: okay above ten lakh seven percent\n\n[0:02:14] spk_1: above ten lakhs it will be seven percent sir\n\n[0:02:16] spk_0: okay\n\n[0:02:17] spk_1: \u0905\u0917\u0930 \u0906\u092a comparison \u0926\u0947\u0916\u0947\u0902\u0917\u0947 \u0928\u093e sir \u0924\u094b \u0906\u092a \u092c\u0939\u0941\u0924 \u091c\u094d\u092f\u093e\u0926\u093e save \u0915\u0930 \u0932\u0947\u0924\u0947 \u0939\u0948\u0902\u0964 \u0906\u092a\u0915\u0940 saving account \u092e\u0947\u0902 \u0939\u0940 sir \u0905\u0917\u0930 \u0906\u092a \u0915\u0939\u0940\u0902 investment \u0905\u0917\u0930 \u0928\u0939\u0940\u0902 \u0915\u0930 \u0930\u0939\u0947 \u0939\u0948\u0902 \u091c\u0948\u0938\u0947 \u0915\u0940 market \u0935\u0917\u0948\u0930\u0939 pay sir because\n\n[0:02:26] spk_0: \u0906\u092a\u0928\u0947 app \u0915\u0941\u091b\n\n[0:02:27] spk_1: \u0939\u0948\u0902 right \u091c\u092c\n\n[0:02:29] spk_0: \u092d\u0940 \u0939\u0941\u0906 \u0939\u0948\u0902 \u0905\u092d\u0940 \u0906\u092a\u0915\u093e\n\n[0:02:31] spk_1: \u090f\u0915 running \u092e\u0947\u0902 \u092d\u0940 \u091a\u0932 \u0930\u0939\u093e\n\n[0:02:32] spk_0: \u0939\u0948\u0902\n\n[0:02:33] spk_1: and \u0926\u0947\u0916\u0947\u0902\u0917\u0947 has two market \u092e\u0947\u0902 \u091c\u094b banks available \u0939\u0948\u0902 \u092b\u093f\u0932\u0939\u093e\u0932\n\n[0:02:37] spk_0: \u0909\u0928\u0938\u0947 \u0939\u092e better\n\n[0:02:38] spk_1: provide 
\u0915\u0930 \u0930\u0939\u0947 \u0939\u0948\u0902\u0964 sir seven point five zero annual\n\n[0:02:40] spk_0: rate of interest \u0939\u0948\u0902\n\n[0:02:41] spk_1: sir one year plus one day \u0915\u093e \u092d\u0940 \u0905\u0917\u0930 \u0906\u092a fd book \u0915\u0930\u0924\u0947 \u0939\u0948\u0902 \u0924\u094b\n\n[0:02:46] spk_0: \u0915\u093f\u0924\u0928\u093e \u0939\u0948\u0902\n\n[0:02:47] spk_1: one\n\n[0:02:47] spk_0: year one day \u0915\u093e \u0915\u093f\u0924\u0928\u093e and\n\n[0:02:50] spk_1: \u092c\u0924\u093e \u0926\u0947 \u0930\u0939\u093e \u0939\u0942\u0901 one year plus one day two eighteen month it will be seven point five zero\n\n[0:02:56] spk_0: okay okay and\n\n[0:02:58] spk_1: \u0939\u092e \u0925\u094b\u095c\u093e additional provide \u0915\u0930\u0924\u0947 \u0939\u0948\u0902\u0964 senior citizen \u0915\u094b sir\n\n[0:03:01] spk_0: senior citizen account\n\n[0:03:02] spk_1: \u0905\u0917\u0930 \u0906\u092a\u0915\u0947 \u0918\u0930 \u092e\u0947\u0902 \u0915\u094b\u0908 senior citizen member \u0939\u0948\u0902, \u0906\u092a account open \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 sir \u0909\u0928\u0915\u094b \u091c\u094b fd rates \u0939\u0948\u0902 \u0935\u094b eight percent \u0939\u0948\u0902\u0964 sir one year\n\n[0:03:09] spk_0: plus one day \u0915\u093e okay \u0920\u0940\u0915 document documents required for senior\n\n[0:03:17] spk_1: citizen require \u0906\u0927\u093e\u0930 card pan card\n\n[0:03:20] spk_0: \u0906\u092a\u0915\u094b \u0915\u0939\u0940\u0902\n\n[0:03:20] spk_1: visit \u092d\u0940 \u0915\u0930\u0928\u0947 \u0915\u0940 requirement\n\n[0:03:22] spk_0: \u0928\u0939\u0940\u0902 \u0939\u094b\u0924\u0940 \u0939\u0948\u0902 because \u0939\u092e senior citizen\n\n[0:03:24] spk_1: \u0905\u0917\u0930 account open \u0915\u0930\u0928\u093e \u0939\u094b\u0924\u093e \u0939\u0948\u0902 \u0924\u094b \u0939\u092e account \u0906\u092a\u0915\u0947 place pay visit \u0915\u0930\u0915\u0947 account open \u0915\u093f\u092f\u093e \u091c\u093e\u0924\u093e \u0939\u0948\u0902\u0964 sir\n\n[0:03:30] spk_0: and \u0909\u0928\u0915\u094b\n\n[0:03:31] spk_1: \u0930\u094b step banking facility day one \u0938\u0947 \u0939\u092e provide \u0915\u0930\u0924\u0947 \u0939\u0948\u0902 sir except, cash, withdraw \u0914\u0930 cash deposit\n\n[0:03:41] spk_0: \u0920\u0940\u0915 \u0939\u0948\u0902\n\n[0:03:42] spk_1: \u091c\u094b \u092c\u093e\u0915\u0940 \u0938\u093e\u0930\u0940 services \u0930\u0939\u0924\u0940 \u0939\u0948\u0902\u0964 sir account related \u091a\u093e\u0939\u0947 \u0935\u094b check pick up \u0939\u094b, signature update \u0939\u094b, \u0920\u0940\u0915 \u0939\u0948\u0902 \u092f\u093e \u091b\u094b\u091f\u0947 \u0938\u0947 \u091b\u094b\u091f\u0947 \u091a\u0940\u091c\u094b\u0902 \u0915\u0947 \u0932\u093f\u090f \u092d\u0940 \u0905 \u0905\u0917\u0930 \u0909\u0928\u094d\u0939\u0947\u0902 \u0905\u0917\u0930 requirement \u0939\u0948\u0902 \u0918\u0930 \u0906\u0915\u0947 \u0938\u092e\u091d\u093e\u092f\u093e \u091c\u093e\u092f\u0947\u0917\u093e \u092c\u0924\u093e\u092f\u093e \u091c\u093e\u090f \u0935\u094b\n\n[0:03:54] spk_0: \u092d\u0940 in\n\n[0:03:55] spk_1: case in case of \u0905\u0917\u0930 branch \u0935\u094b visit \u0915\u0930\u0928\u093e \u092d\u0940 \u091a\u093e\u0939\u0924\u0947 \u0939\u0948\u0902 \u0928\u093e sir \u0924\u094b \u0909\u0928\u0915\u094b \u092a\u094d\u0930\u0947\u092b\u0930\u0947\u0902\u0936\u093f\u092f\u0932 \u0926\u093f\u092f\u093e \u091c\u093e\u0924\u093e \u0939\u0948\u0902 sir for\n\n[0:04:02] spk_0: example\n\n[0:04:03] spk_1: 
\u091a\u093e\u0930 \u0932\u094b\u0917 \u0905\u0917\u0930 \u092a\u0939\u0932\u0947 \u0938\u0947 line \u092e\u0947\u0902 \u0924\u094b \u0909\u0928\u0915\u094b \u0939\u0940 \u0938\u092c\u0938\u0947 \u092a\u0939\u0932\u0947 \u0939\u092e attend \u0915\u0930\u0947\u0902\u0917\u0947\u0964\n\n[0:04:07] spk_0: okay okay \u092f\u0947 \u0938\u093e\u0930\u0940\n\n[0:04:08] spk_1: \u091a\u0940\u095b\u0947 bank \u092e\u0947\u0902 \u0906\u092a\u0915\u094b provide\n\n[0:04:10] spk_0: so in case of any information of any query we can talk to you\n\n[0:04:15] spk_1: yes yes yes\n\n[0:04:16] spk_0: \u092e\u0948\u0902\n\n[0:04:17] spk_1: number und details in your mobile application also in relation shipping\u0964 \u0914\u0930 \u091c\u0948\u0938\u0947\n\n[0:04:22] spk_0: \u092e\u0948\u0902\u0928\u0947 \u0906\u092a\u0915\u094b\n\n[0:04:22] spk_1: \u092c\u0924\u093e\u092f\u093e \u0915\u093f \u090f\u0938\u0964 \u090f\u092e\u0964 \u090f\u0938\u0964 \u092d\u0940 \u0906\u092a\u0915\u094b \u092e\u093f\u0932 \u091c\u093e\u090f\u0917\u093e\u0964 \u090f\u0938\u0964 \u090f\u092e\u0964 \u090f\u0938\u0964 \u092e\u0947\u0902 \u092d\u0940 \u0906\u092a save \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 detail\n\n[0:04:26] spk_0: \u0920\u0940\u0915 \u0920\u0940\u0915 \u0939\u0948\u0902\u0964 \u0920\u0940\u0915 \u0939\u0948\u0902 thank you thank you so much thank you \u0939\u092e\u094d\u092e\u0964\n\n"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.4, "name": "Accuracy"}]}]}]} | Megatron17/setfit-paraphrase-mpnet-base-v2-sst2 | null | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | null | 2024-04-24T06:39:33+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-mpnet-base-v2 #model-index #region-us
| SetFit with sentence-transformers/paraphrase-mpnet-base-v2
==========================================================
This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
* Classification head: a LogisticRegression instance
* Maximum Sequence Length: 512 tokens
* Number of Classes: 2 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
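A minimal sketch of that flow (the original card's code blocks were stripped; the input sentence is illustrative, and installation is assumed via `pip install setfit`):

```python
from setfit import SetFitModel

# Download the classifier from the 🤗 Hub
model = SetFitModel.from_pretrained("Megatron17/setfit-paraphrase-mpnet-base-v2-sst2")
# Run inference on raw transcript text; returns one of the model's 2 classes
preds = model.predict(["Please feel free to reach out to me in case of any assistance you need."])
print(preds)
```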
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (4, 4)
* num\_epochs: (1, 1)
* max\_steps: -1
* sampling\_strategy: oversampling
* num\_iterations: 20
* body\_learning\_rate: (2e-05, 2e-05)
* head\_learning\_rate: 2e-05
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
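The list above corresponds roughly to the following `setfit.TrainingArguments` — a reconstruction for orientation, not code taken from the card:

```python
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(4, 4),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=False,
)
```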
### Training Results
### Framework Versions
* Python: 3.10.12
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* Transformers: 4.40.0
* PyTorch: 2.2.1+cu121
* Datasets: 2.19.0
* Tokenizers: 0.19.1
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (4, 4)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-mpnet-base-v2 #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (4, 4)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] |
text-to-image | diffusers |
# AutoTrain SDXL LoRA DreamBooth - DileepPatruni/CarImagesTrainedModel
<Gallery />
## Model description
These are DileepPatruni/CarImagesTrainedModel LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use photo of a car on a road to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](DileepPatruni/CarImagesTrainedModel/tree/main) them in the Files & versions tab.
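The card itself ships no usage code; as a rough sketch, loading these LoRA weights on top of the SDXL base with diffusers typically looks like the following (any prompt wording beyond the trigger phrase is illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Attach the DreamBooth LoRA adapter from this repository
pipe.load_lora_weights("DileepPatruni/CarImagesTrainedModel")
# The trigger phrase from the card should appear in the prompt
image = pipe("photo of a car on a road").images[0]
image.save("car.png")
```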
| {"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "photo of a car on a road"} | DileepPatruni/CarImagesTrainedModelDreambooth | null | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-24T06:39:41+00:00 | [] | [] | TAGS
#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# AutoTrain SDXL LoRA DreamBooth - DileepPatruni/CarImagesTrainedModel
<Gallery />
## Model description
These are DileepPatruni/CarImagesTrainedModel LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use photo of a car on a road to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# AutoTrain SDXL LoRA DreamBooth - DileepPatruni/CarImagesTrainedModel\n\n<Gallery />",
"## Model description\n\nThese are DileepPatruni/CarImagesTrainedModel LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use photo of a car on a road to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# AutoTrain SDXL LoRA DreamBooth - DileepPatruni/CarImagesTrainedModel\n\n<Gallery />",
"## Model description\n\nThese are DileepPatruni/CarImagesTrainedModel LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use photo of a car on a road to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | juewang/Meta-Llama-3-4B-mlp-pruned | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T06:39:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1
This model is a fine-tuned version of [allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2](https://huggingface.co/allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2) on the srh_test66 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
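For orientation, these settings map roughly onto 🤗 Transformers `TrainingArguments` as sketched below; the output path and the Trainer/dataset wiring are illustrative, not from the card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-srh-step2-v1",  # illustrative path
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
)
```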
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["srh_test66"], "base_model": "allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2", "model-index": [{"name": "distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1", "results": []}]} | allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1 | null | [
"transformers",
"safetensors",
"distilbert",
"generated_from_trainer",
"dataset:srh_test66",
"base_model:allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:41:44+00:00 | [] | [] | TAGS
#transformers #safetensors #distilbert #generated_from_trainer #dataset-srh_test66 #base_model-allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2 #license-apache-2.0 #endpoints_compatible #region-us
|
# distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1
This model is a fine-tuned version of allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2 on the srh_test66 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1\n\nThis model is a fine-tuned version of allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2 on the srh_test66 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #distilbert #generated_from_trainer #dataset-srh_test66 #base_model-allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1\n\nThis model is a fine-tuned version of allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2 on the srh_test66 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/BioMistral/BioMistral-DARE-NS
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
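As a minimal sketch (not part of the original card), one common way to run a quant from the table below is via llama-cpp-python; the filename here is the Q4_K_M file listed under Provided Quants:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file from this repo, then load it with the llama.cpp bindings
path = hf_hub_download(
    repo_id="mradermacher/BioMistral-DARE-NS-GGUF",
    filename="BioMistral-DARE-NS.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("What is hypertension?", max_tokens=128)["choices"][0]["text"])
```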
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "BioMistral/BioMistral-DARE-NS", "quantized_by": "mradermacher"} | mradermacher/BioMistral-DARE-NS-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:BioMistral/BioMistral-DARE-NS",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:42:17+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-BioMistral/BioMistral-DARE-NS #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-BioMistral/BioMistral-DARE-NS #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | santoshsto/mistral-7b-cpp-FINETUNED-16bit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T06:42:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers | from transformers import BertTokenizer, AutoModelForSequenceClassification

senti_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained("weip9012/tesla_earningscall_sentiment_analysis")
# Label mapping: 0: 'Neutral', 1: 'Positive', 2: 'Negative'
| {} | weip9012/tesla_earningscall_sentiment_analysis | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:43:33+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| from transformers import BertTokenizer, AutoModelForSequenceClassification

senti_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained("weip9012/tesla_earningscall_sentiment_analysis")
# Label mapping: 0: 'Neutral', 1: 'Positive', 2: 'Negative'
| [] | [
"TAGS\n#transformers #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers | Based on Meta-Llama-3-8b-Instruct, and is governed by Meta Llama 3 License agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
We don't yet know exactly how well this model performs on benchmarks, since we haven't benchmarked it, but we think real prompts and usage are more telling anyway.
From our testing, this model:
- Refuses less
- Is more uncensored
- Follows requests better
- Replies in requested formats more reliably, without adding unnecessary information
We are happy for anyone to try it out and give some feedback.
You can also try this model on our API at https://www.awanllm.com/
Training:
- Trained at 2048 sequence length, while the base model uses 8192. From testing, it still handles the full 8192-token context just fine.
- Trained on a modified and improved version of Cognitive Computations Eric Hartford's Dolphin dataset. https://huggingface.co/datasets/cognitivecomputations/dolphin
- Training duration is around 2 days on 2x RTX3090 on our own machine, using 4-bit loading and Qlora 64-rank 128-alpha resulting in ~2% trainable weights.
The goal for this model is to have the model less-censored and great at general tasks like the previous dolphin based models by Eric Hartford.
We started training this BEFORE they launched their own full weight trained Llama-3-8B-Dolphin-2.9 with their own curated datasets and the newer "Dolphin 2.9" dataset, but we think this model is still a unique take on Llama 3 8B Instruct and the dolphin dataset.
https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b
The difference with their dolphin 2.9 model is that we train this using Meta's new Llama 3 instruct format and not the regular ChatML format that Dolphin models are usually trained on.
This is because we think that it performed better using the format it was originally trained on.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
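As a quick check of this format, here is a minimal inference sketch — not from the original card; the generation settings are assumptions, it presumes the tokenizer ships Meta's Llama 3 chat template, and loading this AWQ quant through transformers requires the `autoawq` package and a CUDA GPU:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# apply_chat_template renders the <|start_header_id|>/<|eot_id|> markup shown above
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```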
Quants:
AWQ: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-AWQ
GGUF: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF
FP16: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin
Exllamav2:
4bpw: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-exl2-h8-4bpw-exl2
8bpw: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-exl2-h8-8bpw-exl2
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Axolotl Config:
```
base_model: Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
train_on_inputs: false
group_by_length: false
load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 2048
bf16: true
fp16: false
tf32: false
flash_attention: true
# Data
datasets:
- path: flan1m-universal-uncensored-system-2048.jsonl
type:
system_prompt: ""
system_format: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
field_system: system
field_instruction: input
field_output: output
format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
no_input_format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
warmup_steps: 10
dataset_prepared_path: ./last_run_prepared
# Iterations
num_epochs: 1
saves_per_epoch: 4
# Evaluation
val_set_size: 0.01
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
evals_per_epoch: 4
# LoRA
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
save_safetensors: true
# Sampling
sample_packing: true
pad_to_sequence_len: true
# Batching
gradient_accumulation_steps: 32
micro_batch_size: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
# Optimizer
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
# Misc
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
debug:
deepspeed: zero3_bf16.json
weight_decay: 0.1
special_tokens:
pad_token: <|end_of_text|>
```
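For reference, the effective batch implied by this config is micro_batch_size × gradient_accumulation_steps = 4 × 32 = 128 sequences per device per optimizer step, where each sequence is a packed 2048-token window (sample_packing: true).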
| {"license": "apache-2.0"} | AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T06:43:46+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Based on Meta-Llama-3-8b-Instruct, and is governed by Meta Llama 3 License agreement:
URL
We don't yet know exactly how well this model performs on benchmarks, since we haven't benchmarked it, but we think real prompts and usage are more telling anyway.
From our testing, this model:
- Refuses less
- Is more uncensored
- Follows requests better
- Replies in requested formats more reliably, without adding unnecessary information
We are happy for anyone to try it out and give some feedback.
You can also try this model on our API at URL
Training:
- Trained at 2048 sequence length, while the base model uses 8192. From testing, it still handles the full 8192-token context just fine.
- Trained on a modified and improved version of Cognitive Computations Eric Hartford's Dolphin dataset. URL
- Training duration is around 2 days on 2x RTX3090 on our own machine, using 4-bit loading and Qlora 64-rank 128-alpha resulting in ~2% trainable weights.
The goal for this model is to have the model less-censored and great at general tasks like the previous dolphin based models by Eric Hartford.
We started training this BEFORE they launched their own full weight trained Llama-3-8B-Dolphin-2.9 with their own curated datasets and the newer "Dolphin 2.9" dataset, but we think this model is still a unique take on Llama 3 8B Instruct and the dolphin dataset.
URL
The difference with their dolphin 2.9 model is that we train this using Meta's new Llama 3 instruct format and not the regular ChatML format that Dolphin models are usually trained on.
This is because we think that it performed better using the format it was originally trained on.
Instruct format:
Quants:
AWQ: URL
GGUF: URL
FP16: URL
Exllamav2:
4bpw: URL
8bpw: URL
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
Axolotl Config:
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n"
] |
text-generation | transformers |
- **Developed by:** kevinkawchak
- **License:** llama3
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
- **Finetuned using dataset :** zjunlp/Mol-Instructions, cc-by-4.0
- **Dataset identification:** Molecule-oriented Instructions
- **Dataset function:** Description guided molecule design
The following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work.
[Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing). [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. <br>
A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of the created models. Specifically, the molecule-oriented 'description guided molecule design' instructions were used to answer both general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned 'SELFIES' structures, but with limited accuracy.

The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression, and of hyperparameter adjustments for accurate SELFIES chemical-structure outputs, remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face. (4-5)

Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This significantly improved responses to biochemistry-related questions; the resulting models were saved at the following sizes: [8.03B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16), [4.65B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04). [github](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Molecule.ipynb).
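The card itself gives no inference snippet; the sketch below is an illustrative assumption (the prompt and generation settings are hypothetical), loading the 16-bit upload with standard transformers calls:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical molecule-design style query
prompt = "Design a molecule that is described as: a water-soluble antioxidant."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```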
References:
1) unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
3) github: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
4) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
5) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
@inproceedings{fang2023mol, <br>
author = {Yin Fang and<br>
Xiaozhuan Liang and<br>
Ningyu Zhang and<br>
Kangwei Liu and<br>
Rui Huang and<br>
Zhuo Chen and<br>
Xiaohui Fan and<br>
Huajun Chen},<br>
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br>
for Large Language Models},<br>
booktitle = {{ICLR}},<br>
publisher = {OpenReview.net},<br>
year = {2024},<br>
url = {https://openreview.net/pdf?id=Tlsdsb6l9n}}<br>
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "llama3", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "datasets": ["zjunlp/Mol-Instructions"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"dataset:zjunlp/Mol-Instructions",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:45:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #dataset-zjunlp/Mol-Instructions #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-llama3 #autotrain_compatible #endpoints_compatible #region-us
|
- Developed by: kevinkawchak
- License: llama3
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
- Finetuned using dataset : zjunlp/Mol-Instructions, cc-by-4.0
- Dataset identification: Molecule-oriented Instructions
- Dataset function: Description guided molecule design
The following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work.
Cover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3. <br>
A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of the created models. Specifically, the molecule-oriented 'description guided molecule design' instructions were used to answer both general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned 'SELFIES' structures, but with limited accuracy.

The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression, and of hyperparameter adjustments for accurate SELFIES chemical-structure outputs, remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face. (4-5)

Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This significantly improved responses to biochemistry-related questions; the resulting models were saved at the following sizes: 8.03B, 4.65B. github.
References:
1) unsloth: URL
2) zjunlp: URL
3) github: URL
4) hugging face: URL
5) hugging face: URL
@inproceedings{fang2023mol, <br>
author = {Yin Fang and<br>
Xiaozhuan Liang and<br>
Ningyu Zhang and<br>
Kangwei Liu and<br>
Rui Huang and<br>
Zhuo Chen and<br>
Xiaohui Fan and<br>
Huajun Chen},<br>
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br>
for Large Language Models},<br>
booktitle = {{ICLR}},<br>
publisher = {URL},<br>
year = {2024},<br>
url = {URL
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #dataset-zjunlp/Mol-Instructions #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-llama3 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5-TTS-Nepali-SLR43-SLR143-SWOS
This model is a fine-tuned version of [aryamanstha/speecht5_nepali_oslr43_oslr143](https://huggingface.co/aryamanstha/speecht5_nepali_oslr43_oslr143) on the swos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4827
## Model description
More information needed
## Intended uses & limitations
More information needed
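Usage details are not provided; below is a hedged sketch based on the standard SpeechT5 inference API (the zero speaker embedding is a placeholder — substitute a real 512-dim x-vector in practice):

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "rikeshsilwalekg/speecht5_tts_nepali_slr43_slr143_swos"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="नमस्ते", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```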
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 418
- training_steps: 2615
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4978 | 37.04 | 1000 | 0.4787 |
| 0.4742 | 74.07 | 2000 | 0.4827 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"language": ["ne"], "tags": ["generated_from_trainer"], "datasets": ["swos"], "base_model": "aryamanstha/speecht5_nepali_oslr43_oslr143", "model-index": [{"name": "SpeechT5-TTS-Nepali-SLR43-SLR143-SWOS", "results": []}]} | rikeshsilwalekg/speecht5_tts_nepali_slr43_slr143_swos | null | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"ne",
"dataset:swos",
"base_model:aryamanstha/speecht5_nepali_oslr43_oslr143",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:47:23+00:00 | [] | [
"ne"
] | TAGS
#transformers #pytorch #speecht5 #text-to-audio #generated_from_trainer #ne #dataset-swos #base_model-aryamanstha/speecht5_nepali_oslr43_oslr143 #endpoints_compatible #region-us
| SpeechT5-TTS-Nepali-SLR43-SLR143-SWOS
=====================================
This model is a fine-tuned version of aryamanstha/speecht5\_nepali\_oslr43\_oslr143 on the swos dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4827
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 128
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 418
* training\_steps: 2615
### Training results
### Framework versions
* Transformers 4.31.0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 418\n* training\\_steps: 2615",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.31.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #speecht5 #text-to-audio #generated_from_trainer #ne #dataset-swos #base_model-aryamanstha/speecht5_nepali_oslr43_oslr143 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 418\n* training\\_steps: 2615",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.31.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** Dionick
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Dionick/llama3-finetuned-alpaca-Digvijay-phutane | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:51:17+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Dionick
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Dionick\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Dionick\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** kevinkawchak
- **License:** llama3
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
- **Finetuned using dataset :** zjunlp/Mol-Instructions, cc-by-4.0
- **Dataset identification:** Molecule-oriented Instructions
- **Dataset function:** Description guided molecule design
The following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work.
[Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing). [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. <br>
A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of the created models. Specifically, the molecule-oriented 'description guided molecule design' instructions were used to answer both general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned 'SELFIES' structures, but with limited accuracy.

The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression, and of hyperparameter adjustments for accurate SELFIES chemical-structure outputs, remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face. (4-5)

Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This significantly improved responses to biochemistry-related questions; the resulting models were saved at the following sizes: [8.03B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16), [4.65B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04). [github](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Molecule.ipynb).
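As with the 16-bit upload, no snippet is provided; a hedged 4-bit loading sketch follows (requires `bitsandbytes`; the explicit config is an assumption — the repo may already carry a quantization config, in which case a plain `from_pretrained` suffices):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
```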
References:
1) unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
3) github: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
4) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
5) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
@inproceedings{fang2023mol, <br>
author = {Yin Fang and<br>
Xiaozhuan Liang and<br>
Ningyu Zhang and<br>
Kangwei Liu and<br>
Rui Huang and<br>
Zhuo Chen and<br>
Xiaohui Fan and<br>
Huajun Chen},<br>
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br>
for Large Language Models},<br>
booktitle = {{ICLR}},<br>
publisher = {OpenReview.net},<br>
year = {2024},<br>
url = {https://openreview.net/pdf?id=Tlsdsb6l9n}}<br>
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "llama3", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "datasets": ["zjunlp/Mol-Instructions"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:zjunlp/Mol-Instructions",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-04-24T06:51:34+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #dataset-zjunlp/Mol-Instructions #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-llama3 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: kevinkawchak
- License: llama3
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
- Finetuned using dataset : zjunlp/Mol-Instructions, cc-by-4.0
- Dataset identification: Molecule-oriented Instructions
- Dataset function: Description guided molecule design
The following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work.
Cover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3. <br>
A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of the created models. Specifically, the molecule-oriented 'description guided molecule design' instructions were used to answer both general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned 'SELFIES' structures, but with limited accuracy.

The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression, and of hyperparameter adjustments for accurate SELFIES chemical-structure outputs, remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face. (4-5)

Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This significantly improved responses to biochemistry-related questions; the resulting models were saved at the following sizes: 8.03B, 4.65B. github.
References:
1) unsloth: URL
2) zjunlp: URL
3) github: URL
4) hugging face: URL
5) hugging face: URL
@inproceedings{fang2023mol, <br>
author = {Yin Fang and<br>
Xiaozhuan Liang and<br>
Ningyu Zhang and<br>
Kangwei Liu and<br>
Rui Huang and<br>
Zhuo Chen and<br>
Xiaohui Fan and<br>
Huajun Chen},<br>
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br>
for Large Language Models},<br>
booktitle = {{ICLR}},<br>
publisher = {URL},<br>
year = {2024},<br>
url = {URL
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Uploaded model\n\n- Developed by: kevinkawchak\n- License: llama3\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n- Finetuned using dataset : zjunlp/Mol-Instructions, cc-by-4.0\n- Dataset identification: Molecule-oriented Instructions\n- Dataset function: Description guided molecule design\n\nThe following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work.\nCover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3. <br>\n\nA 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of created models. In specific, the molecule-oriented instructions description guided molecule design was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry related questions returned 'SELFIES' structures but with limited accuracy. \n\nThe notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing regarding the appropriate level of compression or hyperparameter adjustments for accurate SELFIES chemical structures outputs is relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and reduced 4-bit size were uploaded to Hugging Face. (4-5)\n\nUpdate 04/24: The number of training steps were increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This allowed for significantly improved responses to biochemistry related questions, and were saved at the following LLM Model sizes: 8.03B, 4.65B. github.\n\nReferences:\n1) unsloth: URL\n2) zjunlp: URL\n3) github: URL\n4) hugging face: URL\n5) hugging face: URL\n\n@inproceedings{fang2023mol, <br>\n author = {Yin Fang and<br>\n Xiaozhuan Liang and<br>\n Ningyu Zhang and<br>\n Kangwei Liu and<br>\n Rui Huang and<br>\n Zhuo Chen and<br>\n Xiaohui Fan and<br>\n Huajun Chen},<br>\n title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br>\n for Large Language Models},<br>\n booktitle = {{ICLR}},<br>\n publisher = {URL},<br>\n year = {2024},<br>\n url = {URL\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #dataset-zjunlp/Mol-Instructions #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-llama3 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: kevinkawchak\n- License: llama3\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n- Finetuned using dataset : zjunlp/Mol-Instructions, cc-by-4.0\n- Dataset identification: Molecule-oriented Instructions\n- Dataset function: Description guided molecule design\n\nThe following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work.\nCover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3. <br>\n\nA 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of created models. In specific, the molecule-oriented instructions description guided molecule design was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry related questions returned 'SELFIES' structures but with limited accuracy. \n\nThe notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing regarding the appropriate level of compression or hyperparameter adjustments for accurate SELFIES chemical structures outputs is relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and reduced 4-bit size were uploaded to Hugging Face. (4-5)\n\nUpdate 04/24: The number of training steps were increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This allowed for significantly improved responses to biochemistry related questions, and were saved at the following LLM Model sizes: 8.03B, 4.65B. github.\n\nReferences:\n1) unsloth: URL\n2) zjunlp: URL\n3) github: URL\n4) hugging face: URL\n5) hugging face: URL\n\n@inproceedings{fang2023mol, <br>\n author = {Yin Fang and<br>\n Xiaozhuan Liang and<br>\n Ningyu Zhang and<br>\n Kangwei Liu and<br>\n Rui Huang and<br>\n Zhuo Chen and<br>\n Xiaohui Fan and<br>\n Huajun Chen},<br>\n title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br>\n for Large Language Models},<br>\n booktitle = {{ICLR}},<br>\n publisher = {URL},<br>\n year = {2024},<br>\n url = {URL\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava-1.5-7b-hf-ft-mix-vsft
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
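No usage example is given; based on the repo's PEFT and base-model tags, here is a hedged sketch that attaches this adapter to the base checkpoint:

```python
from transformers import LlavaForConditionalGeneration, AutoProcessor
from peft import PeftModel

base = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "ar9av/llava-1.5-7b-hf-ft-mix-vsft")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
```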
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Tokenizers 0.19.1 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft", "results": []}]} | ar9av/llava-1.5-7b-hf-ft-mix-vsft | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:llava-hf/llava-1.5-7b-hf",
"region:us"
] | null | 2024-04-24T06:52:10+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us
|
# llava-1.5-7b-hf-ft-mix-vsft
This model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Tokenizers 0.19.1 | [
"# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.0+cu121\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us \n",
"# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.0+cu121\n- Tokenizers 0.19.1"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
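Until the authors fill this in, here is a hedged starter based only on the repo tags (camembert, fill-mask; CamemBERT uses `<mask>` as its mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mysterySu/dummy-model")
print(fill_mask("Le camembert est <mask> !"))
```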
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mysterySu/dummy-model | null | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:53:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
This is a dummy model!
Update from local machine! | {"language": ["en"], "license": "mit"} | prabhuvashwin/new-dummy-model | null | [
"en",
"license:mit",
"region:us"
] | null | 2024-04-24T06:54:03+00:00 | [] | [
"en"
] | TAGS
#en #license-mit #region-us
|
This is a dummy model!
Update from local machine! | [] | [
"TAGS\n#en #license-mit #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# basit12/emotion-analysis-with-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0811
- Validation Loss: 0.1993
- Train Accuracy: 0.9325
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
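Usage is not documented; below is a hedged TensorFlow inference sketch (label names aren't listed in the card, so only the argmax class index is printed):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "basit12/emotion-analysis-with-distilbert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I am so happy today!", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # class index; map to a label per the model config
```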
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3670 | 0.1846 | 0.925 | 0 |
| 0.1398 | 0.1508 | 0.935 | 1 |
| 0.1067 | 0.1743 | 0.932 | 2 |
| 0.0925 | 0.1543 | 0.9355 | 3 |
| 0.0811 | 0.1993 | 0.9325 | 4 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "basit12/emotion-analysis-with-distilbert", "results": []}]} | basit12/emotion-analysis-with-distilbert | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:54:10+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| basit12/emotion-analysis-with-distilbert
========================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0811
* Validation Loss: 0.1993
* Train Accuracy: 0.9325
* Epoch: 4
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': 5e-05, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.0
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': 5e-05, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': 5e-05, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption -- check the repo's file list for the actual .zip name.
checkpoint = load_from_hub(repo_id="Kozilek/PPO-MlpPolicy_LunarLanderV2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "271.23 +/- 18.10", "name": "mean_reward", "verified": false}]}]}]} | Kozilek/PPO-MlpPolicy_LunarLanderV2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T06:57:09+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | null |
# DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF
This model was converted to GGUF format from [`TeeZee/GALAXY-16B-v1.0`](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF --model galaxy-16b-v1.0.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF --model galaxy-16b-v1.0.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m galaxy-16b-v1.0.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]} | DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF | null | [
"gguf",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:Open-Orca/SlimOrca",
"dataset:MinervaAI/Aesir-Preview",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T06:57:25+00:00 | [] | [
"en"
] | TAGS
#gguf #not-for-all-audiences #llama-cpp #gguf-my-repo #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #region-us
|
# DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF
This model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF\nThis model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #llama-cpp #gguf-my-repo #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #region-us \n",
"# DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF\nThis model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
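
The author has not filled this in yet; until then, a generic 🤗 loading sketch (the repo name suggests an adapter checkpoint, in which case PEFT-style loading on a base model would be needed instead):

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "HenryCai1129/adapter-toxic2nontoxic-100-50"
# If the repo contains only adapter weights, this direct load will fail;
# use peft.PeftModel.from_pretrained on the base model in that case.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```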
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-toxic2nontoxic-100-50 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T06:58:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
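
Once training completes (or a resumed run finishes), the results can be pushed back to the Hub. A sketch with the `mlagents-push-to-hf` helper; the local results path and commit message are assumptions:

```bash
mlagents-push-to-hf --run-id="Huggy" --local-dir="./results/Huggy" --repo-id="flashus/ppo-Huggy" --commit-message="Trained Huggy agent"
```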
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: flashus/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | flashus/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-04-24T06:59:04+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: flashus/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: flashus/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: flashus/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
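
The card leaves this blank; a minimal sketch for a standard 🤗 causal LM checkpoint (the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "juewang/Meta-Llama-3-2B-mlp-layer-pruned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The key idea of layer pruning is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```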
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | juewang/Meta-Llama-3-2B-mlp-layer-pruned | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:00:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
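
For readers reproducing the run, the listed hyperparameters map roughly onto 🤗 `TrainingArguments` as below; the output directory and the exact Trainer wiring are assumptions, since the card does not include the training script. The Adam betas and epsilon above match the `TrainingArguments` defaults.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="polizze-donut",   # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                    # "Native AMP" mixed precision
)
```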
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs", "results": []}]} | tedad09/PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:00:32+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | null |
# DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from [`TeeZee/GALAXY-16B-v1.0`](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF --model galaxy-16b-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF --model galaxy-16b-v1.0.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m galaxy-16b-v1.0.Q8_0.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]} | DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF | null | [
"gguf",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:Open-Orca/SlimOrca",
"dataset:MinervaAI/Aesir-Preview",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T07:01:32+00:00 | [] | [
"en"
] | TAGS
#gguf #not-for-all-audiences #llama-cpp #gguf-my-repo #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #region-us
|
# DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF\nThis model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #llama-cpp #gguf-my-repo #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #region-us \n",
"# DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF\nThis model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
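
The card leaves this blank; given the declared base model `Trelis/Llama-2-7b-chat-hf-sharded-bf16`, a minimal PEFT loading sketch would look like this (the dtype choice is an assumption):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "Vibhav1612/LlamaQuantized")
```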
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "Trelis/Llama-2-7b-chat-hf-sharded-bf16"} | Vibhav1612/LlamaQuantized | null | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-04-24T07:03:32+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Trelis/Llama-2-7b-chat-hf-sharded-bf16 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Trelis/Llama-2-7b-chat-hf-sharded-bf16 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flant5-offensive-multilingual
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Precision: 0.6875
- Recall: 0.6040
- F1: 0.6430
- Total Predictions: 3532
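
For reference, F1 here is the harmonic mean of precision and recall, F1 = 2PR / (P + R): with P = 0.6875 and R = 0.6040 this gives 2 × 0.6875 × 0.6040 / (0.6875 + 0.6040) ≈ 0.6430, matching the reported score.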
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Total Predictions |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:-----------------:|
| 0.2343 | 1.0 | 3753 | 0.0011 | 0.5924 | 0.6481 | 0.6190 | 3532 |
| 0.0008 | 2.0 | 7506 | 0.0010 | 0.6903 | 0.5416 | 0.6070 | 3532 |
| 0.0006 | 3.0 | 11259 | 0.0011 | 0.6012 | 0.7238 | 0.6569 | 3532 |
| 0.0005 | 4.0 | 15012 | 0.0011 | 0.6882 | 0.5765 | 0.6274 | 3532 |
| 0.0004 | 5.0 | 18765 | 0.0012 | 0.6875 | 0.6040 | 0.6430 | 3532 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "base_model": "google/flan-t5-base", "model-index": [{"name": "Flant5-offensive-multilingual", "results": []}]} | JenniferHJF/Flant5-offensive-multilingual | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:04:36+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Flant5-offensive-multilingual
=============================
This model is a fine-tuned version of google/flan-t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0012
* Precision: 0.6875
* Recall: 0.6040
* F1: 0.6430
* Total Predictions: 3532
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.0.0+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.0+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.0+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
# Yamshadowexperiment28Experiment26-7B
Yamshadowexperiment28Experiment26-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: automerger/YamshadowExperiment28-7B
- model: yam-peleg/Experiment26-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
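
To reproduce the merge locally, the configuration above can be fed to the mergekit CLI; a sketch, noting that the command shape may differ slightly across mergekit versions:

```bash
pip install mergekit
mergekit-yaml config.yaml ./Yamshadowexperiment28Experiment26-7B --cuda
```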
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Yamshadowexperiment28Experiment26-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Yamshadowexperiment28Experiment26-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T07:04:54+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# Yamshadowexperiment28Experiment26-7B
Yamshadowexperiment28Experiment26-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# Yamshadowexperiment28Experiment26-7B\n\nYamshadowexperiment28Experiment26-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# Yamshadowexperiment28Experiment26-7B\n\nYamshadowexperiment28Experiment26-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
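
The card leaves this blank; given the `4-bit` tag, a minimal loading sketch assuming a bitsandbytes-quantized checkpoint (requires `bitsandbytes`, `accelerate`, and a CUDA GPU):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Akirami/truthy-llama3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Pre-quantized 4-bit weights load directly; device_map places them on GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```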
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"license": "apache-2.0", "library_name": "transformers"} | Akirami/truthy-llama3-8b | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T07:07:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`TeeZee/GALAXY-16B-v1.0`](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF --model galaxy-16b-v1.0.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF --model galaxy-16b-v1.0.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m galaxy-16b-v1.0.Q4_K_M.gguf -n 128
```
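If you would rather call the model from Python, the same GGUF file works with the `llama-cpp-python` bindings. A minimal, hedged sketch (it assumes you have downloaded `galaxy-16b-v1.0.Q4_K_M.gguf` from this repo and installed the bindings with `pip install llama-cpp-python`):
```python
# Minimal sketch using llama-cpp-python; model_path is assumed to point
# at the Q4_K_M file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="galaxy-16b-v1.0.Q4_K_M.gguf",
    n_ctx=2048,  # same context size as the llama-server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```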
| {"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]} | DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF | null | [
"gguf",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:Open-Orca/SlimOrca",
"dataset:MinervaAI/Aesir-Preview",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T07:07:06+00:00 | [] | [
"en"
] | TAGS
#gguf #not-for-all-audiences #llama-cpp #gguf-my-repo #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #region-us
|
# DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #llama-cpp #gguf-my-repo #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #region-us \n",
"# DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/GALAXY-16B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null | # SecGPT: Cybersecurity Large Language Model
### **Project**
- [GitHub](https://github.com/Clouditera/SecGPT)
- [Original PyTorch model](https://huggingface.co/clouditera/secgpt)
### **Introduction**
- With the rise of large language models, cybersecurity-focused models have been drawing attention as well. While browsing GitHub, I came across SecGPT, open-sourced by Clouditera (云起无垠). However, the official inference script relies on CUDA and no GGUF version was provided, so I converted the model with the convert script from [llama.cpp](https://github.com/ggerganov/llama.cpp) and uploaded it to Hugging Face.
### **Test Hardware**
- 16-inch MacBook Pro
- M3 Max
- 48 GB
### **Usage**
- Two versions are provided: `secgpt.gguf` and `secgpt-mini.gguf`
	- `secgpt.gguf` requires 26.5 GB of VRAM
	- `secgpt-mini.gguf` requires 1.6 GB of VRAM
- How to use
	- Import the GGUF into [LM Studio](https://lmstudio.ai/) and use `secgpt-all.json` as the parameter configuration | {"language": ["zh"], "license": "apache-2.0", "tags": ["cybersecurity"]} | LingJingMaster/Clouditera-SecGPT-GGUF | null | [
"gguf",
"cybersecurity",
"zh",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T07:07:34+00:00 | [] | [
"zh"
] | TAGS
#gguf #cybersecurity #zh #license-apache-2.0 #region-us
| # SecGPT: Cybersecurity Large Language Model
### Project
- GitHub
- Original PyTorch model
### Introduction
- With the rise of large language models, cybersecurity-focused models have been drawing attention as well. While browsing GitHub, I came across SecGPT, open-sourced by Clouditera (云起无垠). However, the official inference script relies on CUDA and no GGUF version was provided, so I converted the model with the convert script from URL and uploaded it to Hugging Face
### Test Hardware
- 16-inch MacBook Pro
- M3 Max
- 48 GB
### Usage
- Two versions are provided: 'URL' and 'URL'
	- 'URL' requires 26.5 GB of VRAM
	- 'URL' requires 1.6 GB of VRAM
- How to use
	- Import the GGUF into LM Studio and use 'URL' as the parameter configuration | [
"# SecGPT 网络安全大模型",
"### 项目\n- GitHub\n- 原版Pytorch模型",
"### 简介\n- 随着大语言模型的崛起,网安大模型也掀起了一股热潮,本人在逛 GitHub 时偶然发现了云起无垠开源的 SecGPT,但官方调用脚本中使用了 Cuda,且没有提供 GGUF 版本,故使用了 URL 的 convert 脚本进行转换,并上传至huggingface",
"### 测试设备\n- MacBook Pro 16 寸\n- M3 Max\n- 48 GB",
"### Usage\n- 分为 'URL' 与 'URL' 两个版本\n\t- 'URL' 需 26.5 G 显存\n\t- 'URL' 需 1.6 G 显存\n- 使用方法\n\t- 将 GGUF 导入LM Studio,并使用 'URL' 作为参数配置"
] | [
"TAGS\n#gguf #cybersecurity #zh #license-apache-2.0 #region-us \n",
"# SecGPT 网络安全大模型",
"### 项目\n- GitHub\n- 原版Pytorch模型",
"### 简介\n- 随着大语言模型的崛起,网安大模型也掀起了一股热潮,本人在逛 GitHub 时偶然发现了云起无垠开源的 SecGPT,但官方调用脚本中使用了 Cuda,且没有提供 GGUF 版本,故使用了 URL 的 convert 脚本进行转换,并上传至huggingface",
"### 测试设备\n- MacBook Pro 16 寸\n- M3 Max\n- 48 GB",
"### Usage\n- 分为 'URL' 与 'URL' 两个版本\n\t- 'URL' 需 26.5 G 显存\n\t- 'URL' 需 1.6 G 显存\n- 使用方法\n\t- 将 GGUF 导入LM Studio,并使用 'URL' 作为参数配置"
] |
null | null | This model is trained to recognise Indian Sign Language (ISL) using the video dataset available here -- https://zenodo.org/records/4010759 | {"language": ["en"], "license": "mit", "tags": ["art"], "metrics": ["Testing accuracy of 44%"]} | cdsteameight/ISL-SignLanguageTranslation | null | [
"art",
"en",
"license:mit",
"region:us"
] | null | 2024-04-24T07:07:40+00:00 | [] | [
"en"
] | TAGS
#art #en #license-mit #region-us
| This model is trained to recognise Indian Sign Language (ISL) using the video dataset available here -- URL | [] | [
"TAGS\n#art #en #license-mit #region-us \n"
] |
null | null |
# DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF
This model was converted to GGUF format from [`ajibawa-2023/Scarlett-Llama-3-8B`](https://huggingface.co/ajibawa-2023/Scarlett-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ajibawa-2023/Scarlett-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF --model scarlett-llama-3-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF --model scarlett-llama-3-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m scarlett-llama-3-8b.Q8_0.gguf -n 128
```
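Recent versions of `llama-cpp-python` can also pull the GGUF straight from the Hub, which avoids managing local file paths by hand. A hedged sketch (it assumes `llama-cpp-python` and `huggingface-hub` are installed):
```python
# Hedged sketch: download and load the Q8_0 GGUF directly from the Hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF",
    filename="scarlett-llama-3-8b.Q8_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```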
| {"language": ["en"], "license": "other", "tags": ["art", "philosophy", "romance", "jokes", "advice", "code", "llama-cpp", "gguf-my-repo"], "license_name": "llama3", "license_link": "LICENSE", "model-index": [{"name": "Scarlett-Llama-3-8B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.63, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.86, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.46, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 56.27}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.06, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 47.31, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}]}]} | DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF | null | [
"gguf",
"art",
"philosophy",
"romance",
"jokes",
"advice",
"code",
"llama-cpp",
"gguf-my-repo",
"en",
"license:other",
"model-index",
"region:us"
] | null | 2024-04-24T07:08:53+00:00 | [] | [
"en"
] | TAGS
#gguf #art #philosophy #romance #jokes #advice #code #llama-cpp #gguf-my-repo #en #license-other #model-index #region-us
|
# DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF
This model was converted to GGUF format from 'ajibawa-2023/Scarlett-Llama-3-8B' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'ajibawa-2023/Scarlett-Llama-3-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #art #philosophy #romance #jokes #advice #code #llama-cpp #gguf-my-repo #en #license-other #model-index #region-us \n",
"# DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'ajibawa-2023/Scarlett-Llama-3-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
## Model Card for Model ID
French-Alpaca based on microsoft/Phi-3-mini-4k-instruct
4k is the context length (in tokens)

### Model Description
Fine-tuned from the original French-Alpaca-dataset, generated entirely with OpenAI GPT-3.5-turbo.
French-Alpaca is a general model and can itself be fine-tuned for specific use cases.
The fine-tuning method is inspired by https://crfm.stanford.edu/2023/03/13/alpaca.html
Quantized GGUF version: coming soon
### Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("jpacifico/French-Alpaca-Phi-3-mini-4k-instruct-v1.0")
messages = [
{"role": "system", "content": "Vous êtes un assistant numérique serviable. Veuillez fournir des informations sûres, éthiques et précises à l'utilisateur."},
{"role": "user", "content": "Pouvez-vous fournir des façons de manger des combinaisons de bananes et de fruits du dragon ?"},
{"role": "assistant", "content": "Bien sûr ! Voici quelques façons de manger des bananes et des fruits du dragon ensemble : 1. Smoothie à la banane et au fruit du dragon : mixez des bananes et des fruits du dragon avec un peu de lait et de miel. 2. Salade de banane et de fruit du dragon : mélangez des bananes tranchées et des fruits du dragon avec un peu de jus de citron et de miel."},
{"role": "user", "content": "Et pour résoudre une équation 2x + 3 = 7 ?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
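Note that the snippet above instantiates the base `microsoft/Phi-3-mini-4k-instruct` weights and takes only the tokenizer from this repository. To run the fine-tuned checkpoint itself, loading the model from this repo should work the same way; a hedged sketch, assuming the repository ships full model weights rather than a LoRA adapter:
```python
# Hedged sketch: load the fine-tuned French-Alpaca weights directly.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "jpacifico/French-Alpaca-Phi-3-mini-4k-instruct-v1.0",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
```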
### Limitations
The French-Alpaca model is a quick demonstration that a 3B base model can be easily fine-tuned to specialize in a particular language.
It does not have any moderation mechanisms.
- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM
- **Language(s) (NLP):** French
- **License:** MIT | {"language": ["fr", "en"], "license": "mit", "library_name": "transformers", "tags": ["Phi-3", "french", "Phi-3-mini", "french-alpaca"], "datasets": ["jpacifico/French-Alpaca-dataset-Instruct-110K"]} | jpacifico/French-Alpaca-Phi-3-mini-4k-instruct-v1.0 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"Phi-3",
"french",
"Phi-3-mini",
"french-alpaca",
"conversational",
"custom_code",
"fr",
"en",
"dataset:jpacifico/French-Alpaca-dataset-Instruct-110K",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:08:54+00:00 | [] | [
"fr",
"en"
] | TAGS
#transformers #safetensors #phi3 #text-generation #Phi-3 #french #Phi-3-mini #french-alpaca #conversational #custom_code #fr #en #dataset-jpacifico/French-Alpaca-dataset-Instruct-110K #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## Model Card for Model ID
French-Alpaca based on microsoft/Phi-3-mini-4k-instruct
4k is the context length (in tokens)
!image/jpeg
### Model Description
Fine-tuned from the original French-Alpaca-dataset, generated entirely with OpenAI GPT-3.5-turbo.
French-Alpaca is a general model and can itself be fine-tuned for specific use cases.
The fine-tuning method is inspired by URL
Quantized GGUF version: coming soon
### Usage
### Limitations
The French-Alpaca model is a quick demonstration that a 3B base model can be easily fine-tuned to specialize in a particular language.
It does not have any moderation mechanisms.
- Developed by: Jonathan Pacifico, 2024
- Model type: LLM
- Language(s) (NLP): French
- License: MIT | [
"## Model Card for Model ID\n\nFrench-Alpaca based on microsoft/Phi-3-mini-4k-instruct \n4k is the context length (in tokens)\n\n!image/jpeg",
"### Model Description\n\nfine-tuned from the original French-Alpaca-dataset entirely generated with OpenAI GPT-3.5-turbo. \nFrench-Alpaca is a general model and can itself be finetuned to be specialized for specific use cases. \n\nThe fine-tuning method is inspired from URL\n\nQuantized GGUF version : coming soon",
"### Usage",
"### Limitations\n\nThe French-Alpaca model is a quick demonstration that a 3B base model can be easily fine-tuned to specialize in a particular language.\nIt does not have any moderation mechanisms.\n\n- Developed by: Jonathan Pacifico, 2024\n- Model type: LLM \n- Language(s) (NLP): French\n- License: MIT"
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #Phi-3 #french #Phi-3-mini #french-alpaca #conversational #custom_code #fr #en #dataset-jpacifico/French-Alpaca-dataset-Instruct-110K #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## Model Card for Model ID\n\nFrench-Alpaca based on microsoft/Phi-3-mini-4k-instruct \n4k is the context length (in tokens)\n\n!image/jpeg",
"### Model Description\n\nfine-tuned from the original French-Alpaca-dataset entirely generated with OpenAI GPT-3.5-turbo. \nFrench-Alpaca is a general model and can itself be finetuned to be specialized for specific use cases. \n\nThe fine-tuning method is inspired from URL\n\nQuantized GGUF version : coming soon",
"### Usage",
"### Limitations\n\nThe French-Alpaca model is a quick demonstration that a 3B base model can be easily fine-tuned to specialize in a particular language.\nIt does not have any moderation mechanisms.\n\n- Developed by: Jonathan Pacifico, 2024\n- Model type: LLM \n- Language(s) (NLP): French\n- License: MIT"
] |
null | null |
# DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF
This model was converted to GGUF format from [`ajibawa-2023/Young-Children-Storyteller-Mistral-7B`](https://huggingface.co/ajibawa-2023/Young-Children-Storyteller-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ajibawa-2023/Young-Children-Storyteller-Mistral-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF --model young-children-storyteller-mistral-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF --model young-children-storyteller-mistral-7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m young-children-storyteller-mistral-7b.Q6_K.gguf -n 128
```
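For chat-style prompting from Python, the GGUF also works with the `llama-cpp-python` chat API; a hedged sketch (it assumes the chat template embedded in the GGUF metadata is picked up automatically):
```python
# Hedged sketch: story generation through the chat-completion API.
from llama_cpp import Llama

llm = Llama(
    model_path="young-children-storyteller-mistral-7b.Q6_K.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell a short bedtime story about a brave little snail."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```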
| {"language": ["en"], "license": "apache-2.0", "tags": ["story", "young children", "educational", "knowledge", "llama-cpp", "gguf-my-repo"], "datasets": ["ajibawa-2023/Children-Stories-Collection"], "model-index": [{"name": "Young-Children-Storyteller-Mistral-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 68.69, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.67, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.11, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 62.62}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 81.22, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 65.2, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}]}]} | DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF | null | [
"gguf",
"story",
"young children",
"educational",
"knowledge",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:ajibawa-2023/Children-Stories-Collection",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-04-24T07:10:09+00:00 | [] | [
"en"
] | TAGS
#gguf #story #young children #educational #knowledge #llama-cpp #gguf-my-repo #en #dataset-ajibawa-2023/Children-Stories-Collection #license-apache-2.0 #model-index #region-us
|
# DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF
This model was converted to GGUF format from 'ajibawa-2023/Young-Children-Storyteller-Mistral-7B' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ajibawa-2023/Young-Children-Storyteller-Mistral-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #story #young children #educational #knowledge #llama-cpp #gguf-my-repo #en #dataset-ajibawa-2023/Children-Stories-Collection #license-apache-2.0 #model-index #region-us \n",
"# DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ajibawa-2023/Young-Children-Storyteller-Mistral-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lnmt
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7972
- Accuracy: {'accuracy': 0.6208813838550247}
- F1 Macro: {'f1': 0.3506606197441491}
- F1 Weighted: {'f1': 0.6062668131729496}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
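As a reading aid, the list above maps directly onto `transformers.TrainingArguments`; a hedged sketch (the output directory name is an assumption, and the dataset loading and `Trainer` wiring are omitted):

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lnmt",               # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",      # the Adam betas/epsilon above are the defaults
    num_train_epochs=10,
)
```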
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------:|:---------------------------:|:--------------------------:|
| No log | 1.0 | 315 | 2.0101 | {'accuracy': 0.5226523887973641} | {'f1': 0.21712503989679657} | {'f1': 0.459506356775351} |
| 2.2969 | 2.0 | 630 | 1.6716 | {'accuracy': 0.5963756177924218} | {'f1': 0.28274236255720786} | {'f1': 0.5462732390600772} |
| 2.2969 | 3.0 | 945 | 1.5967 | {'accuracy': 0.6112026359143328} | {'f1': 0.3279242367574629} | {'f1': 0.5787485773304204} |
| 1.1815 | 4.0 | 1260 | 1.5843 | {'accuracy': 0.6202635914332785} | {'f1': 0.3402580752236545} | {'f1': 0.5918094876585247} |
| 0.7089 | 5.0 | 1575 | 1.6031 | {'accuracy': 0.6219110378912686} | {'f1': 0.3471078372421453} | {'f1': 0.5941366500585097} |
| 0.7089 | 6.0 | 1890 | 1.6876 | {'accuracy': 0.6149093904448105} | {'f1': 0.35129077551349414} | {'f1': 0.5935341462382293} |
| 0.4532 | 7.0 | 2205 | 1.7093 | {'accuracy': 0.6208813838550247} | {'f1': 0.35300405317763817} | {'f1': 0.6021058143955713} |
| 0.3178 | 8.0 | 2520 | 1.7752 | {'accuracy': 0.6138797364085667} | {'f1': 0.35479307050001907} | {'f1': 0.5998441386303183} |
| 0.3178 | 9.0 | 2835 | 1.7888 | {'accuracy': 0.6188220757825371} | {'f1': 0.3553222770673821} | {'f1': 0.6033599756075638} |
| 0.2417 | 10.0 | 3150 | 1.7972 | {'accuracy': 0.6208813838550247} | {'f1': 0.3506606197441491} | {'f1': 0.6062668131729496} |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "lnmt", "results": []}]} | carmenlozano/lnmt | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:10:28+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| lnmt
====
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7972
* Accuracy: {'accuracy': 0.6208813838550247}
* F1 Macro: {'f1': 0.3506606197441491}
* F1 Weighted: {'f1': 0.6062668131729496}
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# OpenLLaMA: An Open Reproduction of LLaMA
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so there is no longer any need to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
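A quick way to confirm that prompts start with BOS when using the Hugging Face tokenizer (a hedged check; `LlamaTokenizer` prepends BOS by default, so this mostly guards against custom tokenization code):
```python
# Hedged sketch: verify that token id 1 (BOS) leads the encoded prompt.
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained('openlm-research/open_llama_3b')
ids = tokenizer('Q: What is the largest animal?\nA:').input_ids
assert ids[0] == tokenizer.bos_token_id == 1  # prepended automatically by default
```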
## Dataset and Training
We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall, we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = {May},
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = {April},
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
| {"license": "apache-2.0", "datasets": ["togethercomputer/RedPajama-Data-1T"]} | titanbot/ct2-int8-open-llama-7b | null | [
"transformers",
"llama",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:11:49+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #dataset-togethercomputer/RedPajama-Data-1T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| OpenLLaMA: An Open Reproduction of LLaMA
========================================
In this repo, we present a permissively licensed open source reproduction of Meta AI's LLaMA large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the project homepage of OpenLLaMA for more details.
Weights Release, License and Usage
----------------------------------
We release the weights in two formats: an EasyLM format to be used with our EasyLM framework, and a PyTorch format to be used with the Hugging Face transformers library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations. This can be achieved by directly using the 'LlamaTokenizer' class, or passing in the 'use\_fast=False' option for the 'AutoTokenizer' class. See the following example for usage.
For more advanced usage, please follow the transformers LLaMA documentation.
### Evaluating with LM-Eval-Harness
The model can be evaluated with lm-eval-harness. However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in 'use\_fast=False' to this part of lm-eval-harness, as shown in the example below:
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the LLaMA documentation of EasyLM. Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so there is no longer any need to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
Dataset and Training
--------------------
We train our models on the RedPajama dataset released by Together, which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using EasyLM, a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and fully sharded data parallelism (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall, we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
Evaluation
----------
We evaluated OpenLLaMA on a wide range of tasks using lm-evaluation-harness. The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in this issue of lm-evaluation-harness. Additionally, we present the results of GPT-J, a 6B parameter model trained on the Pile dataset by EleutherAI.
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
Contact
-------
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
Xinyang Geng\* and Hao Liu\* from Berkeley AI Research.
\*Equal Contribution
Acknowledgment
--------------
We thank the Google TPU Research Cloud program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with Stability AI, and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
Reference
---------
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
| [
"### Loading the Weights with Hugging Face Transformers\n\n\nPreview checkpoints can be directly loaded from Hugging Face Hub. Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations. This can be achieved by directly using the 'LlamaTokenizer' class, or passing in the 'use\\_fast=False' option for the 'AutoTokenizer' class. See the following example for usage.\n\n\nFor more advanced usage, please follow the transformers LLaMA documentation.",
"### Evaluating with LM-Eval-Harness\n\n\nThe model can be evaluated with lm-eval-harness. However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in 'use\\_fast=False' to this part of lm-eval-harness, as shown in the example below:",
"### Loading the Weights with EasyLM\n\n\nFor using the weights in our EasyLM framework, please refer to the LLaMA documentation of EasyLM. Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights. Note that we use BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.\n\n\nDataset and Training\n--------------------\n\n\nWe train our models on the RedPajama dataset released by Together, which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.\n\n\nWe train the models on cloud TPU-v4s using EasyLM, a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and fully sharded data parallelism (also know as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.\n\n\nEvaluation\n----------\n\n\nWe evaluated OpenLLaMA on a wide range of tasks using lm-evaluation-harness. The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in this issue of lm-evaluation-harness. Additionally, we present the results of GPT-J, a 6B parameter model trained on the Pile dataset by EleutherAI.\n\n\nThe original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.\n\n\n\nWe removed the task CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.\n\n\nContact\n-------\n\n\nWe would love to get feedback from the community. If you have any questions, please open an issue or contact us.\n\n\nOpenLLaMA is developed by:\nXinyang Geng\\* and Hao Liu\\* from Berkeley AI Research.\n\\*Equal Contribution\n\n\nAcknowledgment\n--------------\n\n\nWe thank the Google TPU Research Cloud program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.\n\n\nThe OpenLLaMA 13B model is trained in collaboration with Stability AI, and we thank Stability AI for providing the computation resources. 
We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.\n\n\nReference\n---------\n\n\nIf you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:"
] | [
"TAGS\n#transformers #llama #text-generation #dataset-togethercomputer/RedPajama-Data-1T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Loading the Weights with Hugging Face Transformers\n\n\nPreview checkpoints can be directly loaded from Hugging Face Hub. Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations. This can be achieved by directly using the 'LlamaTokenizer' class, or passing in the 'use\\_fast=False' option for the 'AutoTokenizer' class. See the following example for usage.\n\n\nFor more advanced usage, please follow the transformers LLaMA documentation.",
"### Evaluating with LM-Eval-Harness\n\n\nThe model can be evaluated with lm-eval-harness. However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in 'use\\_fast=False' to this part of lm-eval-harness, as shown in the example below:",
"### Loading the Weights with EasyLM\n\n\nFor using the weights in our EasyLM framework, please refer to the LLaMA documentation of EasyLM. Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights. Note that we use BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.\n\n\nDataset and Training\n--------------------\n\n\nWe train our models on the RedPajama dataset released by Together, which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.\n\n\nWe train the models on cloud TPU-v4s using EasyLM, a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and fully sharded data parallelism (also know as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.\n\n\nEvaluation\n----------\n\n\nWe evaluated OpenLLaMA on a wide range of tasks using lm-evaluation-harness. The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in this issue of lm-evaluation-harness. Additionally, we present the results of GPT-J, a 6B parameter model trained on the Pile dataset by EleutherAI.\n\n\nThe original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.\n\n\n\nWe removed the task CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.\n\n\nContact\n-------\n\n\nWe would love to get feedback from the community. If you have any questions, please open an issue or contact us.\n\n\nOpenLLaMA is developed by:\nXinyang Geng\\* and Hao Liu\\* from Berkeley AI Research.\n\\*Equal Contribution\n\n\nAcknowledgment\n--------------\n\n\nWe thank the Google TPU Research Cloud program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.\n\n\nThe OpenLLaMA 13B model is trained in collaboration with Stability AI, and we thank Stability AI for providing the computation resources. 
We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.\n\n\nReference\n---------\n\n\nIf you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/KnutJaegersberg/Llama3-Deita-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
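For a quick local test, here is a minimal llama-cpp-python sketch (not part of the original card; the file name is taken from the quant table below and the prompt is arbitrary):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the single-file quants listed below; Q4_K_M is the
# "fast, recommended" middle ground.
path = hf_hub_download(
    "mradermacher/Llama3-Deita-8b-GGUF",
    "Llama3-Deita-8b.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What does Q4_K_M mean?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```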
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "base_model": "KnutJaegersberg/Llama3-Deita-8b", "quantized_by": "mradermacher"} | mradermacher/Llama3-Deita-8b-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:KnutJaegersberg/Llama3-Deita-8b",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:13:51+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-KnutJaegersberg/Llama3-Deita-8b #license-llama3 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-KnutJaegersberg/Llama3-Deita-8b #license-llama3 #endpoints_compatible #region-us \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ValiantLabs/Llama3-70B-ShiningValiant2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
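As a hedged sketch (not from the original card), the two-part quants in the table below can be reassembled in Python — the equivalent of `cat part1 part2 > file.gguf`:

```python
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Llama3-70B-ShiningValiant2-GGUF"
# File names taken from the Q6_K row of the table below.
parts = [
    hf_hub_download(repo, f"Llama3-70B-ShiningValiant2.Q6_K.gguf.part{i}of2")
    for i in (1, 2)
]
with open("Llama3-70B-ShiningValiant2.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append the parts in order
```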
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["shining-valiant", "shining-valiant-2", "valiant", "valiant-labs", "llama", "llama-3", "llama-3-instruct", "llama-3-instruct-70b", "70b", "conversational", "chat", "instruct"], "base_model": "ValiantLabs/Llama3-70B-ShiningValiant2", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct/blob/main/LICENSE", "license_name": "llama3", "model_type": "llama", "quantized_by": "mradermacher"} | mradermacher/Llama3-70B-ShiningValiant2-GGUF | null | [
"transformers",
"gguf",
"shining-valiant",
"shining-valiant-2",
"valiant",
"valiant-labs",
"llama",
"llama-3",
"llama-3-instruct",
"llama-3-instruct-70b",
"70b",
"conversational",
"chat",
"instruct",
"en",
"base_model:ValiantLabs/Llama3-70B-ShiningValiant2",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:14:00+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #shining-valiant #shining-valiant-2 #valiant #valiant-labs #llama #llama-3 #llama-3-instruct #llama-3-instruct-70b #70b #conversational #chat #instruct #en #base_model-ValiantLabs/Llama3-70B-ShiningValiant2 #license-other #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #shining-valiant #shining-valiant-2 #valiant #valiant-labs #llama #llama-3 #llama-3-instruct #llama-3-instruct-70b #70b #conversational #chat #instruct #en #base_model-ValiantLabs/Llama3-70B-ShiningValiant2 #license-other #endpoints_compatible #region-us \n"
] |
null | transformers |
# Uploaded model
- **Developed by:** Tina2088
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Tina2088/lora_model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:15:41+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Tina2088
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Tina2088\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Tina2088\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint file name inside the repo is assumed):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy.
checkpoint = load_from_hub("nikola13/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "276.45 +/- 20.44", "name": "mean_reward", "verified": false}]}]}]} | nikola13/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T07:16:22+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
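The hyperparameters above map directly onto `transformers.TrainingArguments`; a minimal sketch (the `output_dir` is a placeholder, and the Adam settings listed are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=2,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults.
)
```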
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:18:09+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2
This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v2',
'total_timesteps': 50000,
'learning_rate': 0.00025,
'num_envs': 4,
'num_steps': 128,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'jiaqianwu/ppo-CartPole-v1',
'batch_size': 512,
'minibatch_size': 128}
```
| {"tags": ["LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-160.20 +/- 91.90", "name": "mean_reward", "verified": false}]}]}]} | jiaqianwu/ppo-CartPole-v1 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | null | 2024-04-24T07:18:49+00:00 | [] | [] | TAGS
#tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us
|
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
| [
"# PPO Agent Playing LunarLander-v2\n\n This is a trained model of a PPO agent playing LunarLander-v2.\n \n # Hyperparameters"
] | [
"TAGS\n#tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us \n",
"# PPO Agent Playing LunarLander-v2\n\n This is a trained model of a PPO agent playing LunarLander-v2.\n \n # Hyperparameters"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_synDB_big
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
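For reference, a minimal sketch of how these settings map onto `transformers` training arguments (`output_dir` is a placeholder; `fp16=True` is assumed for "Native AMP"):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="donut_synDB_big",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,  # effective train batch size: 4 * 3 = 12
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,                      # "Native AMP" mixed precision
)
```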
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3353 | 0.92 | 60 | 0.1525 |
| 0.1389 | 1.38 | 90 | 0.0705 |
| 0.1055 | 1.85 | 120 | 0.0595 |
| 0.0701 | 2.31 | 150 | 0.0727 |
| 0.0547 | 2.77 | 180 | 0.0750 |
| 0.0454 | 3.23 | 210 | 0.0714 |
| 0.0371 | 3.69 | 240 | 0.0609 |
| 0.0332 | 4.15 | 270 | 0.0629 |
| 0.0269 | 4.62 | 300 | 0.0583 |
| 0.0233 | 5.08 | 330 | 0.0601 |
| 0.0219 | 5.54 | 360 | 0.0576 |
| 0.0227 | 6.0 | 390 | 0.0569 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut_synDB_big", "results": []}]} | Donut01/donut_synDB_big | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:18:52+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
| donut\_synDB\_big
=================
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0569
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 3
* total\_train\_batch\_size: 12
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 6
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
# PLLaVA Model Card
## Model details
**Model type:**
PLLaVA-13B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-13b-hf
**Model date:**
PLLaVA-13B was trained in April 2024.
**Paper or resources for more information:**
- github repo: https://github.com/magic-research/PLLaVA
- project page: https://pllava.github.io/
- paper link: https://arxiv.org/abs/2404.16994
## License
llava-hf/llava-v1.6-vicuna-13b-hf license.
**Where to send questions or comments about the model:**
https://github.com/magic-research/PLLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of PLLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
Video-Instruct-Tuning data of OpenGVLab/VideoChat2-IT
## Evaluation dataset
A collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmark specifically proposed for Video-LMMs.
| {"license": "apache-2.0", "tags": ["video LLM"], "datasets": ["OpenGVLab/VideoChat2-IT"]} | ermu2001/pllava-13b | null | [
"transformers",
"safetensors",
"llava",
"text2text-generation",
"video LLM",
"dataset:OpenGVLab/VideoChat2-IT",
"arxiv:2404.16994",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-04-24T07:19:04+00:00 | [
"2404.16994"
] | [] | TAGS
#transformers #safetensors #llava #text2text-generation #video LLM #dataset-OpenGVLab/VideoChat2-IT #arxiv-2404.16994 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us #has_space
|
# PLLaVA Model Card
## Model details
Model type:
PLLaVA-13B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-13b-hf
Model date:
PLLaVA-13B was trained in April 2024.
Paper or resources for more information:
- github repo: URL
- project page: URL
- paper link: URL
## License
llava-hf/llava-v1.6-vicuna-13b-hf license.
Where to send questions or comments about the model:
URL
## Intended use
Primary intended uses:
The primary use of PLLaVA is research on large multimodal models and chatbots.
Primary intended users:
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
Video-Instruct-Tuning data of OpenGVLab/VideoChat2-IT
## Evaluation dataset
A collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmark specifically proposed for Video-LMMs.
| [
"# PLLaVA Model Card",
"## Model details\nModel type: \nPLLaVA-13B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-13b-hf\n\nModel date:\nPLLaVA-13B was trained in April 2024.\n\nPaper or resources for more information:\n- github repo: URL\n- project page: URL\n- paper link: URL",
"## License\nllava-hf/llava-v1.6-vicuna-13b-hf license.\n\nWhere to send questions or comments about the model:\nURL",
"## Intended use\nPrimary intended uses:\nThe primary use of PLLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## Training dataset\nVideo-Instruct-Tuning data of OpenGVLab/VideoChat2-IT",
"## Evaluation dataset\nA collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmarks specifically proposed for Video-LMMs."
] | [
"TAGS\n#transformers #safetensors #llava #text2text-generation #video LLM #dataset-OpenGVLab/VideoChat2-IT #arxiv-2404.16994 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us #has_space \n",
"# PLLaVA Model Card",
"## Model details\nModel type: \nPLLaVA-13B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-13b-hf\n\nModel date:\nPLLaVA-13B was trained in April 2024.\n\nPaper or resources for more information:\n- github repo: URL\n- project page: URL\n- paper link: URL",
"## License\nllava-hf/llava-v1.6-vicuna-13b-hf license.\n\nWhere to send questions or comments about the model:\nURL",
"## Intended use\nPrimary intended uses:\nThe primary use of PLLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## Training dataset\nVideo-Instruct-Tuning data of OpenGVLab/VideoChat2-IT",
"## Evaluation dataset\nA collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmarks specifically proposed for Video-LMMs."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
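Pending author-provided code, a minimal hedged sketch (this assumes the repo hosts standard causal-LM weights; nothing here is confirmed by the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chcho/OrpoLlama-3-8B")
model = AutoModelForCausalLM.from_pretrained("chcho/OrpoLlama-3-8B", torch_dtype="auto")

inputs = tokenizer("Hello!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```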
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | chcho/OrpoLlama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:19:31+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** akbargherbal
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | akbargherbal/think_tanks_v02_16bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:20:35+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: akbargherbal
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
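In the absence of author-provided code, a minimal hedged sketch for this DeBERTa-v2 classifier (label names are unknown, so only the argmax index is printed):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kangXn/engu-sb-mde")
model = AutoModelForSequenceClassification.from_pretrained("kangXn/engu-sb-mde")

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```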
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kangXn/engu-sb-mde | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:20:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Andrei481/Mistral-7B-Instruct-v0.2-hakurei-ro | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:20:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sourav10/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5831
- Validation Loss: 1.7498
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
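For readers who want to reproduce this schedule, the serialized optimizer above corresponds roughly to the following Keras objects (a sketch, not the original training script):

```python
import tensorflow as tf

# Sketch reconstructing the serialized config above: plain Adam with a
# PolynomialDecay schedule from 2e-05 to 0.0 over 500 steps (power=1.0,
# i.e. linear decay).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=500,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```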
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4181 | 2.0762 | 0 |
| 1.8471 | 1.7498 | 1 |
| 1.5831 | 1.7498 | 2 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "sourav10/my_awesome_qa_model", "results": []}]} | sourav10/my_awesome_qa_model | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:20:43+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| sourav10/my\_awesome\_qa\_model
===============================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 1.5831
* Validation Loss: 1.7498
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': False, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 500, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.0
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': False, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': False, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs", "results": []}]} | tedad09/PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:20:45+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
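For orientation, these values map onto the standard Hugging Face `TrainingArguments` as sketched below; the `output_dir` is a placeholder, the original trainer script is not shown in this card, and "Native AMP" corresponds to `fp16=True`:

```python
from transformers import TrainingArguments

# Sketch mapping the hyperparameters above onto TrainingArguments.
args = TrainingArguments(
    output_dir="PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
)
```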
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers | UniMERNet: A Universal Network for Mathematical Expression Recognition in Real-World Scenarios.
Visit our GitHub repository at [unimernet](https://github.com/opendatalab/unimernet) for more information. | {"license": "apache-2.0"} | wanderkid/unimernet | null | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:22:04+00:00 | [] | [] | TAGS
#transformers #pytorch #vision-encoder-decoder #license-apache-2.0 #endpoints_compatible #region-us
| UniMERNet: A Universal Network for Mathematical Expression Recognition in Real-World Scenarios.
Visit our GitHub repository at unimernet for more information. | [] | [
"TAGS\n#transformers #pytorch #vision-encoder-decoder #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
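For intuition, SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line. Below is a minimal PyTorch sketch of the idea on flattened tensors; it is an illustration only, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two flattened weight tensors."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    # Angle between the two tensors, clamped for numerical safety
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```

In the configuration below, the per-filter `t` schedules apply different interpolation weights to the self-attention and MLP sub-modules, layer by layer.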
### Models Merged
The following models were included in the merge:
* [tensorplex-labs/pretraining-sn9-7B-5](https://huggingface.co/tensorplex-labs/pretraining-sn9-7B-5)
* [tensorplex-labs/pretraining-sn9-7B-2](https://huggingface.co/tensorplex-labs/pretraining-sn9-7B-2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: tensorplex-labs/pretraining-sn9-7B-2
layer_range: [0, 30]
- model: tensorplex-labs/pretraining-sn9-7B-5
layer_range: [0, 30]
merge_method: slerp
base_model: tensorplex-labs/pretraining-sn9-7B-5
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.85
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["tensorplex-labs/pretraining-sn9-7B-5", "tensorplex-labs/pretraining-sn9-7B-2"]} | Sumail/zhun04 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:tensorplex-labs/pretraining-sn9-7B-5",
"base_model:tensorplex-labs/pretraining-sn9-7B-2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:24:37+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-tensorplex-labs/pretraining-sn9-7B-5 #base_model-tensorplex-labs/pretraining-sn9-7B-2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* tensorplex-labs/pretraining-sn9-7B-5
* tensorplex-labs/pretraining-sn9-7B-2
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* tensorplex-labs/pretraining-sn9-7B-5\n* tensorplex-labs/pretraining-sn9-7B-2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-tensorplex-labs/pretraining-sn9-7B-5 #base_model-tensorplex-labs/pretraining-sn9-7B-2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* tensorplex-labs/pretraining-sn9-7B-5\n* tensorplex-labs/pretraining-sn9-7B-2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Models Merged
The following models were included in the merge:
* [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)
* [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1)
## Benchmark results
### 1. MT-Bench from lmsys
We adapted the code from [FastChat](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) to benchmark our model with GPT-4 as a judge. Here is the result
```
########## First turn ##########
                            score
model               turn
wizard-zephyr-8x22b 1      9.1625

########## Second turn ##########
                              score
model               turn
wizard-zephyr-8x22b 2      8.873418

########## Average ##########
                        score
model
wizard-zephyr-8x22b  9.018868
```
The score is slightly lower than [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), but still higher than GPT-4-0314, so the research and experimental work still needs to continue ^^ | {"license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["alpindale/WizardLM-2-8x22B", "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1"]} | tlphams/Wizard-Zephyr-Orpo-8x22B | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:alpindale/WizardLM-2-8x22B",
"base_model:HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:24:44+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #mergekit #merge #conversational #base_model-alpindale/WizardLM-2-8x22B #base_model-HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Models Merged
The following models were included in the merge:
* alpindale/WizardLM-2-8x22B
* HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
## Benchmark results
### 1. MT-Bench from lmsys
We adapted the code from FastChat to benchmark our model with GPT-4 as a judge. Here is the result
The score is slightly lower than alpindale/WizardLM-2-8x22B, but still higher than GPT-4-0314, so the research and experimental work still needs to continue ^^ | [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Models Merged\n\nThe following models were included in the merge:\n* alpindale/WizardLM-2-8x22B\n* HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
"## Benchmark results",
"### 1. MT-Bench from lmsys\nWe adapted the code from FastChat to benchmark our model with GPT-4 as a judge. Here is the result\n\nThe score is slightly lower than alpindale/WizardLM-2-8x22B, but still higher than GPT-4-0314. Then the research and experimental work still need to continue ^^"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #mergekit #merge #conversational #base_model-alpindale/WizardLM-2-8x22B #base_model-HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Models Merged\n\nThe following models were included in the merge:\n* alpindale/WizardLM-2-8x22B\n* HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
"## Benchmark results",
"### 1. MT-Bench from lmsys\nWe adapted the code from FastChat to benchmark our model with GPT-4 as a judge. Here is the result\n\nThe score is slightly lower than alpindale/WizardLM-2-8x22B, but still higher than GPT-4-0314. Then the research and experimental work still need to continue ^^"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_v2_15
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr_v2_15", "results": []}]} | ssamperr/detr_v2_15 | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:24:50+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
|
# detr_v2_15
This model is a fine-tuned version of facebook/detr-resnet-50 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# detr_v2_15\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# detr_v2_15\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** akbargherbal
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | akbargherbal/think_tanks_v02_lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:25:25+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: akbargherbal
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:26:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3
This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Mistral-chem-v0.5 (mistral for chemistry)
The Mistral-chem-v0.5 Large Language Model (LLM) is a pretrained generative chemical molecule model with 52.11M parameters x 8 experts = 416.9M parameters.
It is derived from Mistral-7B-v0.1 model, which was simplified for chemistry: the number of layers and the hidden size were reduced.
The model was pretrained using around 100M molecule SMILES strings from the Zinc database.
For full details of this model please read our [github repo](https://github.com/raphaelmourad/Mistral-chem).
## Model Architecture
Like Mistral-7B-v0.1, it is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Load the model from huggingface:
```
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-chem-v0.5", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-chem-v0.5", trust_remote_code=True)
```
## Calculate the embedding of a molecule
```
chem = "CCCCC[C@H](Br)CC"
inputs = tokenizer(chem, return_tensors = 'pt')["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 256]
# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expect to be 256
```
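Since the card describes a generative molecule model, sampling new SMILES continuations should also be possible. The sketch below assumes the checkpoint can be loaded with a causal LM head via `AutoModelForCausalLM`, which this card does not confirm:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-chem-v0.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("RaphaelMourad/Mistral-chem-v0.5", trust_remote_code=True)

# Seed with a small carbon fragment and sample a continuation
inputs = tokenizer("CC", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=32, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```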
## Troubleshooting
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral-chem is a pretrained base model for chemistry.
## Contact
Raphaël Mourad. [email protected] | {"license": "apache-2.0", "tags": ["pretrained", "Mistral", "chemistry"]} | RaphaelMourad/mixtral-chem-v0.5 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"pretrained",
"Mistral",
"chemistry",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:27:44+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #pretrained #Mistral #chemistry #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Mistral-chem-v0.5 (mistral for chemistry)
The Mistral-chem-v0.5 Large Language Model (LLM) is a pretrained generative chemical molecule model with 52.11M parameters x 8 experts = 416.9M parameters.
It is derived from Mistral-7B-v0.1 model, which was simplified for chemistry: the number of layers and the hidden size were reduced.
The model was pretrained using around 100M molecule SMILES strings from the Zinc database.
For full details of this model please read our github repo.
## Model Architecture
Like Mistral-7B-v0.1, it is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Load the model from huggingface:
## Calculate the embedding of a molecule
## Troubleshooting
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral-chem is a pretrained base model for chemistry.
## Contact
Raphaël Mourad. URL@URL | [
"# Model Card for Mistral-chem-v0.5 (mistral for chemistry)\n\nThe Mistral-chem-v0.5 Large Language Model (LLM) is a pretrained generative chemical molecule model with 52.11M parameters x 8 experts = 416.9M parameters. \nIt is derived from Mistral-7B-v0.1 model, which was simplified for chemistry: the number of layers and the hidden size were reduced. \nThe model was pretrained using around 100M molecule SMILES strings from the Zinc database. \n\nFor full details of this model please read our github repo.",
"## Model Architecture\n\nLike Mistral-7B-v0.1, it is a transformer model, with the following architecture choices:\n- Grouped-Query Attention\n- Sliding-Window Attention\n- Byte-fallback BPE tokenizer",
"## Load the model from huggingface:",
"## Calculate the embedding of a DNA sequence",
"## Troubleshooting\n\nEnsure you are utilizing a stable version of Transformers, 4.34.0 or newer.",
"## Notice\n\nMistral-chem is a pretrained base model for chemistry.",
"## Contact\n \nRaphaël Mourad. URL@URL"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #pretrained #Mistral #chemistry #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Mistral-chem-v0.5 (mistral for chemistry)\n\nThe Mistral-chem-v0.5 Large Language Model (LLM) is a pretrained generative chemical molecule model with 52.11M parameters x 8 experts = 416.9M parameters. \nIt is derived from Mistral-7B-v0.1 model, which was simplified for chemistry: the number of layers and the hidden size were reduced. \nThe model was pretrained using around 100M molecule SMILES strings from the Zinc database. \n\nFor full details of this model please read our github repo.",
"## Model Architecture\n\nLike Mistral-7B-v0.1, it is a transformer model, with the following architecture choices:\n- Grouped-Query Attention\n- Sliding-Window Attention\n- Byte-fallback BPE tokenizer",
"## Load the model from huggingface:",
"## Calculate the embedding of a DNA sequence",
"## Troubleshooting\n\nEnsure you are utilizing a stable version of Transformers, 4.34.0 or newer.",
"## Notice\n\nMistral-chem is a pretrained base model for chemistry.",
"## Contact\n \nRaphaël Mourad. URL@URL"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/meraGPT/mera-mix-4x7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
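For multi-part quants of this style, the pieces are plain byte splits that can be concatenated in order. A minimal Python sketch follows; the part-file names are hypothetical, so use the actual file names from the repo:

```python
import glob
import shutil

# Hypothetical names: adjust the pattern to the actual files in the repo
parts = sorted(glob.glob("mera-mix-4x7B.Q8_0.gguf.part*"))
with open("mera-mix-4x7B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # append each part's bytes in order
```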
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q2_K.gguf) | Q2_K | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ3_XS.gguf) | IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ3_M.gguf) | IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q3_K_L.gguf) | Q3_K_L | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ4_XS.gguf) | IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q5_K_S.gguf) | Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q5_K_M.gguf) | Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q6_K.gguf) | Q6_K | 19.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "meraGPT/mera-mix-4x7B", "quantized_by": "mradermacher"} | mradermacher/mera-mix-4x7B-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:meraGPT/mera-mix-4x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:28:35+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-meraGPT/mera-mix-4x7B #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-meraGPT/mera-mix-4x7B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | null | What are Crystalin tablets?
Crystalin Precio is a premium dietary supplement capsule, meticulously crafted to provide comprehensive support for eye health. Its advanced formula contains a synergistic blend of vitamins, minerals, and antioxidants specifically chosen to nourish the eyes and protect them against oxidative stress.
Official website:<a href="https://www.nutritionsee.com/Crystaseucdor">www.Crystalin.com</a>
<p><a href="https://www.nutritionsee.com/Crystaseucdor"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Crystalin-Ecuador-1.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/Crystaseucdor">Buy now!! Click the link below for more information and get a 50% discount now... Hurry!</a>
Official website:<a href="https://www.nutritionsee.com/Crystaseucdor">www.Crystalin.com</a> | {"license": "apache-2.0"} | CrystalinEcuador/Crystalin | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T07:29:40+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| What are Crystalin tablets?
Crystalin Precio is a premium dietary supplement capsule, meticulously crafted to provide comprehensive support for eye health. Its advanced formula contains a synergistic blend of vitamins, minerals, and antioxidants specifically chosen to nourish the eyes and protect them against oxidative stress.
Official website:<a href="URL
<p><a href="URL <img src="URL alt="enter image description here"> </a></p>
<a href="URL Buy now!! Click the link below for more information and get a 50% discount now... Hurry!</a>
Official website:<a href="URL | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-G3
This model is a fine-tuned version of [ChakuChidiya/distilbert-base-uncased-G2](https://huggingface.co/ChakuChidiya/distilbert-base-uncased-G2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2192
- Validation Loss: 0.3240
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1920, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07}
- training_precision: float32
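The serialized optimizer above (AdamWeightDecay plus a linear PolynomialDecay over 1920 steps with weight decay 0.07) matches the shape produced by the Transformers `create_optimizer` helper; the sketch below is written under that assumption:

```python
from transformers import create_optimizer

# Sketch: values mirror the serialized config above; the original
# training script is not shown in this card.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=1920,
    num_warmup_steps=0,
    weight_decay_rate=0.07,
)
```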
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3628 | 0.3204 | 0 |
| 0.2708 | 0.3328 | 1 |
| 0.2192 | 0.3240 | 2 |
### Framework versions
- Transformers 4.37.0
- TensorFlow 2.15.0
- Datasets 2.14.5
- Tokenizers 0.15.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "ChakuChidiya/distilbert-base-uncased-G2", "model-index": [{"name": "distilbert-base-uncased-G3", "results": []}]} | ChakuChidiya/distilbert-base-uncased-G3 | null | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:ChakuChidiya/distilbert-base-uncased-G2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:31:05+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #token-classification #generated_from_keras_callback #base_model-ChakuChidiya/distilbert-base-uncased-G2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-G3
==========================
This model is a fine-tuned version of ChakuChidiya/distilbert-base-uncased-G2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.2192
* Validation Loss: 0.3240
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 1920, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.07}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.37.0
* TensorFlow 2.15.0
* Datasets 2.14.5
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 1920, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.07}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.0\n* TensorFlow 2.15.0\n* Datasets 2.14.5\n* Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #tf #distilbert #token-classification #generated_from_keras_callback #base_model-ChakuChidiya/distilbert-base-uncased-G2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 1920, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.07}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.0\n* TensorFlow 2.15.0\n* Datasets 2.14.5\n* Tokenizers 0.15.1"
] |
text2text-generation | transformers |
# PLLaVA Model Card
## Model details
**Model type:**
PLLaVA-7B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-7b-hf
**Model date:**
PLLaVA-7B was trained in April 2024.
**Paper or resources for more information:**
- github repo: https://github.com/magic-research/PLLaVA
- project page: https://pllava.github.io/
- paper link: https://arxiv.org/abs/2404.16994
## License
llava-hf/llava-v1.6-vicuna-7b-hf license.
**Where to send questions or comments about the model:**
https://github.com/magic-research/PLLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of PLLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
Video-Instruct-Tuning data of OpenGVLab/VideoChat2-IT
## Evaluation dataset
A collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmark specifically proposed for Video-LMMs.
| {"license": "apache-2.0", "tags": ["video LLM"], "datasets": ["OpenGVLab/VideoChat2-IT"]} | ermu2001/pllava-7b | null | [
"transformers",
"safetensors",
"llava",
"text2text-generation",
"video LLM",
"dataset:OpenGVLab/VideoChat2-IT",
"arxiv:2404.16994",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-04-24T07:31:24+00:00 | [
"2404.16994"
] | [] | TAGS
#transformers #safetensors #llava #text2text-generation #video LLM #dataset-OpenGVLab/VideoChat2-IT #arxiv-2404.16994 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us #has_space
|
# PLLaVA Model Card
## Model details
Model type:
PLLaVA-7B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-7b-hf
Model date:
PLLaVA-7B was trained in April 2024.
Paper or resources for more information:
- github repo: URL
- project page: URL
- paper link: URL
## License
llava-hf/llava-v1.6-vicuna-7b-hf license.
Where to send questions or comments about the model:
URL
## Intended use
Primary intended uses:
The primary use of PLLaVA is research on large multimodal models and chatbots.
Primary intended users:
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
Video-Instruct-Tuning data of OpenGVLab/VideoChat2-IT
## Evaluation dataset
A collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmark specifically proposed for Video-LMMs.
| [
"# PLLaVA Model Card",
"## Model details\nModel type: \nPLLaVA-7B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-7b-hf\n\nModel date:\nPLLaVA-7B was trained in April 2024.\n\nPaper or resources for more information:\n- github repo: URL\n- project page: URL\n- paper link: URL",
"## License\nllava-hf/llava-v1.6-vicuna-7b-hf license.\n\nWhere to send questions or comments about the model:\nURL",
"## Intended use\nPrimary intended uses:\nThe primary use of PLLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## Training dataset\nVideo-Instruct-Tuning data of OpenGVLab/VideoChat2-IT",
"## Evaluation dataset\nA collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmarks specifically proposed for Video-LMMs."
] | [
"TAGS\n#transformers #safetensors #llava #text2text-generation #video LLM #dataset-OpenGVLab/VideoChat2-IT #arxiv-2404.16994 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us #has_space \n",
"# PLLaVA Model Card",
"## Model details\nModel type: \nPLLaVA-7B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-7b-hf\n\nModel date:\nPLLaVA-7B was trained in April 2024.\n\nPaper or resources for more information:\n- github repo: URL\n- project page: URL\n- paper link: URL",
"## License\nllava-hf/llava-v1.6-vicuna-7b-hf license.\n\nWhere to send questions or comments about the model:\nURL",
"## Intended use\nPrimary intended uses:\nThe primary use of PLLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## Training dataset\nVideo-Instruct-Tuning data of OpenGVLab/VideoChat2-IT",
"## Evaluation dataset\nA collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmarks specifically proposed for Video-LMMs."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
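
The card leaves this section as a placeholder. Below is a minimal starter, assuming from the repo name and the `vision-encoder-decoder` tag that this is a Donut-style receipt parser; that framing, the image path, and the `<s_cord-v2>` task prompt are assumptions, not facts from the card:

```python
# Hedged sketch only; this checkpoint may use a different task prompt.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("josianem/adareceipts-donut-model-cordv2")
model = VisionEncoderDecoderModel.from_pretrained("josianem/adareceipts-donut-model-cordv2")

image = Image.open("receipt.png")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values
# "<s_cord-v2>" is the usual CORD-v2 prompt; whether this checkpoint keeps it is assumed.
task_prompt = processor.tokenizer("<s_cord-v2>", add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=task_prompt, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```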
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | josianem/adareceipts-donut-model-cordv2 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:31:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** Anpur-Phani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"} | Anpur-Phani/gemma_lora_model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:33:43+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Anpur-Phani
- License: apache-2.0
- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Anpur-Phani\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Anpur-Phani\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
visual-question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vilt_finetuned_200
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset.
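
Since the card provides no usage example, a hedged inference sketch follows; the image path and question are placeholders, and the processor is assumed to have been saved with this checkpoint (otherwise load it from the base model):

```python
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("yeongha/vilt_finetuned_200")
model = ViltForQuestionAnswering.from_pretrained("yeongha/vilt_finetuned_200")

image = Image.open("example.jpg")          # placeholder image
question = "What is shown in the image?"   # placeholder question
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```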
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
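
These settings map to roughly the following `TrainingArguments`; this is a reconstruction, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vilt_finetuned_200",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```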
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["vqa"], "base_model": "dandelin/vilt-b32-mlm", "model-index": [{"name": "vilt_finetuned_200", "results": []}]} | yeongha/vilt_finetuned_200 | null | [
"transformers",
"tensorboard",
"safetensors",
"vilt",
"visual-question-answering",
"generated_from_trainer",
"dataset:vqa",
"base_model:dandelin/vilt-b32-mlm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:36:16+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vilt #visual-question-answering #generated_from_trainer #dataset-vqa #base_model-dandelin/vilt-b32-mlm #license-apache-2.0 #endpoints_compatible #region-us
|
# vilt_finetuned_200
This model is a fine-tuned version of dandelin/vilt-b32-mlm on the vqa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# vilt_finetuned_200\n\nThis model is a fine-tuned version of dandelin/vilt-b32-mlm on the vqa dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vilt #visual-question-answering #generated_from_trainer #dataset-vqa #base_model-dandelin/vilt-b32-mlm #license-apache-2.0 #endpoints_compatible #region-us \n",
"# vilt_finetuned_200\n\nThis model is a fine-tuned version of dandelin/vilt-b32-mlm on the vqa dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | null | ---
license: apache-2.0
JALKJGLAJGLJAPGJLAKJEDG | {"language": ["aa"]} | xumeng/888 | null | [
"aa",
"region:us"
] | null | 2024-04-24T07:36:21+00:00 | [] | [
"aa"
] | TAGS
#aa #region-us
| ---
license: apache-2.0
JALKJGLAJGLJAPGJLAKJEDG | [] | [
"TAGS\n#aa #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on an unknown dataset.
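
The card gives no loading code. A hedged sketch using PEFT's auto classes is below; note that BiLLM-style checkpoints may require the project's own loading code, so treat this as an approximation:

```python
# Assumes a standard PEFT token-classification head; label names
# and num_labels are not given by the card.
from peft import AutoPeftModelForTokenClassification
from transformers import AutoTokenizer

model = AutoPeftModelForTokenClassification.from_pretrained(
    "Farjfar/billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
```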
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-hf", "model-index": [{"name": "billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt", "results": []}]} | Farjfar/billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-04-24T07:37:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us
|
# billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt
This model is a fine-tuned version of NousResearch/Llama-2-7b-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0009\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 8",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us \n",
"# billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0009\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 8",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
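
A generic starter inferred only from the repo tags (`stablelm`, `text-generation`, `conversational`); nothing in the card confirms the exact usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("heyllm234/sc73")
model = AutoModelForCausalLM.from_pretrained("heyllm234/sc73")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")  # placeholder prompt
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```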
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | heyllm234/sc73 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:38:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ppi_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4819
- Accuracy: 0.9333
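
A minimal inference sketch (not from the card); the example sentence is a placeholder and the label names are unknown:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lamiaaMB/ppi_model")
print(classifier("Protein A binds directly to protein B."))  # placeholder sentence
```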
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5692 | 1.0 | 53424 | 0.4819 | 0.9333 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "ppi_model", "results": []}]} | lamiaaMB/ppi_model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:39:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ppi\_model
==========
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4819
* Accuracy: 0.9333
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.39.0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - happynear/sdxl-pokemon-model
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **reach-vb/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a cute Sundar Pichai creature:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# The card left this snippet as a TODO; the lines below are a
# minimal hedged sketch using the standard diffusers API.
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "happynear/sdxl-pokemon-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a cute Sundar Pichai creature").images[0]
image.save("pokemon.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "inference": true} | happynear/sdxl-pokemon-model | null | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-04-24T07:39:48+00:00 | [] | [] | TAGS
#diffusers #safetensors #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #diffusers-training #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# Text-to-image finetuning - happynear/sdxl-pokemon-model
This pipeline was finetuned from stabilityai/stable-diffusion-xl-base-1.0 on the reach-vb/pokemon-blip-captions dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a cute Sundar Pichai creature:
!img_0
!img_1
!img_2
!img_3
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# Text-to-image finetuning - happynear/sdxl-pokemon-model\n\nThis pipeline was finetuned from stabilityai/stable-diffusion-xl-base-1.0 on the reach-vb/pokemon-blip-captions dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a cute Sundar Pichai creature: \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #safetensors #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #diffusers-training #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# Text-to-image finetuning - happynear/sdxl-pokemon-model\n\nThis pipeline was finetuned from stabilityai/stable-diffusion-xl-base-1.0 on the reach-vb/pokemon-blip-captions dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a cute Sundar Pichai creature: \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
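
A generic starter inferred from the repo tags (`deberta-v2`, `text-classification`); the input text is a placeholder and the label mapping is unknown:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kangXn/engu-st-mde")
model = AutoModelForSequenceClassification.from_pretrained("kangXn/engu-st-mde")

inputs = tokenizer("Example input text.", return_tensors="pt")  # placeholder
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```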
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kangXn/engu-st-mde | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:39:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF
This model was converted to GGUF format from [`Aratako/Antler-7B-Novel-Writing`](https://huggingface.co/Aratako/Antler-7B-Novel-Writing) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Aratako/Antler-7B-Novel-Writing) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF --model antler-7b-novel-writing.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF --model antler-7b-novel-writing.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m antler-7b-novel-writing.Q6_K.gguf -n 128
```
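If you prefer calling the model from Python rather than the CLI, the `llama-cpp-python` bindings can load the same file. This is a minimal sketch, assuming the package is installed (`pip install llama-cpp-python`) and the GGUF file has already been downloaded from this repo:
```python
from llama_cpp import Llama

# Load the local GGUF file; n_ctx mirrors the -c 2048 used in the server example
llm = Llama(model_path="antler-7b-novel-writing.Q6_K.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```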
| {"language": ["ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"], "datasets": ["Aratako/Syosetu711K-Cleaned-158K-Instruct"], "base_model": ["Elizezen/Antler-7B"]} | DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"ja",
"dataset:Aratako/Syosetu711K-Cleaned-158K-Instruct",
"base_model:Elizezen/Antler-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:40:01+00:00 | [] | [
"ja"
] | TAGS
#transformers #gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #ja #dataset-Aratako/Syosetu711K-Cleaned-158K-Instruct #base_model-Elizezen/Antler-7B #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF
This model was converted to GGUF format from 'Aratako/Antler-7B-Novel-Writing' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF\nThis model was converted to GGUF format from 'Aratako/Antler-7B-Novel-Writing' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #ja #dataset-Aratako/Syosetu711K-Cleaned-158K-Instruct #base_model-Elizezen/Antler-7B #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF\nThis model was converted to GGUF format from 'Aratako/Antler-7B-Novel-Writing' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Uploaded model
- **Developed by:** srikar-v05
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
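For inference, the checkpoint can be loaded back through Unsloth's `FastLanguageModel`, which handles both merged weights and LoRA adapters. A minimal sketch, assuming a CUDA GPU and that this repo is compatible with 4-bit loading:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="srikar-v05/llama3-ChatDoctor",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("What are common symptoms of the flu?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```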
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | srikar-v05/llama3-ChatDoctor | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:40:48+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: srikar-v05
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: srikar-v05\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: srikar-v05\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
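The following is a sketch of how these values map onto `transformers.TrainingArguments`; the output directory is a hypothetical placeholder, and only the values listed above come from the actual run:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pythia-31m-imdb",  # hypothetical path, not from the run
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```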
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1", "results": []}]} | AlignmentResearch/robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:42:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1
This model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vietnamese-news-summarization-vistral-7b
This model is a fine-tuned version of [Viet-Mistral/Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8576
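Since this repository holds a PEFT adapter rather than full weights, inference loads the adapter on top of the base model. A minimal sketch, assuming the adapter is in the standard PEFT format and that the prompt shown is purely illustrative:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "anhvu2501/vietnamese-news-summarization-vistral-7b"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Viet-Mistral/Vistral-7B-Chat")

prompt = "Tóm tắt bài báo sau: ..."  # hypothetical summarization prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```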
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0431 | 0.0060 | 20 | 2.0914 |
| 2.0513 | 0.0119 | 40 | 2.0405 |
| 2.0366 | 0.0179 | 60 | 1.9899 |
| 1.946 | 0.0238 | 80 | 1.9301 |
| 1.9324 | 0.0298 | 100 | 1.8576 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.19.1 | {"license": "afl-3.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "Viet-Mistral/Vistral-7B-Chat", "model-index": [{"name": "vietnamese-news-summarization-vistral-7b", "results": []}]} | anhvu2501/vietnamese-news-summarization-vistral-7b | null | [
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:Viet-Mistral/Vistral-7B-Chat",
"license:afl-3.0",
"region:us"
] | null | 2024-04-24T07:43:20+00:00 | [] | [] | TAGS
#peft #safetensors #mistral #trl #sft #generated_from_trainer #dataset-generator #base_model-Viet-Mistral/Vistral-7B-Chat #license-afl-3.0 #region-us
| vietnamese-news-summarization-vistral-7b
========================================
This model is a fine-tuned version of Viet-Mistral/Vistral-7B-Chat on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8576
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-06
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 100
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.1.2
* Datasets 2.16.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.16.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #mistral #trl #sft #generated_from_trainer #dataset-generator #base_model-Viet-Mistral/Vistral-7B-Chat #license-afl-3.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.16.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** aidiary
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | aidiary/llama3-8b-alpaca-finetuned | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:43:22+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: aidiary
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: aidiary\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: aidiary\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Uploaded model
- **Developed by:** akbargherbal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | akbargherbal/think_tanks_v02_gguf | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:44:06+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: akbargherbal
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
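Donut is a vision-encoder-decoder model, so inference pairs a `DonutProcessor` with a `VisionEncoderDecoderModel`. A rough sketch follows; the input image and the decoder task prompt are assumptions, since the training data is not documented:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # hypothetical input scan
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"  # assumed decoder start prompt; the real one is undocumented
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```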
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs", "results": []}]} | tedad09/PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:44:19+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF
This model was converted to GGUF format from [`nbeerbower/llama-3-dragonmaid-8B`](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF --model llama-3-dragonmaid-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF --model llama-3-dragonmaid-8b.Q8_0.gguf -c 2048
```
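Once `llama-server` is running, it exposes an HTTP API. A minimal sketch of querying it from Python, assuming the default listen address `http://localhost:8080`:
```python
import requests

# POST to the llama.cpp server's /completion endpoint (default port 8080)
resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "The meaning to life and the universe is", "n_predict": 64},
)
print(resp.json()["content"])
```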
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-dragonmaid-8b.Q8_0.gguf -n 128
```
| {"license": "other", "library_name": "transformers", "tags": ["nsfw", "not-for-all-audiences", "experimental", "llama-cpp", "gguf-my-repo"], "datasets": ["ResplendentAI/NSFW_RP_Format_NoQuote"], "base_model": ["nbeerbower/llama-3-sauce-v1-8B"], "license_name": "llama3"} | DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF | null | [
"transformers",
"gguf",
"nsfw",
"not-for-all-audiences",
"experimental",
"llama-cpp",
"gguf-my-repo",
"dataset:ResplendentAI/NSFW_RP_Format_NoQuote",
"base_model:nbeerbower/llama-3-sauce-v1-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:44:31+00:00 | [] | [] | TAGS
#transformers #gguf #nsfw #not-for-all-audiences #experimental #llama-cpp #gguf-my-repo #dataset-ResplendentAI/NSFW_RP_Format_NoQuote #base_model-nbeerbower/llama-3-sauce-v1-8B #license-other #endpoints_compatible #region-us
|
# DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF
This model was converted to GGUF format from 'nbeerbower/llama-3-dragonmaid-8B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'nbeerbower/llama-3-dragonmaid-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #nsfw #not-for-all-audiences #experimental #llama-cpp #gguf-my-repo #dataset-ResplendentAI/NSFW_RP_Format_NoQuote #base_model-nbeerbower/llama-3-sauce-v1-8B #license-other #endpoints_compatible #region-us \n",
"# DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'nbeerbower/llama-3-dragonmaid-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0", "results": []}]} | AlignmentResearch/robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:44:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0
This model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
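Since the card leaves this section unfilled, the following is only a hedged sketch of loading the checkpoint with `transformers`; the repo id comes from this page, while the prompt and generation settings are illustrative assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chohi/llama-3-8b-chat-molit-kor"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("안녕하세요", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```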
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | chohi/llama-3-8b-chat-molit-kor | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T07:46:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Q6_K GGUF of https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B | {} | JayhC/ChaoticSoliloquy-4x8B-GGUF-Q6_K | null | [
"gguf",
"region:us"
] | null | 2024-04-24T07:47:17+00:00 | [] | [] | TAGS
#gguf #region-us
| Q6_K gguf of URL | [] | [
"TAGS\n#gguf #region-us \n"
] |
text-generation | transformers | # GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| {"license": "apache-2.0"} | GreenBitAI/Phi-3-mini-4k-instruct-layer-mix-bpw-3.0 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:47:32+00:00 | [] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation | transformers | # GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| {"license": "apache-2.0"} | GreenBitAI/Phi-3-mini-4k-instruct-layer-mix-bpw-2.5 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:47:45+00:00 | [] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation | transformers | # GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| {"license": "apache-2.0"} | GreenBitAI/Phi-3-mini-4k-instruct-layer-mix-bpw-2.2 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:47:56+00:00 | [] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation | transformers | # GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| {"license": "apache-2.0"} | GreenBitAI/Phi-3-mini-128k-instruct-layer-mix-bpw-2.2 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:48:04+00:00 | [] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation | transformers | # GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| {"license": "apache-2.0"} | GreenBitAI/Phi-3-mini-128k-instruct-layer-mix-bpw-2.5 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:48:13+00:00 | [] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation | transformers | # GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| {"license": "apache-2.0"} | GreenBitAI/Phi-3-mini-128k-instruct-layer-mix-bpw-3.0 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:48:26+00:00 | [] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation | transformers | # GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| {"license": "apache-2.0"} | GreenBitAI/Phi-3-mini-128k-instruct-layer-mix-bpw-4.0 | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T07:48:34+00:00 | [] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation | transformers |
## Introduction
Who am I: Qishen Ha [[Kaggle](https://www.kaggle.com/haqishen)] [[X](https://twitter.com/KeishinKoh)] [[LinkedIn](https://www.linkedin.com/in/haqishen/)]
This is a `meta-llama/Meta-Llama-3-8B-Instruct` model fine-tuned on a **Japanese** conversation dataset.
Dataset: [japanese_hh-rlhf-49k](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
Training framework: [h2o-llmstudio](https://github.com/h2oai/h2o-llmstudio)
Training max context length: 8k
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure the library is installed.
```bash
pip install transformers==4.38.2
```
Also make sure to provide your Hugging Face token to the pipeline if the model is hosted in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="haqishen/h2o-Llama-3-8B-Japanese-Instruct",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# the generation configuration can be adjusted to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
messages = [
{"role": "system", "content": "あなたは、常に海賊の言葉で返事する海賊チャットボットです!"},
{"role": "user", "content": "自己紹介してください"},
]
res = generate_text(
messages,
renormalize_logits=True
)
print(res[0]["generated_text"][-1]['content'])
```
You can print a sample prompt after applying the chat template to see how it is fed to the tokenizer:
```python
print(generate_text.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
))
```
You may also construct the pipeline yourself from the loaded model and tokenizer, handling the preprocessing steps explicitly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "haqishen/h2o-Llama-3-8B-Japanese-Instruct" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
messages = [
{"role": "system", "content": "あなたは、常に海賊の言葉で返事する海賊チャットボットです!"},
{"role": "user", "content": "自己紹介してください"},
]
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
# generation configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to("cuda")
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
### Use with vllm
[vllm-project/vllm](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
model_id = "haqishen/h2o-Llama-3-8B-Japanese-Instruct"
llm = LLM(
model=model_id,
trust_remote_code=True,
tensor_parallel_size=2,
)
tokenizer = llm.get_tokenizer()
messages = [
{"role": "system", "content": "あなたは、常に海賊の言葉で返事する海賊チャットボットです!"},
{"role": "user", "content": "自己紹介してください"},
]
conversations = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate(
[conversations],
SamplingParams(
temperature=0.6,
top_p=0.9,
max_tokens=1024,
stop_token_ids=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
)
)
print(outputs[0].outputs[0].text.strip())
```
## Quantization and sharding
You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is possible by setting `device_map="auto"`.
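As a minimal sketch (assuming `bitsandbytes` and `accelerate` are installed — neither is listed in this card's requirements), 4-bit loading with automatic sharding might look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "haqishen/h2o-Llama-3-8B-Japanese-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# load_in_4bit quantizes the weights on the fly via bitsandbytes;
# device_map="auto" lets accelerate shard the layers across all visible GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    device_map="auto",
)
```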
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(128256, 4096, padding_idx=128001)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=128256, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. | {"language": ["en", "ja"], "license": "llama3", "library_name": "transformers", "tags": ["gpt", "llm", "large language model", "h2o-llmstudio"], "datasets": ["fujiki/japanese_hh-rlhf-49k"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "inference": false, "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico", "pipeline_tag": "text-generation"} | haqishen/h2o-Llama-3-8B-Japanese-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"ja",
"dataset:fujiki/japanese_hh-rlhf-49k",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:48:45+00:00 | [] | [
"en",
"ja"
] | TAGS
#transformers #safetensors #llama #text-generation #gpt #llm #large language model #h2o-llmstudio #conversational #en #ja #dataset-fujiki/japanese_hh-rlhf-49k #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #text-generation-inference #region-us
|
## Introduction
Who am I: Qishen Ha [Kaggle] [X] [LinkedIn]
This is a 'meta-llama/Meta-Llama-3-8B-Instruct' model that finetuned on Japanese conversation dataset.
Dataset: japanese_hh-rlhf-49k
Training framework: h2o-llmstudio
Training max context length: 8k
## Usage
To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.
Also make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.
- Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running
- Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'
You can print a sample prompt after applying chat template to see how it is feed to the tokenizer:
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
### Use with vllm
vllm-project/vllm
## Quantization and sharding
You can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .
## Model Architecture
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. | [
"## Introduction\n\nWho am I: Qishen Ha [Kaggle] [X] [LinkedIn]\n\nThis is a 'meta-llama/Meta-Llama-3-8B-Instruct' model that finetuned on Japanese conversation dataset.\n\nDataset: japanese_hh-rlhf-49k\n\nTraining framework: h2o-llmstudio\n\nTraining max context length: 8k",
"## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.\n\n\n\nAlso make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.\n\n- Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running\n\n\n\n- Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'\n\n\n\nYou can print a sample prompt after applying chat template to see how it is feed to the tokenizer:\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:",
"### Use with vllm\n\nvllm-project/vllm",
"## Quantization and sharding\n\nYou can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .",
"## Model Architecture",
"## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.",
"## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #gpt #llm #large language model #h2o-llmstudio #conversational #en #ja #dataset-fujiki/japanese_hh-rlhf-49k #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #text-generation-inference #region-us \n",
"## Introduction\n\nWho am I: Qishen Ha [Kaggle] [X] [LinkedIn]\n\nThis is a 'meta-llama/Meta-Llama-3-8B-Instruct' model that finetuned on Japanese conversation dataset.\n\nDataset: japanese_hh-rlhf-49k\n\nTraining framework: h2o-llmstudio\n\nTraining max context length: 8k",
"## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.\n\n\n\nAlso make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.\n\n- Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running\n\n\n\n- Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'\n\n\n\nYou can print a sample prompt after applying chat template to see how it is feed to the tokenizer:\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:",
"### Use with vllm\n\nvllm-project/vllm",
"## Quantization and sharding\n\nYou can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .",
"## Model Architecture",
"## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.",
"## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it."
] |
null | null |
# DavidAU/Diamond-Arconte-13B-Q6_K-GGUF
This model was converted to GGUF format from [`ThenMagician/Diamond-Arconte-13B`](https://huggingface.co/ThenMagician/Diamond-Arconte-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ThenMagician/Diamond-Arconte-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Diamond-Arconte-13B-Q6_K-GGUF --model diamond-arconte-13b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Diamond-Arconte-13B-Q6_K-GGUF --model diamond-arconte-13b.Q6_K.gguf -c 2048
```
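The checkpoint can also be driven from Python through the `llama-cpp-python` bindings — a hedged sketch not covered by the original card, assuming the GGUF file has already been downloaded and `pip install llama-cpp-python` has been run:
```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file with a 2048-token context window
llm = Llama(model_path="diamond-arconte-13b.Q6_K.gguf", n_ctx=2048)

# Plain text completion, mirroring the CLI example above
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```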
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m diamond-arconte-13b.Q6_K.gguf -n 128
```
| {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "roleplay", "merge", "nsfw", "llama-cpp", "gguf-my-repo"]} | DavidAU/Diamond-Arconte-13B-Q6_K-GGUF | null | [
"gguf",
"not-for-all-audiences",
"roleplay",
"merge",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-24T07:48:48+00:00 | [] | [] | TAGS
#gguf #not-for-all-audiences #roleplay #merge #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Diamond-Arconte-13B-Q6_K-GGUF
This model was converted to GGUF format from 'ThenMagician/Diamond-Arconte-13B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Diamond-Arconte-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ThenMagician/Diamond-Arconte-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #roleplay #merge #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Diamond-Arconte-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ThenMagician/Diamond-Arconte-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | peft |
Based on the tutorial at **https://www.datacamp.com/tutorial/fine-tuning-google-gemma** | {"library_name": "peft", "datasets": ["hieunguyenminh/roleplay"], "pipeline_tag": "text-generation"} | YiKai03/gemma-7b-v2-role-play | null | [
"peft",
"safetensors",
"text-generation",
"dataset:hieunguyenminh/roleplay",
"region:us"
] | null | 2024-04-24T07:49:19+00:00 | [] | [] | TAGS
#peft #safetensors #text-generation #dataset-hieunguyenminh/roleplay #region-us
|
References from URL | [] | [
"TAGS\n#peft #safetensors #text-generation #dataset-hieunguyenminh/roleplay #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
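Until the author fills this section in, here is a purely hypothetical placeholder — a generic `transformers` sketch that assumes this repo is a standard causal-LM checkpoint (nothing below is confirmed by the card):
```python
from transformers import pipeline

# Hypothetical usage: the repo id comes from this card's metadata;
# everything else is a plain transformers text-generation call.
generator = pipeline("text-generation", model="superiort/kullm3_100QA_10epochs", device_map="auto")
print(generator("안녕하세요!", max_new_tokens=64)[0]["generated_text"])
```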
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | superiort/kullm3_100QA_10epochs | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:49:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and for more information.
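The authoritative loading code lives in the linked repo; purely as an illustration, a standard `transformers` attempt might look like the sketch below — note it is an assumption that this low-bit checkpoint loads through the stock API at all, since the custom kernels from the GitHub page may be required:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2"

tokenizer = AutoTokenizer.from_pretrained(repo)
# trust_remote_code permits any custom low-bit modeling code shipped with the repo (assumption)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="auto")
```
Benchmark accuracies for the layer-mix family: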
| **Repository (Llama 3 Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------------|:------------:|:----------:|:---------:|:-----------:|:-----------:|:---------:|:--------:|:---------:|:--------:|:-----------:|:-----------:|:-----------:|:-------:|
| `Llama-3-8B-layer-mix-bpw-2.2` | 0.499 | 0.302 | 0.739 | 0.674 | 0.509 | 0.396 | 0.725 | 0.743 | 0.406 | 0.327 | 0.337 | 0.340 | 0.500 |
| `Llama-3-8B-layer-mix-bpw-2.5` | 0.506 | 0.298 | 0.760 | 0.684 | 0.513 | 0.418 | 0.744 | 0.756 | 0.389 | 0.335 | 0.335 | 0.335 | 0.509 |
| `Llama-3-8B-layer-mix-bpw-3.0` | 0.523 | 0.318 | 0.770 | 0.708 | 0.540 | 0.441 | 0.767 | 0.784 | 0.407 | 0.333 | 0.345 | 0.343 | 0.526 |
| `Llama-3-8B-layer-mix-bpw-4.0` | 0.542 | 0.338 | 0.791 | 0.729 | 0.591 | 0.484 | 0.797 | 0.799 | 0.398 | 0.337 | 0.345 | 0.352 | 0.545 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.2` | 0.514 | 0.292 | 0.645 | 0.672 | 0.499 | 0.367 | 0.698 | 0.775 | 0.423 | 0.417 | 0.424 | 0.398 | 0.565 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.5` | 0.528 | 0.304 | 0.741 | 0.681 | 0.512 | 0.412 | 0.749 | 0.798 | 0.425 | 0.417 | 0.410 | 0.390 | 0.498 |
| `Llama-3-8B-instruct-layer-mix-bpw-3.0` | 0.547 | 0.316 | 0.787 | 0.690 | 0.530 | 0.459 | 0.768 | 0.800 | 0.437 | 0.435 | 0.417 | 0.387 | 0.548 |
| `Llama-3-8B-instruct-layer-mix-bpw-4.0` | 0.576 | 0.344 | 0.808 | 0.716 | 0.569 | 0.513 | 0.778 | 0.825 | 0.449 | 0.462 | 0.449 | 0.432 | 0.578 | | {"license": "apache-2.0"} | GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:50:49+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GreenBit LLMs
=============
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |